mirror of
https://github.com/gadievron/raptor.git
synced 2026-04-24 21:46:00 +03:00
[PR #48] [MERGED] Add defensive handling for LLM response field validation #52
📋 Pull Request Information
Original PR: https://github.com/gadievron/raptor/pull/48
Author: @gadievron
Created: 12/22/2025
Status: ✅ Merged
Merged: 12/26/2025
Merged by: @danielcuthbert
Base: main ← Head: fix/bug-42-sanitizer-attributeerror
📝 Commits (1)
5e3b21c Add defensive handling for LLM response field validation
📊 Changes
1 file changed (+13 additions, -1 deletion)
📝 packages/codeql/autonomous_analyzer.py (+13 -1)
📄 Description
Summary
Adds robust error handling for variable LLM response structures to prevent AttributeError when unexpected fields are returned during autonomous vulnerability assessment.
Problem
The LLM can return various response structures depending on the Instructor configuration. When the response includes unexpected fields such as sanitizers that are not part of the VulnerabilityAnalysis schema, creating the dataclass can fail or lead to AttributeErrors.
Root Cause
The Instructor library can return different response types depending on its configuration, while the current code assumes response_dict exactly matches the VulnerabilityAnalysis schema.
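To make the failure mode concrete, here is a minimal sketch; the VulnerabilityAnalysis fields shown are illustrative assumptions, since the real schema lives in packages/codeql/autonomous_analyzer.py. Unpacking a response dict that carries an undeclared key straight into the dataclass constructor fails:

```python
from dataclasses import dataclass

# Hypothetical minimal stand-in for the real VulnerabilityAnalysis
# schema; the actual field list is defined in autonomous_analyzer.py.
@dataclass
class VulnerabilityAnalysis:
    is_vulnerable: bool
    explanation: str

# An Instructor/LLM response carrying an extra field the schema
# does not declare.
response_dict = {
    "is_vulnerable": True,
    "explanation": "user input reaches os.system",
    "sanitizers": ["shlex.quote"],  # unexpected field
}

try:
    VulnerabilityAnalysis(**response_dict)
except TypeError as exc:
    # Construction fails: 'sanitizers' is not a declared field.
    print(f"construction failed: {exc}")
```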
Changes
File: packages/codeql/autonomous_analyzer.py (lines 290-302)
Added defensive field filtering:
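The PR does not reproduce the diff here, but defensive field filtering along these lines would address the problem. This is a minimal sketch, assuming a dataclass-based VulnerabilityAnalysis with hypothetical field names and a hypothetical build_analysis helper (the real code is in autonomous_analyzer.py):

```python
from dataclasses import dataclass, fields
from typing import Any

# Hypothetical minimal stand-in for the real VulnerabilityAnalysis schema.
@dataclass
class VulnerabilityAnalysis:
    is_vulnerable: bool
    explanation: str

def build_analysis(response_dict: dict[str, Any]) -> VulnerabilityAnalysis:
    """Construct VulnerabilityAnalysis, dropping any keys the schema
    does not declare (e.g. an unexpected 'sanitizers' field)."""
    declared = {f.name for f in fields(VulnerabilityAnalysis)}
    filtered = {k: v for k, v in response_dict.items() if k in declared}
    return VulnerabilityAnalysis(**filtered)

# The extra 'sanitizers' key is silently filtered out instead of
# breaking dataclass construction.
analysis = build_analysis({
    "is_vulnerable": True,
    "explanation": "user input reaches os.system",
    "sanitizers": ["shlex.quote"],
})
```

Filtering against dataclasses.fields() keeps the allow-list in sync with the schema automatically, so new schema fields need no changes to the filter.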
Why This Fix is Correct
Defensive Programming ✅
Design Philosophy ✅
Aligns with RAPTOR's "defense-in-depth" approach.
Type of Change
Impact
Fixes #42
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.