mirror of
https://github.com/gadievron/raptor.git
synced 2026-04-25 05:56:00 +03:00
[GH-ISSUE #42] Add defensive handling for LLM sanitizer responses #9
Originally created by @gadievron on GitHub (Dec 22, 2025).
Original GitHub issue: https://github.com/gadievron/raptor/issues/42
Problem
Lines 488-500 in packages/codeql/agent.py assume the analysis.sanitizers attribute exists, but the LLM can return various response types depending on Instructor configuration. When the response structure differs, an AttributeError occurs and breaks autonomous vulnerability assessment.
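A minimal reproduction of the failure mode (a sketch only; the real code calls Instructor, and the response shapes simulated here are assumptions based on the issue text):

```python
# Simulate the two response shapes Instructor can hand back:
# a parsed Pydantic-style object with a .sanitizers attribute,
# or a plain dict when the structured schema was not applied.

response_as_dict = {"sanitizers": ["escapeHtml"]}  # dict-shaped response

try:
    # The current code accesses the attribute unconditionally,
    # which fails on dict-shaped responses.
    sanitizers = response_as_dict.sanitizers
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```
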
Root Cause
The Instructor library can return different response types depending on model capabilities, schema complexity, and Pydantic configuration. The current code assumes a Pydantic model whose sanitizers attribute always exists.
Impact
Fix
Add defensive attribute access:
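A sketch of what the defensive access could look like. The helper name and the exact response shapes are assumptions; only the analysis.sanitizers attribute comes from the issue:

```python
def extract_sanitizers(analysis):
    """Return the sanitizer list from an LLM analysis response,
    tolerating None, dicts, and objects missing the attribute.
    (Hypothetical helper; not the actual fix in PR #48.)"""
    if analysis is None:
        return []
    if isinstance(analysis, dict):
        # Dict-shaped response (e.g. raw decoded JSON)
        value = analysis.get("sanitizers")
    else:
        # Pydantic model or plain object: attribute may be absent
        value = getattr(analysis, "sanitizers", None)
    if value is None:
        return []
    if isinstance(value, list):
        return value
    # A single item instead of a list: wrap it
    return [value]
```

With this shape, every response variant degrades to an empty list instead of raising AttributeError mid-assessment.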
File: packages/codeql/agent.py:488-500
Type
Related
@gadievron commented on GitHub (Dec 22, 2025):
Fixed in PR #48