
fix(security)!: drop support for templating in conformity rules#2382

Merged
kevinmessiaen merged 3 commits into main from
advisory-fix-1-ghsa-7xjm-g8f4-rp26
Apr 10, 2026

Conversation

@kevinmessiaen
Member

Description

This PR updates the conformity check in two ways:

  • BREAKING CHANGE: it drops Jinja2 templating for conformity rules, eliminating server-side template injection (SSTI), which created a security vulnerability when rules were exposed to untrusted users.
  • Updates the check to use the full trace rather than the last interaction.
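To illustrate the nature of the fix, here is a minimal, stdlib-only sketch (with a hypothetical `build_prompt` helper, not the project's actual API) of the safe pattern: the rule string is inserted into the evaluation prompt verbatim instead of being rendered through Jinja2, so template-looking payloads stay inert text.

```python
def build_prompt(rule: str, trace: str) -> str:
    # The rule is concatenated verbatim -- it is never rendered as a
    # template, so a payload like "{{ self.__init__.__globals__ }}"
    # remains inert text rather than executing during rendering.
    return (
        "Evaluate whether the trace conforms to the rule.\n"
        "<RULE>\n" + rule + "\n</RULE>\n"
        "<TRACE>\n" + trace + "\n</TRACE>"
    )

prompt = build_prompt(
    rule="The response should be polite. {{ 7 * 7 }}",
    trace="user: hi\nassistant: Hello! How can I help?",
)
# The Jinja2-style expression is preserved literally, not evaluated.
assert "{{ 7 * 7 }}" in prompt
assert "49" not in prompt
```

Under the old behavior, rendering the rule through Jinja2 would have evaluated `{{ 7 * 7 }}` to `49`; dropping templating removes that execution path entirely.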

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request simplifies the Conformity check by removing Jinja2 templating from the rule attribute and passing the full Trace object directly to the LLM prompt. It also updates dependencies and refines the evaluation prompt to focus on outputs and metadata. Feedback includes correcting a non-existent model name in documentation examples, using generic types for consistency in method signatures, and fixing inconsistent tag spacing in the prompt template.

... rule="The response should contain the keywords '{{ trace.last.inputs.keywords }}' and be polite.",
... generator=Generator(model="openai/gpt-4o")
... rule="The last response should be polite.",
... generator=Generator(model="openai/gpt-5-mini")
Contributor


medium

The example uses gpt-5-mini, which is not a currently available model from OpenAI. It is better to use a real model like gpt-4o-mini to avoid confusion for users following the documentation.

Suggested change
... generator=Generator(model="openai/gpt-5-mini")
... generator=Generator(model="openai/gpt-4o-mini")


@override
async def get_inputs(self, trace: Trace[InputType, OutputType]) -> dict[str, str]:
async def get_inputs(self, trace: Trace[InputType, OutputType]) -> dict[str, Any]:
Contributor


medium

For consistency with the class definition and the base class BaseLLMCheck, the get_inputs method should use the generic TraceType instead of the concrete Trace[InputType, OutputType].

Suggested change
async def get_inputs(self, trace: Trace[InputType, OutputType]) -> dict[str, Any]:
async def get_inputs(self, trace: TraceType) -> dict[str, Any]:
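The suggested pattern can be sketched as follows, using simplified stand-ins for the library's `BaseLLMCheck` and trace types (the real classes carry more machinery): the subclass override uses the same generic `TraceType` parameter as the base class, keeping signatures consistent.

```python
import asyncio
from typing import Any, Generic, TypeVar

TraceType = TypeVar("TraceType")

class BaseLLMCheck(Generic[TraceType]):
    """Simplified stand-in for the base check class."""
    async def get_inputs(self, trace: TraceType) -> dict[str, Any]:
        raise NotImplementedError

class ConformityCheck(BaseLLMCheck[str]):
    # The override keeps the generic TraceType-shaped signature from
    # the base class rather than re-spelling a concrete Trace type.
    async def get_inputs(self, trace: str) -> dict[str, Any]:
        return {"trace": trace}

result = asyncio.run(ConformityCheck().get_inputs("user: hi"))
assert result == {"trace": "user: hi"}
```

Keeping the override generic means type checkers verify the subclass against the base contract once, instead of against one hard-coded parameterization.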

Comment on lines +50 to +52
< TRACE >
{{ trace }}
</ TRACE >
Contributor


medium

The markers in the prompt instructions (line 41) are defined as <TRACE>...</TRACE>, but the actual tags used in the template have extra spaces: < TRACE > and </ TRACE >. Removing these spaces ensures consistency and adheres to standard XML/HTML-like tagging that LLMs are typically trained on.

<TRACE>
{{ trace }}
</TRACE>
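One way to avoid this kind of drift between the tag names promised in the instructions and the tags emitted in the template is to build both from a single helper; a minimal sketch (the `wrap` helper is hypothetical, not part of this codebase):

```python
def wrap(tag: str, content: str) -> str:
    # Emit consistent <TAG>...</TAG> markers with no inner spaces,
    # so the instructions and the template can never disagree.
    return f"<{tag}>\n{content}\n</{tag}>"

block = wrap("TRACE", "{{ trace }}")
assert block == "<TRACE>\n{{ trace }}\n</TRACE>"
```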
