A New Way to Control LLM Behavior: Runtime Intervention
Summary
Mentat provides an API that gives developers deterministic control over LLM behavior. Unlike conventional RAG or prompt-engineering approaches, which are probabilistic and brittle, Mentat intervenes directly on latent feature vectors during the model's forward pass to correct bias and hallucinations. This enables audit-grade reliability in industries that demand high trust, such as finance and healthcare.
Key Points
- Mentat addresses "internal" problems such as bias and misconceptions by mathematically manipulating latent feature vectors during model inference. (h_prime = h - alpha * (h @ v) * v)
- Conventional RAG only supplies knowledge; it cannot correct complex integration errors such as spatial inversion.
- In tests on GPT-OSS-120b, suppressing misconception feature vectors alone raised TruthfulQA accuracy from 21% to 70%.
- The system combines feature intervention with knowledge graph verification, checking semantic entropy and catching relational hallucinations to maximize reliability.
Launch HN: Mentat (YC F24) – Controlling LLMs with Runtime Intervention
Hi HN, I’m Cyril from CTGT. Today we’re launching Mentat (https://docs.ctgt.ai/api-reference/endpoint/chat-completions), an API that gives developers deterministic control over LLM behavior, steering reasoning and removing bias on the fly, without the compute of fine-tuning or the brittleness of prompt engineering. We use feature-level intervention and graph-based verification to fix hallucinations and enforce policies.
This resonates in highly regulated industries or otherwise risky applications of AI where the fallout from incorrect or underperforming output can be significant. In financial services, using GenAI to scan for noncompliant communications can be arduous without an easy way to embed complex policies into the model. Similarly, a media outlet might want to scale AI-generated summaries of their content, but reliability and accuracy are paramount. These are both applications where Fortune 500 companies have utilized our technology to improve subpar performance from existing models, and we want to bring this capability to more people.
Here’s a quick 2-minute demo video showing the process: https://video.ctgt.ai/video/ctgt-ai-compliance-playground-cf...
Standard "guardrails" like RAG and system prompts are fundamentally probabilistic: you are essentially asking the model nicely to behave. This often fails in two ways. First, RAG solves knowledge availability but not integration. In our benchmarks, a model given context that "Lerwick is 228 miles SE of Tórshavn" failed to answer "What is 228 miles NW of Lerwick?" because it couldn't perform the spatial inversion.
Second, prompt engineering is brittle because it fights against the model's pre-training priors. For example, on the TruthfulQA benchmark, base models fail ~80% of the time because they mimic common misconceptions found on the internet (e.g. "chameleons change color for camouflage"). We found that we could literally turn up the feature for "skeptical reasoning" to make the model ignore the popular myth and output the scientific fact. This matters because for high-stakes use cases (like Finance or Pharma), "mostly safe" isn't acceptable—companies need audit-grade reliability.
Our work stems from the CS dungeon at UCSD, with years spent researching efficient and interpretable AI, trying to "open the black box" of neural networks. We realized that the industry was trying to patch model behavior from the outside (prompts/filters) when the problem was on the inside (feature activations). We knew this was important when we saw enterprises struggling to deploy basic models despite having unlimited compute, simply because they couldn't guarantee the output wouldn't violate compliance rules. I ended up leaving my research at Stanford to focus on this.
Our breakthrough came while researching the DeepSeek-R1 model. We identified the "censorship" feature vector in its latent space. Amplifying it guaranteed refusal; subtracting it instantly unlocked answers to sensitive questions. This proved the model had the knowledge but was suppressing it. We realized we could apply this same logic to hallucinations, suppressing "confabulation" features to reveal the grounded truth. While some hallucinations stem from the inherent randomness of generative models, many can be identified with the concerted activation of a feature or group of features.
Instead of filtering outputs, we intervene at the activation level during the forward pass. We identify latent feature vectors (v) associated with specific behaviors (bias, misconception) and mathematically modify the hidden state (h):
h_prime = h - alpha * (h @ v) * v
This arithmetic operation lets us "edit" behavior deterministically with negligible overhead (<10ms on R1). For factual claims, we combine this with a graph verification pipeline (which works on closed weight models). We check semantic entropy (is the model babbling?) and cross-reference claims against a dynamic knowledge graph to catch subtle relational hallucinations that vector search misses.
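To make the operation concrete, here is a minimal NumPy sketch of that projection edit. The function name, the toy vectors, and the explicit unit-normalization of v are our own illustrative assumptions; in Mentat this runs on transformer hidden states inside the forward pass rather than on standalone arrays.

```python
import numpy as np

def suppress_feature(h: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Remove (or dampen) the component of hidden state h along feature direction v.

    h     : hidden state at some layer/token, shape (d,)
    v     : feature direction for the target behavior (bias, misconception), shape (d,)
    alpha : intervention strength; 1.0 removes the full projection,
            a negative value would amplify the feature instead.
    """
    v = v / np.linalg.norm(v)           # treat v as a unit direction
    return h - alpha * (h @ v) * v      # h' = h - alpha * (h . v) * v

# Toy example in 4 dimensions
h = np.array([0.8, -0.2, 1.5, 0.3])
v = np.array([0.0, 0.0, 1.0, 0.0])      # hypothetical "misconception" direction
h_prime = suppress_feature(h, v, alpha=1.0)
print(h_prime)                          # component along v is gone: [0.8 -0.2 0.0 0.3]
```

Because the edit is a single projection per intervened hidden state, the per-token cost is a handful of vector operations, which is consistent with the negligible-overhead claim above.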
On GPT-OSS-120b, this approach improved TruthfulQA accuracy from 21% to 70% by suppressing misconception features. We also improved the performance of this model to frontier levels on HaluEval-QA, where we reached 96.5% accuracy, solving the spatial reasoning failures where the baseline failed. It also handles noisy inputs, inferring "David Icke" from the typo "David Of me" where base models gave up. Full benchmarks at https://ctgt.ai/benchmarks.
Most startups in this space are observability tools that tell you only after the model failed. Or they are RAG pipelines that stuff context into the window. Mentat is an infrastructure layer that modifies the model's processing during inference. We fix the reasoning, not just the context. For example, that’s how our system was able to enforce that if A is SE of B, then B is NW of A.
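As a rough illustration of that rule, here is a toy sketch of how a relational check over a knowledge graph can catch the SE/NW inversion that plain vector search misses. The triple store, the inverse-bearing table, and check_claim are hypothetical names invented for this example, not part of the Mentat pipeline.

```python
# Toy relational-consistency check: a claim is accepted if it matches a stored
# fact directly, or matches the inverse of a stored fact.
INVERSE_BEARING = {"SE": "NW", "NW": "SE", "NE": "SW", "SW": "NE",
                   "N": "S", "S": "N", "E": "W", "W": "E"}

# (subject, bearing, object) -> distance in miles; "Lerwick is 228 miles SE of Tórshavn"
facts = {("Lerwick", "SE", "Tórshavn"): 228}

def check_claim(subject: str, bearing: str, obj: str, miles: int) -> bool:
    """Return True if the claim is supported by a stored fact or its inverse."""
    if facts.get((subject, bearing, obj)) == miles:
        return True
    inv = INVERSE_BEARING.get(bearing)
    return inv is not None and facts.get((obj, inv, subject)) == miles

# "Tórshavn is 228 miles NW of Lerwick" passes, even though only the SE fact is stored.
print(check_claim("Tórshavn", "NW", "Lerwick", 228))  # True
```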
We believe that our policy engine is a superior control mechanism to RAG or prompting. If you’re frustrated with current guardrails, we’d love it if you would stress-test our API!
- API: Our endpoint is drop-in compatible with OpenAI’s /v1/chat/completions (a minimal usage sketch follows this list): https://docs.ctgt.ai/api-reference/endpoint/chat-completions
- Playground: We’ve built an "Arena" view to run side-by-side comparisons of an Ungoverned vs. Governed model to visualize the intervention delta in real-time. No signup is required: https://playground.ctgt.ai/
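Here is a minimal usage sketch for the API bullet above, assuming drop-in compatibility with the official OpenAI Python SDK. The base_url and model id are placeholders made up for illustration; the real values are in the docs linked above.

```python
from openai import OpenAI

# Placeholder endpoint and credentials; substitute the values from the Mentat docs.
client = OpenAI(
    base_url="https://api.ctgt.ai/v1",   # placeholder base URL
    api_key="YOUR_CTGT_API_KEY",
)

response = client.chat.completions.create(
    model="mentat-default",              # placeholder model id
    messages=[
        {"role": "user", "content": "What is 228 miles NW of Lerwick?"},
    ],
)
print(response.choices[0].message.content)
```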
We’d love to hear your feedback on the approach and see what edge cases you can find that break standard models. We will be in the comments all day. All feedback welcome!