LangChain Releases Better-Harness Framework for Self-Improving AI Agents

Darius Baruo
Apr 08, 2026 20:11

LangChain open-sources Better-Harness, a system that uses evaluation data to autonomously optimize AI agent performance with measurable generalization gains.




LangChain has released Better-Harness, an open-source framework that treats evaluation data as training signals for autonomous AI agent improvement. The system, detailed in an April 8 blog post by Product Manager Vivek Trivedy, achieved near-complete generalization to holdout test sets across both Claude Sonnet 4.6 and Z.ai’s GLM-5 models.

The core insight: evaluations serve the same function for agent development that training data serves for traditional machine learning. Each eval case provides a gradient-like signal—did the agent take the right action?—that guides iterative harness modifications.

How the System Works

Better-Harness follows a six-step optimization loop. Teams first source and tag evaluations from hand-written examples, production traces, and external datasets. The data is then split into optimization and holdout sets, a step the team emphasizes is critical to preventing the overfitting that plagues autonomous improvement systems.
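That split can be sketched as a simple seeded partition. The function name and fractions below are illustrative, not Better-Harness's actual API:

```python
import random

def split_evals(eval_cases, holdout_frac=0.3, seed=42):
    """Partition tagged eval cases into an optimization set the agent may
    iterate against and a holdout set it never sees during optimization."""
    cases = list(eval_cases)
    random.Random(seed).shuffle(cases)  # seeded for reproducibility
    cut = int(len(cases) * (1 - holdout_frac))
    return cases[:cut], cases[cut:]

# e.g. 20 tagged eval cases -> 14 for optimization, 6 held out
opt_set, holdout_set = split_evals(range(20))
print(len(opt_set), len(holdout_set))  # 14 6
```

Because the agent only ever sees the optimization set, any gains on the holdout set measure real generalization rather than memorized answers.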

“Agents are famous cheaters,” Trivedy writes. “Any learning system is prone to reward hacking where the agent overfits its structure to make the existing evals pass.”

After establishing baseline performance, the system runs autonomous iterations: diagnosing failures from traces, experimenting with targeted harness changes, and validating that improvements don’t cause regressions. Human review provides a final gate before production deployment.

Concrete Results

Testing on the tool-selection and follow-up-quality categories showed strong generalization. Claude Sonnet 4.6 improved from 2/6 to 6/6 on holdout follow-up tasks. GLM-5 jumped from 1/6 to 6/6 on the same category while also gaining ground on tool-use metrics.

The optimization loop discovered several reusable instruction patterns across both models: using reasonable defaults when requests clearly imply them, respecting constraints users already provided, and bounding exploration before taking action. GLM-5 particularly benefited from explicit instructions to stop issuing near-duplicate searches once sufficient information exists.
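In practice such patterns land in the harness as system-prompt instructions. The wording below paraphrases the patterns named above for illustration; it is not the published instruction text:

```python
# Hypothetical prompt fragment encoding the discovered patterns.
DISCOVERED_PATTERNS = """\
- When a request clearly implies a default (format, scope, destination),
  use it rather than asking a clarifying question.
- Treat constraints the user has already stated as fixed; do not re-ask.
- Bound exploration: gather a limited amount of context before acting.
- Stop issuing near-duplicate searches once sufficient information exists.
"""

def build_system_prompt(base_prompt):
    """Append the tuned operating guidelines to an agent's base prompt."""
    return base_prompt + "\n\nOperating guidelines:\n" + DISCOVERED_PATTERNS
```

Keeping the patterns as a separate fragment makes it easy for the optimization loop to revise them without touching the base prompt.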

Production Integration

All agent runs log to LangSmith with full traces, enabling three capabilities: trace-level diagnosis for the optimization loop, production monitoring for regression detection, and trace mining for eval generation. The flywheel effect—more usage generates more traces, which generate more evals, which improve the harness—creates compounding returns on observability investment.
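The trace-mining leg of that flywheel can be sketched generically: low-scoring production traces with an annotated expectation become new eval cases. The field names here are illustrative, not LangSmith's schema:

```python
def mine_evals(traces, min_score=0.5):
    """Turn low-scoring production traces into eval cases: the recorded
    input becomes the test input, the annotated expectation the target."""
    evals = []
    for t in traces:
        if t.get("feedback_score", 1.0) < min_score and "expected" in t:
            evals.append({
                "input": t["input"],
                "expected": t["expected"],
                "tag": t.get("category", "untagged"),
            })
    return evals

traces = [
    {"input": "book a flight", "feedback_score": 0.2,
     "expected": "search_flights tool call", "category": "tool_selection"},
    {"input": "thanks!", "feedback_score": 0.9},  # no failure, nothing to mine
]
cases = mine_evals(traces)
```

Only the annotated failure is mined; the successful trace contributes nothing, which keeps the eval suite focused on observed gaps.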

LangChain plans to publish “model profiles” capturing tuned configurations for different models against their eval suite. The research version is available on GitHub for teams building vertical agents across domains.

