Tony Kim
Apr 11, 2026 18:06
Three winning projects from the Synthesis hackathon showcase how cryptographic attestation solves AI agent trust problems for DeFi, art, and multi-agent coordination.
The Synthesis hackathon just wrapped with three winners that demonstrate why verifiable AI agents matter for crypto. The online competition, judged by AI agents themselves, challenged builders to create agents that can cryptographically prove their actions—a requirement that sounds academic until you realize most AI agents today operate as complete black boxes.
The $5,000 prize pool attracted projects tackling fundamentally different problems, but each one arrived at the same conclusion: agents handling real value need to prove what they did.
An Artist That Can Prove Its Own Work
Bob Is Alive drew attention as an autonomous digital artist running inside an EigenCompute TEE on Intel TDX hardware. The agent creates biology-inspired art, sells it on Starknet, completes tasks for credits, and trades on DeFi protocols—all without human intervention.
What separates Bob from typical AI art generators? Every action gets attested by the trusted execution environment. When Bob sells a piece, buyers can verify the art came from Bob’s actual model, not from someone swapping outputs or manipulating the creative process externally. The agent maintains its own onchain identity and economic activity in a loop anyone can audit.
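The attestation pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Bob's or EigenCompute's actual code: real TEEs like Intel TDX sign attestations with a hardware-rooted asymmetric key, and here an HMAC with a stand-in key plays that role so the verify-or-reject flow is visible.

```python
import hmac
import hashlib
import json

# Stand-in for the TEE's hardware-rooted signing key (real attestations
# use asymmetric keys anchored in the CPU, not a shared secret).
ENCLAVE_KEY = b"enclave-attestation-key"

def attest_action(action: dict) -> dict:
    """Produce a signed record of an agent action inside the enclave."""
    payload = json.dumps(action, sort_keys=True).encode()
    tag = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return {"action": action, "attestation": tag}

def verify_attestation(record: dict) -> bool:
    """A buyer recomputes the tag to check the action is untampered."""
    payload = json.dumps(record["action"], sort_keys=True).encode()
    expected = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["attestation"])

record = attest_action({"model": "bob-v1", "op": "mint", "art_id": "0xabc"})
assert verify_attestation(record)        # untampered record passes
record["action"]["art_id"] = "0xdef"     # someone swaps the output...
assert not verify_attestation(record)    # ...and verification fails
```

The point the projects share is in the last two lines: swapping an output after the fact breaks the attestation, so a buyer never has to take the agent's word for it.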
Machine-to-Machine Deals Without Blind Trust
DealForge addresses a coordination problem that gets worse as agents handle more value. When two AI agents want to transact, they currently have no way to verify each other’s identity, enforce agreed terms, or settle disputes without human intervention.
Built on Base, DealForge creates a full deal lifecycle for machine-to-machine exchange. Agents arrive with cryptographic identities, negotiate autonomously, and rely on verifiable compute through EigenCloud to ensure neither side cheats on execution. Programmable escrow handles settlement when conditions are met.
The result: agents can transact with strangers—other agents they’ve never encountered—without taking anything on faith.
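The deal lifecycle the article describes—fund, prove execution, settle—can be sketched as a tiny state machine. The names and structure below are illustrative assumptions, not DealForge's actual contracts; the proof-of-execution step is reduced to a flag standing in for a verifiable-compute check.

```python
from dataclasses import dataclass

@dataclass
class Deal:
    """Hypothetical machine-to-machine deal with programmable escrow."""
    buyer: str
    seller: str
    price: int
    condition_met: bool = False
    escrow: int = 0
    settled: bool = False

    def fund(self, amount: int) -> None:
        # Buyer locks funds matching the agreed terms.
        assert amount == self.price, "escrow must match agreed price"
        self.escrow = amount

    def confirm_delivery(self) -> None:
        # Stand-in for a verifiable-compute proof that the work was done.
        self.condition_met = True

    def settle(self) -> int:
        """Release escrow to the seller only once the condition is proven."""
        if not (self.condition_met and self.escrow == self.price):
            raise RuntimeError("conditions not met; funds stay in escrow")
        self.settled = True
        payout, self.escrow = self.escrow, 0
        return payout

deal = Deal(buyer="agent-a", seller="agent-b", price=100)
deal.fund(100)
deal.confirm_delivery()
assert deal.settle() == 100
```

Notice that `settle()` refuses to move funds until the condition is proven, which is what lets two strangers transact: neither side's honesty is an input to the outcome.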
Coordinating Multiple Agents on One Task
Boss Raid tackles multi-agent orchestration. One request goes in, many agents work on it, one result comes out. The challenge is deciding who breaks down the problem, whose output is good enough, and who gets paid.
Their solution uses an orchestrator agent called Mercenary that splits incoming requests into scoped workstreams, routes them to appropriate providers, evaluates outputs, and synthesizes final results. Only approved contributors get settled, with successful participants splitting payouts equally.
The key innovation: Mercenary’s orchestration decisions are themselves verifiable. Who got routed which task, whose output was accepted—all auditable rather than trusted.
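The orchestration loop—split, route, evaluate, pay only approved contributors equally—reduces to a short sketch. Everything here is assumed for illustration: the naive semicolon split, the `None`-means-rejected evaluation gate, and the provider names are not Boss Raid's real interfaces.

```python
def orchestrate(request: str, providers: dict, budget: int):
    """Split a request, route subtasks, keep good outputs, split the payout."""
    subtasks = [s.strip() for s in request.split(";")]  # naive task split
    approved, results = [], []
    for task, (name, worker) in zip(subtasks, providers.items()):
        output = worker(task)
        if output is not None:            # evaluation gate: reject bad output
            approved.append(name)
            results.append(output)
    share = budget // len(approved) if approved else 0  # equal payout split
    payouts = {name: share for name in approved}
    return " | ".join(results), payouts

providers = {
    "summarizer": lambda t: f"done:{t}",
    "flaky":      lambda t: None,         # rejected output earns no payout
    "coder":      lambda t: f"done:{t}",
}
result, payouts = orchestrate("write docs; test; ship", providers, budget=90)
assert payouts == {"summarizer": 45, "coder": 45}
```

The routing decisions here are ordinary return values, which is exactly what makes them auditable: who got which task and whose output cleared the gate is data, not trust.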
Why This Matters Now
These three projects represent different categories—autonomous art, deal infrastructure, multi-agent coordination—yet each becomes meaningfully more useful when agents can prove their actions. As AI agents increasingly interact with DeFi protocols and handle real assets, the gap between “agent said it did X” and “agent provably did X” becomes a security boundary.
Builders interested in the verifiable compute stack can join EigenCloud’s waitlist for early access. The hackathon demonstrated that the infrastructure exists; now it’s a question of what gets built on top.
Image source: Shutterstock