AI Engines
diffray is built on a powerful foundation of AI engines that work together to deliver comprehensive code reviews. This architecture enables a multi-agent system in which specialized agents share context, divide the work, and validate one another's findings.
Core Engine
The core engine provides the foundation for the entire multi-agent system:
Advanced Language Models
We use the latest frontier models from Anthropic — Haiku 4.5, Sonnet 4.5, and Opus 4.5 — currently the most capable AI models for understanding and analyzing code. Each task within the review pipeline is matched with the optimal model:
- Complex analysis tasks — largest models for deep reasoning and understanding
- Pattern matching — faster models for quick checks
- Validation passes — specialized models for verification
This model selection approach ensures both high-quality analysis and efficient processing.
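For illustration, this routing could be as simple as a lookup table. The sketch below is a minimal example; the task categories and model identifiers are assumptions for the sketch, not diffray's actual configuration.

```python
# Hypothetical task-to-model routing table; categories and model IDs
# are illustrative assumptions, not diffray's real configuration.
TASK_MODELS = {
    "deep_analysis": "claude-opus-4-5",    # complex reasoning and understanding
    "pattern_match": "claude-haiku-4-5",   # fast, inexpensive checks
    "validation": "claude-sonnet-4-5",     # verification passes
}

def pick_model(task_type: str) -> str:
    """Return the model suited to a review task, with a safe default."""
    return TASK_MODELS.get(task_type, "claude-sonnet-4-5")
```

Routing by task type keeps the most expensive models reserved for the work that actually needs them.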
Intelligent File Search
The core includes an advanced file search system that quickly navigates codebases of any size:
- Smart pattern matching — finds relevant files instantly, even in repositories with thousands of files
- Context-aware search — understands code structure, not just text
- Efficient exploration — minimizes API calls while maximizing coverage
- Parallel search — multiple search strategies run simultaneously
This means agents can quickly locate dependencies, related code, and project context to validate their findings.
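A rough sketch of the parallel-search idea appears below; plain filename globbing stands in for the structure-aware search, which can't be reproduced in a few lines.

```python
import fnmatch
import os
from concurrent.futures import ThreadPoolExecutor

def find_files(root: str, patterns: list[str]) -> set[str]:
    """Run several glob-style searches in parallel and merge the hits.

    Sketch only: plain globbing stands in for the real engine, which
    also understands code structure rather than just file names.
    """
    def match(pattern: str) -> list[str]:
        hits = []
        for dirpath, _dirs, filenames in os.walk(root):
            hits += [os.path.join(dirpath, name) for name in filenames
                     if fnmatch.fnmatch(name, pattern)]
        return hits

    with ThreadPoolExecutor() as pool:
        return {path for batch in pool.map(match, patterns) for path in batch}
```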
Task Management System
A built-in task tracking system ensures thorough, consistent reviews:
- Structured checklists — every review follows a comprehensive process
- No missed steps — the system tracks what's been analyzed and what remains
- Rule adherence — custom rules are never forgotten during analysis
- Progress visibility — clear tracking of what each agent has completed
This prevents agents from overlooking issues or skipping important checks.
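A minimal sketch of such a checklist, with hypothetical task names, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewChecklist:
    """Tracks what has been analyzed and what remains for one review."""
    pending: list[str] = field(default_factory=list)
    done: list[str] = field(default_factory=list)

    def complete(self, task: str) -> None:
        self.pending.remove(task)   # raises if the task was never scheduled
        self.done.append(task)

checklist = ReviewChecklist(pending=[
    "scan changed files",
    "apply custom rules",
    "validate findings",
])
checklist.complete("scan changed files")
assert checklist.pending == ["apply custom rules", "validate findings"]
```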
Tooling Engine
The tooling engine provides agents with the ability to verify their hypotheses using real code analysis tools:
Static Analyzers
Agents can invoke static analysis tools to validate their findings. This means agents don't just guess — they can run actual analyzers to confirm issues exist.
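In its simplest form, invoking an analyzer is a structured subprocess call. The sketch below uses ruff, a real Python linter, purely as a stand-in; the tooling engine itself is analyzer-agnostic.

```python
import json
import subprocess

def run_analyzer(paths: list[str]) -> list[dict]:
    """Shell out to a static analyzer and return its findings as data.

    Sketch only: ruff stands in here, but any analyzer with
    machine-readable output fits the same shape.
    """
    proc = subprocess.run(
        ["ruff", "check", "--output-format=json", *paths],
        capture_output=True,
        text=True,
    )
    # Analyzers conventionally exit non-zero when they find violations,
    # so read stdout instead of raising on the return code.
    return json.loads(proc.stdout or "[]")
```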
Hypothesis Verification
When an agent suspects a problem, it can run targeted checks on the suspicious code. This dramatically reduces false positives by grounding AI analysis in concrete tool output.
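Building on run_analyzer() from the previous sketch, verification reduces to asking whether a concrete tool reproduces the agent's claim. The finding fields here ('file', 'line', 'rule') and the shape of the analyzer's JSON output are assumptions for the example.

```python
def verify_hypothesis(finding: dict) -> bool:
    """Keep an AI-suggested issue only if a concrete tool reproduces it.

    Hypothetical flow: the finding keys and the analyzer's output
    fields are assumptions for this sketch.
    """
    tool_findings = run_analyzer([finding["file"]])
    return any(
        hit["location"]["row"] == finding["line"]
        and hit["code"] == finding["rule"]
        for hit in tool_findings
    )
```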
Multi-Agent Architecture
These engines enable a sophisticated multi-agent system:
Agent Collaboration
- Parallel execution — multiple specialized agents work simultaneously
- Shared context — agents access the same codebase understanding
- Finding deduplication — overlapping discoveries are merged intelligently
- Cross-validation — agents can verify each other's findings (sketched below)
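A condensed sketch of parallel execution plus deduplication follows; the agent interface and the keys used to match findings are assumptions for the example.

```python
import asyncio

async def run_review(agents: list, diff: str) -> list[dict]:
    """Run specialized agents concurrently, then merge overlapping findings.

    Sketch with assumed interfaces: each agent exposes an async
    review(diff) returning findings keyed by file, line, and issue.
    """
    batches = await asyncio.gather(*(agent.review(diff) for agent in agents))
    merged: dict[tuple, dict] = {}
    for finding in (f for batch in batches for f in batch):
        key = (finding["file"], finding["line"], finding["issue"])
        merged.setdefault(key, finding)  # duplicates across agents collapse
    return list(merged.values())
```

Using setdefault keeps the first agent's version of a duplicated finding, which is enough for a sketch; real merging can weigh confidence and detail.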
Specialized Agents
Each review agent leverages the full power of both engines; an illustrative tool mapping follows the list:
- Security agent uses static analyzers to verify vulnerability patterns
- Performance agent runs profiling tools to confirm bottlenecks
- Bug hunter uses type checkers to validate error scenarios
- Quality guardian leverages linters to enforce standards
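For a Python project, such pairings might look like the mapping below. The tools are real and commonly used; the agent names are descriptive stand-ins, not diffray's internal identifiers.

```python
# Illustrative agent-to-tool pairings; the agent names are hypothetical.
AGENT_TOOLS = {
    "security": ["bandit"],                   # vulnerability pattern checks
    "performance": ["py-spy"],                # sampling profiler
    "bug_hunter": ["mypy"],                   # static type checking
    "quality_guardian": ["ruff", "pylint"],   # linting and standards
}
```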
Phased Review Pipeline
Reviews run through a multi-phase pipeline, with each phase optimized for its purpose; a conceptual sketch follows the list:
Clone → Data Prep → Guidelines → Review → Deduplication → Validation → Report
- Data Preparation — gathers relevant context about your codebase
- Guidelines — applies project-specific rules and standards
- Review — specialized agents analyze different aspects in parallel
- Deduplication — removes redundant findings across agents
- Validation — filters out low-confidence issues using tooling
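Conceptually, the pipeline is a straight chain of phase functions, each transforming the review state and handing it to the next. The function bodies below are hypothetical stand-ins for the real stages.

```python
# Hypothetical phase functions mirroring the stages listed above.
def prepare_data(state): return {**state, "context": "codebase facts"}
def apply_guidelines(state): return {**state, "rules": ["project rules"]}
def run_agents(state): return {**state, "findings": []}
def deduplicate(state): return state
def validate(state): return state

def review_pipeline(diff: str) -> dict:
    """Thread the review state through each phase in order."""
    state = {"diff": diff}
    for phase in (prepare_data, apply_guidelines, run_agents,
                  deduplicate, validate):
        state = phase(state)
    return state  # the final state becomes the report
```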
Continuous Evolution
The engines evolve with the latest advances:
- New models — latest AI capabilities integrated as they become available
- Tool updates — static analyzers kept current with language evolution
- Rule refinement — review rules continuously improved based on feedback
- Performance optimization — faster reviews without sacrificing quality
The result? A multi-agent system that combines AI reasoning with concrete code analysis — delivering accurate, verified findings instead of speculation.