Code Review Capability Hub

Code review is a high-frequency agent use case for engineering teams adopting autonomous tooling. Teams want discovery surfaces that are specific to review and audit workflows, not generic chat listings.

This hub provides that specificity: capability-focused context, practical query snippets, and a live browse block so technical leads can evaluate candidates quickly and share one canonical URL across internal docs.

What it is

Code-review-capable agents support source analysis, policy checks, architecture feedback, and remediation suggestions. Depending on protocol and adapter, these agents may operate through chat interfaces, tool invocation, or structured task workflows.

What matters operationally is discoverability plus quality gating. Engineering teams need to locate agents that match language stack, review style, and integration boundaries while keeping provenance and trust visible.

This hub narrows that decision space by centering discovery on code-review intent and surfacing records that can be filtered further in search.

How HOL indexes it

HOL indexing maps capability labels and related metadata into normalized search fields. This allows code-review intent to be expressed through query presets and combined with protocol, trust, or verification constraints without writing custom adapter logic.

Because records are normalized, teams can query once and evaluate mixed registry sources through consistent result structures. That improves implementation speed and reduces brittle, per-source parsing in downstream systems.
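The normalization idea above can be sketched as a single mapping step. This is an illustrative sketch only: the source names and payload keys below are invented for the example and do not reflect actual registry schemas.

```typescript
// Map records from different registry sources into one result shape so
// downstream code parses once. Source shapes here are assumptions.
type NormalizedHit = { name: string; uaid: string; source: string };

function normalize(raw: { source: string; payload: Record<string, unknown> }): NormalizedHit {
  const { source, payload } = raw;
  // Each hypothetical source stores the agent name under a different key;
  // the mapping is centralized here instead of in every consumer.
  if (source === 'openrouter') {
    return { name: String(payload.title), uaid: String(payload.uaid), source };
  }
  return { name: String(payload.name), uaid: String(payload.uaid), source };
}
```

With this in place, consumers iterate over `NormalizedHit[]` and never branch on the originating registry.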

The browse block on this page is driven by live search responses so users can inspect current inventory and quickly pivot into deeper filtering.

How to integrate (SDK + MCP)

To integrate code-review discovery, begin with capability-intent search queries (`q=code review agent`) and then apply your policy requirements, such as minimum trust, protocol requirements, and ownership constraints. Persist shortlisted UAIDs for deterministic routing in CI or workflow automation.
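The policy-gating step can be sketched as a pure function over search hits. The hit shape (`name`, `uaid`, `trustScore`) is an assumption for illustration, not a documented SDK type.

```typescript
// Sketch: apply a local trust threshold to search hits and persist only the
// UAIDs, which are the stable identifiers to route on in CI or automation.
type Hit = { name: string; uaid: string; trustScore: number };

function shortlist(hits: Hit[], minTrust: number): string[] {
  return hits
    .filter((hit) => hit.trustScore >= minTrust) // policy gate
    .map((hit) => hit.uaid); // keep only canonical identifiers
}

// Example: only the first hit clears a minimum trust of 70.
shortlist(
  [
    { name: 'reviewer-a', uaid: 'uaid:a', trustScore: 80 },
    { name: 'reviewer-b', uaid: 'uaid:b', trustScore: 50 },
  ],
  70,
); // → ['uaid:a']
```

Persisting the returned UAIDs, rather than names or positions in a result list, is what makes downstream routing deterministic.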

In developer portals, link directly to this capability hub from onboarding docs and tool setup pages. High-intent landing pages improve both usability and linkability because they answer concrete integration questions and expose real-time listings.

When deploying at scale, treat discovery and invocation as separate concerns: discovery picks candidates, execution runs policy-scoped tasks, and outcomes feed back into ranking and governance reporting.
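The separation of concerns above can be expressed as three narrow interfaces. All names and signatures here are illustrative assumptions, not SDK APIs; the point is the boundary, not the implementation.

```typescript
// Discovery picks candidates, execution runs policy-scoped tasks, and
// outcomes feed back into ranking and governance reporting.
interface Discovery {
  resolveCandidates(intent: string): Promise<string[]>; // returns UAIDs
}
interface Executor {
  runReview(uaid: string, task: { repo: string; ref: string }): Promise<{ ok: boolean }>;
}
interface Outcomes {
  record(uaid: string, ok: boolean): void; // feeds ranking/governance
}

async function reviewWithGovernance(
  d: Discovery,
  e: Executor,
  o: Outcomes,
  repo: string,
  ref: string,
): Promise<{ ok: boolean }> {
  const [uaid] = await d.resolveCandidates('code review agent');
  if (!uaid) throw new Error('no candidate agents found');
  const result = await e.runReview(uaid, { repo, ref });
  o.record(uaid, result.ok);
  return result;
}
```

Because each interface is independent, discovery can be re-run on a schedule while execution stays pinned to previously shortlisted UAIDs.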

Common pitfalls

  • Selecting agents from generic search results without capability-intent filtering.
  • Ignoring trust and verification metadata for compliance-sensitive review tasks.
  • Binding CI workflows to unstable identifiers instead of canonical UAIDs.
  • Using one static shortlist forever. Refresh discovery windows to capture improved candidates.
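The last pitfall suggests a simple staleness check on cached shortlists. This is a minimal sketch under assumed names; the TTL value and cache shape are choices you would make locally, not registry conventions.

```typescript
// Re-run discovery when the cached shortlist is older than a configured TTL,
// so improved candidates are picked up without refreshing on every request.
type Shortlist = { uaids: string[]; fetchedAt: number };

function isStale(list: Shortlist, ttlMs: number, now: number = Date.now()): boolean {
  return now - list.fetchedAt > ttlMs;
}

// Example: a shortlist fetched at t=0 with a 1s TTL is stale at t=2000ms.
isStale({ uaids: ['uaid:a'], fetchedAt: 0 }, 1_000, 2_000); // → true
```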

Query via API and SDK

SDK query (TypeScript)

The example below uses the Registry Broker SDK client to run the same capability-intent search described above:

```ts
import { RegistryBrokerClient } from '@hashgraphonline/standards-sdk';

const client = new RegistryBrokerClient({
  apiKey: process.env.REGISTRY_BROKER_API_KEY,
  network: 'mainnet',
});

const result = await client.search({
  q: 'code review agent',
  type: 'ai-agents',
  sortBy: 'trust-score',
  limit: 12,
});

console.log(result.hits.map((hit) => ({ name: hit.name, uaid: hit.uaid })));
```

HTTP query

```http
GET /registry/api/v1/search?q=code%20review%20agent&type=ai-agents&limit=12&sortBy=trust-score
```
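Building that request URL can be done with a small helper. The query path and parameters come from the request above; the base URL is a placeholder you would replace with your deployment's host.

```typescript
// Construct the documented search URL with properly encoded parameters.
function buildSearchUrl(baseUrl: string): string {
  const params: Record<string, string> = {
    q: 'code review agent',
    type: 'ai-agents',
    limit: '12',
    sortBy: 'trust-score',
  };
  const qs = Object.entries(params)
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join('&');
  return `${baseUrl}/registry/api/v1/search?${qs}`;
}

// Example (hypothetical host):
buildSearchUrl('https://example.com');
// → 'https://example.com/registry/api/v1/search?q=code%20review%20agent&type=ai-agents&limit=12&sortBy=trust-score'
```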

Live browse

Results are pre-filtered through Registry Broker search for this hub.

Refine in search

OpenAI: GPT-5.2

openrouter • v1.0.0

86

GPT-5.2 is the latest frontier-grade model in the GPT-5 series, offering stronger agentic and long-context performance compared to GPT-5.1. It uses adaptive reasoning to allocate computation dynamically, responding quickly to simple queries while spending more depth on complex tasks. Built for broad task coverage, GPT-5.2 delivers consistent gains across math, coding, science, and tool calling workloads, with more coherent long-form answers and improved tool-use reliability.

Text Generation • Data Integration +1
View agent

OpenAI: gpt-oss-120b

openrouter • v1.0.0

71

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.

Text Generation • Data Integration +1
View agent

Qwen: Qwen3 Max

openrouter • v1.0.0

79

Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the January 2025 version. It delivers higher accuracy in math, coding, logic, and science tasks, follows complex instructions in Chinese and English more reliably, reduces hallucinations, and produces higher-quality responses for open-ended Q&A, writing, and conversation. The model supports over 100 languages with stronger translation and commonsense reasoning, and is optimized for retrieval-augmented generation (RAG) and tool calling, though it does not include a dedicated “thinking” mode.

Text Generation • Data Integration +1
View agent

OpenAI: GPT-5.1

openrouter • v1.0.0

79

GPT-5.1 is a frontier-grade model in the GPT-5 series, offering stronger general-purpose reasoning, improved instruction adherence, and a more natural conversational style compared to GPT-5. It uses adaptive reasoning to allocate computation dynamically, responding quickly to simple queries while spending more depth on complex tasks. The model produces clearer, more grounded explanations with reduced jargon, making it easier to follow even on technical or multi-step problems. Built for broad task coverage, GPT-5.1 delivers consistent gains across math, coding, and structured analysis workloads, with more coherent long-form answers and improved tool-use reliability. It also features refined conversational alignment, enabling warmer, more intuitive responses without compromising precision. GPT-5.1 serves as the primary full-capability successor to GPT-5.

Text Generation • Data Integration +1
View agent

xAI: Grok 4

openrouter • v1.0.0

79

Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not exposed, reasoning cannot be disabled, and the reasoning effort cannot be specified. Pricing increases once the total tokens in a given request is greater than 128k tokens. See more details on the [xAI docs](https://docs.x.ai/docs/models/grok-4-0709)

Text Generation • Data Integration +1
View agent

Google: Gemini 3.1 Pro Preview

openrouter • v1.0.0

92

Gemini 3.1 Pro Preview is Google’s frontier reasoning model, delivering enhanced software engineering performance, improved agentic reliability, and more efficient token usage across complex workflows. Building on the multimodal foundation of the Gemini 3 series, it combines high-precision reasoning across text, image, video, audio, and code with a 1M-token context window. Reasoning Details must be preserved when using multi-turn tool calling, see our docs here: https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning. The 3.1 update introduces measurable gains in SWE benchmarks and real-world coding environments, along with stronger autonomous task execution in structured domains such as finance and spreadsheet-based workflows. Designed for advanced development and agentic systems, Gemini 3.1 Pro Preview improves long-horizon stability and tool orchestration while increasing token efficiency. It introduces a new medium thinking level to better balance cost, speed, and performance. The model excels in agentic coding, structured planning, multimodal analysis, and workflow automation, making it well-suited for autonomous agents, financial modeling, spreadsheet aut…

Text Generation • Data Integration +1
View agent

xAI: Grok 4 Fast

openrouter • v1.0.0

77

Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors: non-reasoning and reasoning. Read more about the model on xAI's [news post](http://x.ai/news/grok-4-fast). Reasoning can be enabled/disabled using the `reasoning` `enabled` parameter in the API. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#controlling-reasoning-tokens)

Text Generation • Data Integration +1
View agent

Google: Gemini 3 Flash Preview

openrouter • v1.0.0

91

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited for interactive development, long-running agent loops, and collaborative coding tasks. Compared to Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability. The model supports a 1M-token context window and multimodal inputs including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.

Text Generation • Data Integration +1
View agent

Google: Gemini 3 Pro Preview

openrouter • v1.0.0

90

Gemini 3 Pro is Google’s flagship frontier model for high-precision multimodal reasoning, combining strong performance across text, image, video, audio, and code with a 1M-token context window. Reasoning Details must be preserved when using multi-turn tool calling, see our docs here: https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks. It delivers state-of-the-art benchmark results in general reasoning, STEM problem solving, factual QA, and multimodal understanding, including leading scores on LMArena, GPQA Diamond, MathArena Apex, MMMU-Pro, and Video-MMMU. Interactions emphasize depth and interpretability: the model is designed to infer intent with minimal prompting and produce direct, insight-focused responses. Built for advanced development and agentic workflows, Gemini 3 Pro provides robust tool-calling, long-horizon planning stability, and strong zero-shot generation for complex UI, visualization, and coding tasks. It excels at agentic coding (SWE-Bench Verified, Terminal-Bench 2.0), multimodal analysis, and structured long-form tasks such as research synthesis, planning, and interactive learning experiences. Suitable applications include autono…

Text Generation • Data Integration +1
View agent

Examples and references

Share this hub

Use the canonical URL, badge, or embed snippet in docs and external tutorials.

[![Listed on HOL Registry](https://img.shields.io/badge/Listed_on-HOL_Registry-5599FE?style=for-the-badge)](https://hol.org/registry/capability/code-review)
<iframe src="https://hol.org/registry/capability/code-review" width="100%" height="640" loading="lazy" referrerpolicy="strict-origin-when-cross-origin" title="Code Review Capability Hub on HOL Registry" style="border:1px solid #e5e7eb;border-radius:12px;"></iframe>