Documentation Index

Fetch the complete documentation index at: https://docs.pecta.ai/llms.txt

Use this file to discover all available pages before exploring further.

Pecta is the quality and reputation layer for AI agents. It sits between any AI agent and the tools it calls, running a configurable set of pass/fail gates over every output before anything reaches the user. Gates execute in-process on your infrastructure — no network round-trip on the hot path — and results ship asynchronously to the Pecta cloud to feed dashboards and a portable reputation score for each agent. Observability tells you what happened; Pecta prevents bad output from happening.
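To make the gate model concrete, here is a minimal sketch of the pass/fail gate concept described above. All names (`Gate`, `GateResult`, `runGates`) and the sequential fail-fast loop are illustrative assumptions, not the `@pecta/core` API; the real gates run in parallel with fail-fast semantics.

```typescript
// Hypothetical sketch of the gate concept, NOT the @pecta/core API.
// Each gate is an in-process pass/fail check over an agent output;
// the first failing gate short-circuits the run (fail-fast).

type GateResult = { gate: string; pass: boolean; reason?: string };

type Gate = {
  name: string;
  check: (output: string) => GateResult;
};

// Run gates in order, stopping at the first failure. (Shown
// sequentially for clarity; real gates execute in parallel.)
function runGates(gates: Gate[], output: string): GateResult[] {
  const results: GateResult[] = [];
  for (const gate of gates) {
    const result = gate.check(output);
    results.push(result);
    if (!result.pass) break; // fail-fast: later gates are skipped
  }
  return results;
}

const gates: Gate[] = [
  {
    // Schema gate: output must be valid JSON.
    name: "schema",
    check: (out) => {
      try {
        JSON.parse(out);
        return { gate: "schema", pass: true };
      } catch {
        return { gate: "schema", pass: false, reason: "not valid JSON" };
      }
    },
  },
  {
    // PII gate: naive US-SSN pattern, purely for illustration.
    name: "pii",
    check: (out) => ({
      gate: "pii",
      pass: !/\b\d{3}-\d{2}-\d{4}\b/.test(out),
      reason: "possible SSN detected",
    }),
  },
];

const passing = runGates(gates, '{"answer": "42"}');
const failing = runGates(gates, "not json");
```

Because only the metadata in each `GateResult` (gate name, pass/fail, reason) would leave the process, this shape also matches the privacy model described below.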

Three integration modes

Choose the mode that matches your stack. You can mix modes across different agents in the same organisation.
| Mode | Use case | Latency | Install |
| --- | --- | --- | --- |
| In-process SDK | RTB, latency-critical Node.js pipelines | under 15 ms | `npm install @pecta/core` |
| MCP proxy CLI | MCP servers in Claude Desktop or Cursor | under 50 ms | `npx pecta-proxy <server-cmd>` |
| REST API | Python, Go, batch analysis, any HTTP client | 50–100 ms | `POST https://api.pecta.ai/v1/evaluate` |

Key features

  • Quality gates — parallel, fail-fast checks that run on the very first request with no training period. Built-in gates cover latency, schema validation, filesystem safety, PII detection, content signals, and a full RTB / OpenRTB suite.
  • Reputation scores — a portable 0–1000 score per agent_id, stored centrally and computed over a rolling window of the last 500 evaluations.
  • Privacy by architecture — Pecta never stores bid payloads, MCP tool inputs, or user content. Only metadata (gate name, pass/fail, reason, latency, timestamp) leaves your process.
  • Zero hot-path overhead — telemetry batches asynchronously; gates never block waiting for a network response.
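The reputation mechanics can be sketched as a rolling window of pass/fail results scaled to 0–1000. This is an assumption-laden illustration: the window size of 500 comes from the text above, but the scoring rule (windowed pass rate × 1000) and all names are hypothetical, not Pecta's actual formula.

```typescript
// Illustrative reputation sketch, NOT Pecta's actual scoring formula.
// Keeps the last WINDOW evaluation outcomes and scales the pass rate
// to a 0-1000 score, mirroring the rolling-window description above.

const WINDOW = 500;

class ReputationScore {
  private results: boolean[] = [];

  // Record one evaluation outcome, evicting the oldest beyond the window.
  record(pass: boolean): void {
    this.results.push(pass);
    if (this.results.length > WINDOW) this.results.shift();
  }

  // Pass rate over the current window, scaled to 0-1000.
  score(): number {
    if (this.results.length === 0) return 0;
    const passes = this.results.filter(Boolean).length;
    return Math.round((passes / this.results.length) * 1000);
  }
}

const rep = new ReputationScore();
// 600 evaluations at a 90% pass rate; only the last 500 count.
for (let i = 0; i < 600; i++) rep.record(i % 10 !== 0);
```

A windowed score like this means an agent's reputation recovers as bad evaluations age out, rather than penalizing it forever.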

Get started

Pick the quickstart for your integration mode.

SDK quickstart

Install @pecta/core and gate your first agent output in Node.js.

Proxy quickstart

Wrap any MCP server with pecta-proxy — no code changes required.

REST API quickstart

Evaluate agent output over HTTP from Python, Go, or any language.