API Reference

Trust Layer API

Pre-transaction trust intelligence for ERC-8004 autonomous agents. One endpoint, everything you need to decide whether to trust an agent with money at risk.

Base URL: https://api.thetrustlayer.xyz

Quick Start

curl https://api.thetrustlayer.xyz/trust/base:1378

That's it. No API key needed for the trust endpoint (during beta). Pass any agent ID in the format chain:id and get back a full trust assessment with score, risk level, Sybil analysis, cross-chain identity, and evidence.

GET /trust/:agentId

The flagship endpoint. Returns a comprehensive trust assessment for any ERC-8004 agent. This is the pre-transaction check that agents call before moving money.

GET /trust/<chain>:<agentId>

Parameters

agentId (string): Agent identifier in chain:id format (e.g., base:1378, ethereum:22887). If no chain prefix, defaults to Base.
chain (query): Optional. Override chain if agentId has no prefix. Default: base
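Since the endpoint accepts both prefixed and unprefixed IDs, a client may want to normalize IDs before building the request URL. A minimal sketch (the helper names are ours, not part of the API):

```python
TRUST_API = "https://api.thetrustlayer.xyz"

def normalize_agent_id(agent_id: str, default_chain: str = "base") -> str:
    """Ensure an agent ID carries a chain prefix; unprefixed IDs default to Base."""
    return agent_id if ":" in agent_id else f"{default_chain}:{agent_id}"

def trust_url(agent_id: str) -> str:
    """Build the /trust/:agentId URL from a possibly-unprefixed ID."""
    return f"{TRUST_API}/trust/{normalize_agent_id(agent_id)}"
```

For example, `trust_url("1378")` and `trust_url("base:1378")` resolve to the same endpoint.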

Response Fields

Core Assessment
trust_score (int): 0–100 composite trust score. Weighted by reviewer quality, temporally decayed (90-day half-life), with revoked and spam feedback filtered.
confidence (string): Score confidence: low (<2 days history), medium (2–6 days), high (7+ days).
reviewer_weighted_score (int|null): Trust score adjusted for reviewer credibility. null if no reviewer data.
risk_level (string): Advisory risk level: low (score ≥80), medium (50–79), high (<50).
recommended_max_exposure_usd (int): Suggested maximum USD exposure. Advisory, not enforcement.
sybil_risk (string): Sybil manipulation risk: low, medium, or high.
Anomaly Detection
anomaly_flags (array): Active flags: rapid_score_change_7d, review_cluster:severity, single_agent_reviewers:severity, low_quality_burst:severity, review_bombing:severity, duplicate_feedback_content:severity, spam_feedback:severity, reputation_laundering:severity.
Score Trajectory
score_trajectory.7d (int|null): Score change over past 7 days. null until 7+ days of history.
score_trajectory.30d (int|null): Score change over past 30 days. null until 30+ days of history.
score_trajectory.reputation_age_days (int): How many days this agent has been scored.
Cross-Chain Identity
cross_chain_scores (object|null): null if no cross-chain presence detected.
cross_chain_scores.unified_score (int): Average trust score across all linked chains.
cross_chain_scores.match_method (string): How the link was established: owner_wallet (0.95), agent_wallet (0.9), or name_match (0.6).
cross_chain_scores.chains (object): Per-chain scores with best_score, agent_count, and top_agent for each chain.
cross_chain_scores.score_divergence (int): Max score minus min score across chains. High divergence is suspicious.
cross_chain_scores.laundering_risk (string|null): Reputation laundering flag: medium (divergence >15), high (>30), null if clean.
Component Scores
component_scores.profile (int): 0–30. Profile completeness: name (15), description (10), custom URI (5).
component_scores.feedback (int): 0–40. Feedback volume on a log scale.
component_scores.legitimacy (int): 0–30. Ratio of unique callers to total feedbacks.
Evidence
evidence.total_feedbacks (int): Total feedback records for this agent.
evidence.verified_reviewers (int|null): Reviewers with quality score ≥40.
evidence.avg_reviewer_quality (int|null): Average reviewer quality score (0–100).
evidence.cluster_warnings (int): Number of Sybil cluster flags on this agent.
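The documented cutoffs can be mirrored client-side for local sanity checks. A sketch, assuming only the thresholds stated above (risk level from score, confidence from reputation age, laundering risk from cross-chain divergence, and the name:severity flag format); the live API response remains authoritative:

```python
from typing import Optional

def risk_level(trust_score: int) -> str:
    """Advisory risk level: low (score >= 80), medium (50-79), high (< 50)."""
    if trust_score >= 80:
        return "low"
    if trust_score >= 50:
        return "medium"
    return "high"

def confidence(reputation_age_days: int) -> str:
    """Score confidence: low (< 2 days of history), medium (2-6), high (7+)."""
    if reputation_age_days >= 7:
        return "high"
    if reputation_age_days >= 2:
        return "medium"
    return "low"

def laundering_risk(score_divergence: int) -> Optional[str]:
    """Cross-chain laundering flag: high (divergence > 30), medium (> 15), None if clean."""
    if score_divergence > 30:
        return "high"
    if score_divergence > 15:
        return "medium"
    return None

def parse_flag(flag: str) -> tuple:
    """Split a 'name:severity' anomaly flag; flags without a severity get None."""
    name, _, severity = flag.partition(":")
    return (name, severity or None)
```

These are interpretation helpers only; treat any mismatch with the API's own fields as the API being right.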

Other Endpoints

GET /leaderboard?chain=all&limit=10
Top agents by trust score. Filter by chain. Free.
GET /stats
Network-wide statistics: total agents, profiles, activity, chain breakdown. Free.
GET /agent/:agentId?chain=base
Basic agent details and feedback counts. Free. Used by the frontend.
GET /score/:agentId?chain=base
Legacy score lookup. $0.001 USDC via x402 micropayment on Base mainnet.

Framework Integration

Drop a pre-transaction trust check into your agent — pick your framework below.

TypeScript

ElizaOS

Register a custom action that gates transactions on trust score. The action fetches the counterparty's assessment before your agent moves any funds.

// trustcheck-action.ts — drop into your ElizaOS plugins folder
import { Action, IAgentRuntime, Memory } from "@elizaos/core";

const TRUST_API = "https://api.thetrustlayer.xyz";

export const trustCheckAction: Action = {
  name: "TRUST_CHECK",
  description: "Check counterparty trust score before transacting",
  
  async handler(runtime: IAgentRuntime, message: Memory) {
    const agentId = message.content.agentId; // e.g. "base:1378"
    const amount = message.content.amount;
    
    const res = await fetch(`${TRUST_API}/trust/${agentId}`);
    if (!res.ok) return false; // fail closed if the trust API is unreachable
    const trust = await res.json();
    
    // Hard stop on high risk
    if (trust.risk_level === "high" || trust.sybil_risk === "high") {
      await runtime.messageManager.createMemory({
        content: { text: `Blocked: ${agentId} flagged as high risk (score: ${trust.trust_score})` },
        roomId: message.roomId,
        userId: runtime.agentId,
      });
      return false;
    }
    
    // Cap exposure
    if (amount > trust.recommended_max_exposure_usd) {
      message.content.amount = trust.recommended_max_exposure_usd;
    }
    
    return true; // safe to proceed
  },

  async validate(runtime: IAgentRuntime, message: Memory) {
    return Boolean(message.content.agentId && message.content.amount);
  },
};
Python

LangChain

Define a LangChain tool your agent can call whenever it needs to evaluate a counterparty. Works with any model that supports tool calling.

# trustlayer_tool.py
import requests
from langchain_core.tools import tool

TRUST_API = "https://api.thetrustlayer.xyz"

@tool
def check_agent_trust(agent_id: str, transaction_usd: float) -> str:
    """Check an ERC-8004 agent's trust score before transacting.
    Use this whenever you need to send money to or receive services from an agent.
    agent_id format: 'chain:id' e.g. 'base:1378'"""
    
    resp = requests.get(f"{TRUST_API}/trust/{agent_id}", timeout=10)
    resp.raise_for_status()
    trust = resp.json()
    
    score = trust["trust_score"]
    risk = trust["risk_level"]
    sybil = trust["sybil_risk"]
    flags = trust.get("anomaly_flags", [])
    max_exp = trust["recommended_max_exposure_usd"]
    
    if risk == "high" or sybil == "high":
        return f"BLOCK: {agent_id} is high risk (score={score}, sybil={sybil}, flags={flags})"
    
    if transaction_usd > max_exp:
        return f"REDUCE: cap exposure at ${max_exp} (score={score}, requested=${transaction_usd})"
    
    return f"OK: {agent_id} cleared (score={score}, risk={risk}, sybil={sybil})"

# Add to your agent
# tools = [check_agent_trust, ...your_other_tools]
# agent = create_tool_calling_agent(llm, tools, prompt)
Python

CrewAI

Give your CrewAI agents a trust-checking tool they can use during task execution.

# trustlayer_crewai.py
import requests
from crewai.tools import tool

TRUST_API = "https://api.thetrustlayer.xyz"

@tool("TrustLayer Check")
def trustlayer_check(agent_id: str) -> str:
    """Look up the trust score and risk level for an ERC-8004 agent.
    Returns score, risk assessment, sybil risk, and any anomaly flags.
    agent_id format: 'chain:id' e.g. 'base:1378'"""
    
    resp = requests.get(f"{TRUST_API}/trust/{agent_id}", timeout=10)
    resp.raise_for_status()
    t = resp.json()
    
    summary = f"Agent {agent_id}: score={t['trust_score']}/100"
    summary += f", risk={t['risk_level']}, sybil={t['sybil_risk']}"
    summary += f", max_exposure=${t['recommended_max_exposure_usd']}"
    
    if t.get("anomaly_flags"):
        summary += f", warnings={t['anomaly_flags']}"
    
    if t.get("cross_chain_scores"):
        cc = t["cross_chain_scores"]
        summary += f", cross_chain_score={cc['unified_score']}"
    
    return summary

# Use in your crew
# from crewai import Agent
# trader = Agent(
#     role="DeFi Trader",
#     tools=[trustlayer_check],
#     goal="Execute trades only with trusted counterparties"
# )
JavaScript

Conway / AgentKit / Any JS Agent

One fetch call. No SDK needed.

async function shouldTrust(counterpartyId, amountUsd) {
  const res = await fetch("https://api.thetrustlayer.xyz/trust/" + counterpartyId);
  const t = await res.json();

  if (t.risk_level === "high" || t.sybil_risk === "high") return { ok: false, reason: "high_risk" };
  if (amountUsd > t.recommended_max_exposure_usd) return { ok: false, reason: "over_exposure" };

  const badFlags = (t.anomaly_flags ?? []).filter(f =>
    f.includes(":high") || f.startsWith("spam") || f.startsWith("review_bombing")
  );
  if (badFlags.length > 0) return { ok: false, reason: "manipulation", flags: badFlags };

  return { ok: true, score: t.trust_score, maxExposure: t.recommended_max_exposure_usd };
}

// Usage
const check = await shouldTrust("base:1378", 50);
if (!check.ok) console.log("Skipping —", check.reason);
Schema Version: 0.3 · 2/27/2026
Agents Indexed: 78,700+ across 5 chains