Dennis

2026-03-09

Build Blockchain Data Pipelines with Your AI Coding Agent

You tell your agent what onchain data you need. It builds the pipeline. No dashboard, no configuration files, no context switching.

The Indexing Co MCP (Model Context Protocol) server connects AI coding agents directly to the Indexing Co API. Whether you use Claude Code, Codex CLI, or another MCP-compatible agent, you get a conversational workflow for creating filters, writing transformations, deploying pipelines, and streaming live blockchain events.

What You Get

Three components work together:

MCP Server — Connects your agent to the Indexing Co API and live event stream via WebSocket. Gives it direct access to pipeline operations, transformation testing, and event subscriptions.

Skill — Teaches the agent the full pipeline workflow: filter patterns, transformation syntax, destination adapters, event signatures, and debugging steps. This is how the agent knows what questions to ask and what validations to run. Currently available for Claude Code, with Codex CLI task support coming soon.

Preview CLI — Streams live pipeline events to your terminal with colorized output. Addresses appear in magenta, transaction hashes in blue, numbers in yellow. Nested objects render with recursive colorization and indentation. Press Ctrl+C to see a summary with total event count, duration, and average throughput.
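The colorization rules described above can be sketched roughly as follows. This is a conceptual sketch, not the preview CLI's actual code; the real implementation may classify and style values differently.

```javascript
// Conceptual sketch of the preview CLI's coloring rules (not the actual implementation).
const ANSI = { magenta: "\x1b[35m", blue: "\x1b[34m", yellow: "\x1b[33m", reset: "\x1b[0m" };

function colorize(value) {
  if (typeof value === "string" && /^0x[0-9a-fA-F]{40}$/.test(value)) {
    return ANSI.magenta + value + ANSI.reset; // 20-byte addresses -> magenta
  }
  if (typeof value === "string" && /^0x[0-9a-fA-F]{64}$/.test(value)) {
    return ANSI.blue + value + ANSI.reset; // 32-byte transaction hashes -> blue
  }
  if (typeof value === "number" || /^\d+$/.test(String(value))) {
    return ANSI.yellow + String(value) + ANSI.reset; // numbers -> yellow
  }
  return String(value);
}

// Nested objects render with recursive colorization and indentation.
function render(obj, indent = 0) {
  const pad = "  ".repeat(indent);
  return Object.entries(obj)
    .map(([key, val]) =>
      val !== null && typeof val === "object"
        ? `${pad}${key}:\n${render(val, indent + 1)}`
        : `${pad}${key}: ${colorize(val)}`
    )
    .join("\n");
}
```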

How It Works

You describe what you want in natural language, and the agent walks through the workflow: it creates a filter with the contract addresses, writes a transformation function using the event ABI, tests it against a recent block, sets up the destination adapter, deploys the pipeline, and backfills historical data.

Each step runs against live data. When the agent tests a transformation, it fetches actual transaction data from the chain, runs your function, and shows the output. If the schema needs adjustment or an event signature is wrong, you iterate in the conversation.

Example Session

You: "I want to track all USDC transfers on Base"

Agent: Creates a filter with the Base USDC contract address (0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913)

Agent: Writes a transformation using utils.evmDecodeLogWithMetadata for Transfer events

Agent: Tests the transformation against block 25384920, shows decoded output

Agent: Asks where you want the data — Postgres? Webhook? Direct stream?

You: "Postgres, same schema as our other transfers table"

Agent: Generates the delivery config, creates matching database schema, deploys the pipeline

Agent: Backfills the last 100 blocks and confirms data arrives in your database

You: "Stream the next events so I can see them"

Agent: Launches the preview CLI with your channel name, triggers a backfill — live events appear in your terminal as they process
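To make the session above concrete, here is a self-contained sketch of what decoding ERC-20 Transfer events involves. The `transform` signature here is hypothetical, and it deliberately avoids the Indexing Co `utils.evmDecodeLogWithMetadata` helper (whose exact interface isn't shown in this post) in favor of decoding the log fields by hand.

```javascript
// ERC-20 Transfer(address indexed from, address indexed to, uint256 value).
// The keccak256 hash of this signature is the first log topic.
const TRANSFER_TOPIC =
  "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef";

// Hypothetical transformation: raw logs in, rows for a transfers table out.
// The actual Indexing Co transformation signature and helpers may differ.
function transform(logs) {
  return logs
    .filter((log) => log.topics[0] === TRANSFER_TOPIC)
    .map((log) => ({
      // Indexed address params are left-padded to 32 bytes in topics:
      // drop "0x" plus 24 zero chars to recover the 40-char address.
      from: "0x" + log.topics[1].slice(26),
      to: "0x" + log.topics[2].slice(26),
      // The non-indexed uint256 value lives in the data field.
      value: BigInt(log.data),
      txHash: log.transactionHash,
      blockNumber: log.blockNumber,
    }));
}
```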

What the Agent Can Do

The MCP server exposes these capabilities to any connected agent:

Pipeline management — List, create, read, and delete pipelines. Check status, view configuration, pause and resume.

Filter management — Create address filters for specific contracts, list existing filters, add or remove addresses from filters.

Transformation handling — Write transformation code, test transformations against live blocks, view and update existing transformations.

Backfill control — Backfill historical blocks for a pipeline, monitor backfill progress.

Event streaming — Subscribe to live event channels, stream events to the terminal, query stored events with SQL.

Data exploration — Run SQL queries against indexed data, describe table schemas, preview results.

Each operation runs through the API with your credentials. The agent sees the same data you'd see in the dashboard, but you never leave the terminal.

Live Event Streaming

Pipelines using the DIRECT adapter send events to a named channel. You stream those events with the preview CLI:

node /path/to/indexing-co-mcp/dist/cli/preview.js my-channel-name

The DIRECT adapter only sends events when at least one subscriber is connected. This keeps the stream lightweight — no persistent queue, no storage overhead.
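The subscriber-gating behavior can be pictured with a minimal sketch. This models the semantics described above (events are delivered only while someone is listening, and dropped otherwise); it is not the server's actual implementation.

```javascript
// Conceptual model of DIRECT-adapter gating: broadcast only while at least
// one subscriber is connected; otherwise drop the event (no queue, no storage).
class DirectChannel {
  constructor(name) {
    this.name = name;
    this.subscribers = new Set();
  }
  subscribe(fn) {
    this.subscribers.add(fn);
    return () => this.subscribers.delete(fn); // unsubscribe handle
  }
  publish(event) {
    if (this.subscribers.size === 0) return false; // dropped, not queued
    for (const fn of this.subscribers) fn(event);
    return true;
  }
}
```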

You can run DIRECT alongside any other adapter. Set up two pipelines with the same filter and transformation: one delivers to Postgres, the other to your preview channel. Both process the same events. One gives you queryable history, the other gives you a live terminal view.

Setup

Claude Code

1. Install the MCP server

git clone https://github.com/indexing-co/indexing-co-mcp.git
cd indexing-co-mcp
npm install && npm run build

Register it with Claude Code:

claude mcp add --scope user indexing-co -- node /path/to/indexing-co-mcp/dist/index.js

2. Add your credentials

Create ~/.indexing-co/credentials with your API key:

API_KEY=your_api_key

Sign up at accounts.indexing.co if you need an API key.

3. Install the skill

claude skill add --scope user https://github.com/indexing-co/indexing-co-pipeline-skill

This loads the pipeline patterns, event signature database, and debugging workflows into the agent's context.

Codex CLI

1. Install the MCP server

Same installation steps as above — clone, build, and add your credentials.

2. Register with Codex

codex mcp add indexing-co -- node /path/to/indexing-co-mcp/dist/index.js

Codex picks up the MCP tools automatically. The skill system differs from Claude Code, but the MCP server works the same way — same tools, same API access, same live event streaming.

Other MCP-Compatible Agents

Any agent that supports the Model Context Protocol can connect to the Indexing Co MCP server. Point it at the built server entry point and provide your API credentials. The tool interface is standardized — pipeline operations, filter management, transformation testing, and event subscriptions all work through MCP tool calls.

What This Enables

Conversational debugging — Describe the problem. The agent checks pipeline status, reviews recent events, tests the transformation against a failing block, and suggests fixes.

Schema iteration — Change your transformation logic, test it against historical data, see the new output, deploy when it looks right.

Multi-chain setup — "Index the same events on Ethereum, Base, and Arbitrum" — the agent creates three pipelines with the same transformation logic but different chain configurations.

Ad-hoc queries — "Show me all transfers above $100k from the last hour" — the agent writes and runs the SQL query against your indexed data.

Live monitoring — Stream events from a new deployment, watch for anomalies, kill the stream when you're confident it's working.

The skill teaches the agent how to chain these operations. It knows to test a transformation before deploying, to backfill a small range before a large one, to check delivery status after a backfill completes.

Workflow Patterns

New pipeline from scratch:

1. Describe the onchain data you need
2. The agent identifies the contract(s) and event(s)
3. The agent writes a transformation and tests it against a recent block
4. You review the output, request adjustments if needed
5. Choose your destination (Postgres, webhook, direct stream)
6. The agent deploys and backfills a test range
7. Verify data arrives, then backfill historical blocks if needed

Modify existing pipeline:

1. "Update the USDC pipeline to include the token symbol"
2. The agent retrieves the current transformation
3. The agent adds the new field and tests against a known block
4. You confirm the output looks correct
5. The agent updates the transformation and reprocesses recent blocks

Debug delivery issues:

1. "My Aave pipeline stopped delivering events"
2. The agent checks pipeline status, reviews error logs
3. The agent tests the transformation against the block where delivery failed
4. Identifies the issue (schema mismatch, network timeout, malformed event)
5. Suggests a fix, implements it after confirmation

When to Use This

This workflow fits if you prefer a conversational, terminal-based workflow over dashboards and configuration files.

The MCP server runs locally. Your API key stays on your machine. The agent operates with the same permissions as your API key — it can create and delete pipelines, backfill historical data, and query indexed events. Treat the MCP connection with the same security posture as your API credentials.

Full documentation: docs.indexing.co

MCP server repository: github.com/indexing-co/indexing-co-mcp

Skill repository: github.com/indexing-co/indexing-co-pipeline-skill

Questions: hello@indexing.co