
2026-04-23

How to Build Blockchain Data Pipelines in Minutes with AI Agents

Indexing Co ships two interfaces for AI coding agents: an MCP server and a Claude skill. Both spin up production-grade blockchain data pipelines in under 2 minutes from a plain-English prompt.


Watch on YouTube

The Problem with Traditional Blockchain Indexing

Your subgraph just desynced during a token launch. Traders are making decisions on data that's 45 seconds old.

This happens constantly. Building blockchain data pipelines means juggling tools: writing subgraph schemas, deploying to hosting platforms, configuring API endpoints, debugging when something breaks. Context-switching kills momentum. A pipeline that should take minutes to test takes hours to deploy.

Indexing Co ships two interfaces that collapse the workflow into one conversation with your AI coding agent: an MCP (Model Context Protocol) server and a Claude skill. Describe the onchain data you need in plain language. The agent handles the rest.

What MCP Does for Blockchain Data

The MCP server streams blockchain data directly into your development environment.

Traditional blockchain APIs require you to pull data through fixed endpoints. You configure filters, write transformation logic, deploy to a separate platform, query the results. Each step introduces latency and context-switching.

Indexing Co's MCP server runs as a persistent background process. It connects directly to Claude or any MCP-compatible agent. Tell the agent what data you need, and it:

  1. Writes the filter configuration for specific contracts and events
  2. Creates JavaScript transformation functions that reshape raw block data
  3. Tests the transformation against live blockchain data
  4. Deploys the pipeline to production
  5. Streams results into a local SQLite database via WebSocket
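In JavaScript, a pipeline of this shape might look like the sketch below. Every field name, the `transform` signature, and the contract address are assumptions for illustration, not Indexing Co's actual configuration schema:

```javascript
// Hypothetical pipeline definition -- field names are illustrative,
// not Indexing Co's real API.
const pipeline = {
  network: "base",
  // Step 1: filter to a specific contract and event
  filter: {
    address: "0x...usdc...", // placeholder, not the real USDC address
    event: "Transfer(address,address,uint256)",
  },
  // Step 2: reshape raw event data into the rows you want stored
  transform: (events) =>
    events.map((e) => ({
      block: e.blockNumber,
      from: e.args.from,
      to: e.args.to,
      amount: Number(e.args.value) / 1e6, // USDC has 6 decimals
    })),
};

// Steps 3-5 (test, deploy, stream) are driven by the agent; locally you
// can exercise the transform against a mocked event:
const rows = pipeline.transform([
  { blockNumber: 100, args: { from: "0xa", to: "0xb", value: 5_000_000n } },
]);
console.log(rows[0].amount); // 5
```

The point of keeping the transform as a plain function is that the agent can test it against live data before anything deploys.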

The entire flow takes under 2 minutes. In a recent demo, Indexing Co set up a pipeline tracking all USDC transfers on Base (including a 624-block backfill) in 1 minute 57 seconds.

The Live Demo: USDC Transfers in Under Two Minutes

Dennis and Brock demonstrated the full workflow during a workshop. Starting with a blank Claude session, they asked the agent to track USDC transfer volume on Base.

Claude handled the complete pipeline setup end to end: filter configuration, transformation logic, testing, and deployment.

Within seconds, the local SQLite database contained over 5,000 USDC transfers. The agent identified a spike in the data (a $200M flash loan to Morpho) without any manual analysis.
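The outlier check the agent performed can be sketched as a simple aggregation over the streamed rows. The sample transfers below are fabricated and the 100x-median threshold is an arbitrary heuristic, not Indexing Co's logic:

```javascript
// Flag transfers that dwarf typical volume -- the kind of outlier the
// agent surfaced as the $200M flash loan. Sample data is made up.
const transfers = [
  { block: 101, amount: 12_500 },
  { block: 102, amount: 8_300 },
  { block: 103, amount: 200_000_000 }, // the outlier
  { block: 104, amount: 9_900 },
];

const median = (xs) => {
  const sorted = [...xs].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
};

const typical = median(transfers.map((t) => t.amount));
const spikes = transfers.filter((t) => t.amount > typical * 100);
console.log(spikes); // only block 103 survives the filter
```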

This workflow removes configuration files, separate deployment platforms, and API polling. The agent operates entirely within your terminal environment. Data flows continuously via WebSocket, so you see results immediately rather than waiting for historical indexing to complete.

How the Claude Skill Differs from MCP

The Claude skill gives you similar capabilities with a lighter runtime.

MCP runs as a persistent background process. It holds WebSocket connections for streaming and stores up to 10,000 records locally. That opens multi-agent orchestration: multiple Claude sessions can query the same live dataset.

The Claude skill wraps the Indexing Co API into natural-language patterns. It teaches Claude how to create pipelines, define transformations, set filters, and manage deployments. No background process needed, which keeps it lighter for workflows that don't need persistent streaming.

Both approaches use JavaScript for transformations. Transformations stay readable and programmable. The logic you prototype against local SQLite ships to production unchanged. You swap the destination, not the code.
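"Swap the destination, not the code" can be pictured like this. The `destination` field and its values are hypothetical, invented to illustrate the idea:

```javascript
// One transform, two destinations. Only the sink changes between local
// prototyping and production -- these config shapes are illustrative.
const transform = (e) => ({ from: e.from, to: e.to, amount: e.amount });

const local = { transform, destination: { kind: "sqlite", path: "./dev.db" } };
const prod = { transform, destination: { kind: "postgres", url: process.env.DB_URL } };

// The same function object powers both pipelines:
console.log(local.transform === prod.transform); // true
```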

Real-Time Iteration Without Leaving Your Terminal

The streaming architecture turns debugging into a conversation.

Traditional indexers force a deploy-wait-discover-bug loop. You fix, redeploy, wait again. Each iteration costs minutes.

With the MCP, you describe a problem and the agent validates immediately. Example from the workshop:

"Check the volume chart. There's a spike at block 17,234."

The agent queries the local SQLite store. It flags the $200M transfer, confirms it's a Morpho flash loan from embedded protocol knowledge, and suggests filtering flash loans out of the volume calculation.
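That suggested fix is a small change to the transformation. One heuristic (an assumption on my part, not the agent's actual logic) is that a flash loan appears as two equal-amount transfers inside the same transaction, so those legs can be dropped from the volume sum:

```javascript
// Exclude flash-loan legs from a volume calculation. Sample data is
// fabricated; the pairing heuristic is illustrative only.
const transfers = [
  { tx: "0x1", amount: 10_000 },
  { tx: "0x2", amount: 200_000_000 }, // flash loan borrowed
  { tx: "0x2", amount: 200_000_000 }, // flash loan repaid
  { tx: "0x3", amount: 4_200 },
];

// A transaction with two matching-amount transfers looks like a flash loan.
const flashTxs = new Set(
  transfers
    .filter((t, _, all) =>
      all.some((u) => u !== t && u.tx === t.tx && u.amount === t.amount))
    .map((t) => t.tx)
);

const volume = transfers
  .filter((t) => !flashTxs.has(t.tx))
  .reduce((sum, t) => sum + t.amount, 0);
console.log(volume); // 14200
```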

The feedback loop is tight because data streams continuously into your local environment. No separate dashboards. No API polling. No context-switching between tools.

Agent Assistance Through Pattern Recognition

LLMs already understand most blockchain primitives.

ERC-20 transfers, Uniswap swaps, Aave lending events: these concepts are well-represented in training data. The agent knows event signatures, ABI structures, and common transformation patterns.
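The ERC-20 Transfer event is a good example of what "well-represented in training data" means in practice: its topic hash and log layout are standard, so decoding a raw log needs no lookup. The sample log below is fabricated:

```javascript
// keccak256("Transfer(address,address,uint256)") -- the canonical
// ERC-20 Transfer topic hash.
const TRANSFER_TOPIC =
  "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef";

// Decode a raw log (standard eth_getLogs shape) into a transfer row.
function decodeTransfer(log) {
  if (log.topics[0] !== TRANSFER_TOPIC) return null;
  return {
    from: "0x" + log.topics[1].slice(26), // strip 12 bytes of padding
    to: "0x" + log.topics[2].slice(26),
    value: BigInt(log.data),
  };
}

const row = decodeTransfer({
  topics: [
    TRANSFER_TOPIC,
    "0x000000000000000000000000" + "a".repeat(40),
    "0x000000000000000000000000" + "b".repeat(40),
  ],
  data: "0x" + "0".repeat(58) + "4c4b40", // 5,000,000 = 5 USDC (6 decimals)
});
console.log(row.value); // 5000000n
```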

Agents struggle with custom contracts or niche protocols. The Indexing Co MCP ships helper functions for those cases.

The agent needs 1-2 iterations to get custom contract indexing working. Streaming validates each attempt on the spot, so you see results as soon as the transformation compiles.

Next up: Indexing Co is integrating RAG (retrieval-augmented generation) to suggest transformation patterns from similar pipelines. That gives new users a starting point when they're not sure how to shape their data.

Why This Works Better Than API Wrappers

Most blockchain data APIs constrain what you can query.

You get predefined endpoints for common patterns: token transfers, NFT mints, DEX swaps. Custom queries require manual configuration or aren't supported at all.

Indexing Co's MCP inverts that model. You define transformations in JavaScript, so you can reshape data however your application needs it. The MCP handles the surrounding plumbing: filter configuration, deployment, and streaming.

That flexibility compounds during prototyping. In the workshop, Dennis and Brock refined the initial USDC pipeline on the fly:

"Only track USDC transfers on Aave, not all transfers."

"Deploy the same transformation to Ethereum mainnet."

"Run parallel pipelines for USDC, USDT, and DAI."

These are single-line changes in a conversation with Claude. The agent handles redeploying with the updated configuration.
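The last request above, parallel pipelines per token, amounts to generating one config per contract. The shape below is a sketch with placeholder addresses, not real contract addresses or Indexing Co's schema:

```javascript
// Generate one pipeline config per token. Addresses are placeholders.
const tokens = {
  USDC: "0x...usdc...",
  USDT: "0x...usdt...",
  DAI: "0x...dai...",
};

const makePipeline = (symbol, address) => ({
  name: `${symbol.toLowerCase()}-transfers`,
  network: "ethereum",
  filter: { address, event: "Transfer(address,address,uint256)" },
});

const pipelines = Object.entries(tokens).map(([sym, addr]) =>
  makePipeline(sym, addr)
);
console.log(pipelines.map((p) => p.name)); // usdc-, usdt-, dai-transfers
```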

From Prototype to Production Without Rewriting

The transformation logic you prototype locally is production-ready.

JavaScript transformations written during development behave identically against a production database. You're not translating between a local testing format and a production schema. The schema gets defined once, then reused.

Indexing Co processes 47 billion blockchain events daily across 100+ chains on this architecture. Median block-to-storage latency: 2.54 seconds. The same transformation functions that power production pipelines run in local SQLite during development.

That continuity cuts a major friction point. Teams typically prototype with one toolset (simplified for speed) then rebuild for production (optimized for scale). The gap creates bugs and delays. Indexing Co collapses the gap. You iterate fast without building technical debt.

Multi-Agent Orchestration Through Shared Data

The MCP lets agents share a live blockchain data source.

Data streams into local SQLite as a background process, so multiple Claude sessions can query the same dataset at the same time. One agent handles pipeline setup while another runs analytics or builds APIs on top of the streamed data.

The pattern fits how specialized agents are starting to collaborate on complex workflows. The MCP acts as infrastructure: a persistent data layer that any agent can hit through natural language.

For teams building multi-agent systems, that removes a major bottleneck. Agents stop polling external APIs or coordinating access to shared resources. Data flows continuously, and any agent with MCP access can query it.

What You Need to Get Started

For the Claude skill:

  1. Sign up for an Indexing Co API key at accounts.indexing.co
  2. Install via GitHub: claude skill add --scope user https://github.com/indexing-co/indexing-co-pipeline-skill
  3. Tell Claude what onchain data you need

For the MCP server:

  1. Get an API key (same as above)
  2. Clone the MCP repository https://github.com/indexing-co/indexing-co-mcp
  3. Follow the instructions in the repo for installation
  4. Connect Claude to the running MCP instance

The skill loads pipeline patterns, an event signature database, and debugging workflows into Claude. The MCP adds persistent data streaming and local storage.

Both approaches assume technical fluency with blockchain concepts. If you're working with custom contracts, have the ABI or Solidity source available. The agent will prompt you if it needs additional context.

The Infrastructure Advantage

A point Dennis made in the workshop: most AI wrappers won't last.

Public blockchain knowledge gets commoditized into LLM training data. Competitors wrapping third-party APIs inherit those APIs' limits. What holds up long-term is owning infrastructure that an API can't replicate.

Indexing Co owns the indexing pipeline from chain nodes to final data delivery. That vertical integration ships features API-based competitors can't match.

The MCP and Claude skill expose this infrastructure through a conversational interface. You get the flexibility of custom indexing without running the infrastructure yourself.

Next Steps and Community Builds

Indexing Co is seeking feedback on what to build next.

If you need functionality the current MCP doesn't support, email hello@indexing.co or have your agent contact them. The team plans to integrate Perplexity for fast search and enrichment directly into pipeline transformations.

GitHub links for the MCP server and Claude skill live in the Indexing Co docs. The team wants to see what developers build with this tooling. Share your pipelines and use cases.

For multi-agent workflows, treating the MCP as a central data source for onchain information is the recommended pattern. Run the MCP server as infrastructure, then connect multiple agents that query and process the streamed data.

Key Takeaways