Building Onchain Data Pipelines Through Claude Code
Claude Code can now set up production onchain data pipelines for you. Just tell it what data you need. The Indexing Co Pipeline skill shipped this week. Install it once, then ask Claude Code to index Uniswap swaps, track token transfers, or monitor lending events. It handles the entire workflow: writing filters, building transformation functions, generating SQL schemas, and deploying to production.
What it does
The skill teaches Claude Code the Indexing Co pipeline API. Three components:
Filter — which contracts or wallets to watch
Transformation — JavaScript that reshapes raw block data into structured output
Destination — where to send the results (PostgreSQL, Kafka, webhooks, S3, and more; 16+ adapters in total)
Claude walks through each step with you. It writes the transformation logic, tests it against real blocks, generates matching database schemas, and deploys the pipeline. The entire conversation happens in natural language.
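Put together, a pipeline definition conceptually combines the three components above. The sketch below is illustrative only: the field names (`filter`, `network`, `addresses`, `adapter`, and so on) are assumptions chosen for explanation, not the Indexing Co's actual configuration keys.

```javascript
// Conceptual shape of a pipeline: filter -> transformation -> destination.
// All key names here are hypothetical, for illustration only.
const pipeline = {
  filter: {
    network: "ethereum",
    // Contract(s) to watch; placeholder address for the sketch
    addresses: ["0x" + "0".repeat(40)],
  },
  // JavaScript that reshapes raw block data into structured rows
  transformation: `
    function transform(block) { /* extract and reshape fields */ }
  `,
  destination: {
    adapter: "postgres",
    table: "uniswap_swaps",
  },
};

console.log(Object.keys(pipeline)); // the three components
```

In the actual workflow you never write this object by hand; Claude assembles each piece from the conversation.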
Example prompts that trigger the skill:
- "Index all Uniswap V2 swaps on Ethereum and send them to my Postgres database"
- "Set up a pipeline to track ERC-20 transfers for this contract: 0x..."
- "Monitor Aave lending events across Ethereum and Arbitrum"
- "Backfill historical token transfers for the last 6 months"
How the workflow runs
Start a session in Claude Code. Ask for a data pipeline. The skill activates automatically.
Claude first confirms what you want to track. It asks about contract addresses, event types, and which chains to monitor. From there, it drafts a filter configuration.
Next comes the transformation function. Claude writes JavaScript that extracts specific fields from raw transaction data and reshapes them into your target schema. It shows you the code. You can adjust the logic or ask it to add computed fields.
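As a concrete illustration, here is the kind of transformation Claude might draft for an ERC-20 transfer pipeline. The input shape (`block.logs` with `topics` and `data` fields) and the output column names are assumptions for this sketch, not the exact structures the API provides:

```javascript
// Hypothetical transformation sketch: extract ERC-20 Transfer events
// from a block's logs and reshape them into flat rows.
// keccak256("Transfer(address,address,uint256)")
const TRANSFER_TOPIC =
  "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef";

function transform(block) {
  const rows = [];
  for (const log of block.logs) {
    // Keep only Transfer(address,address,uint256) events
    if (log.topics[0] !== TRANSFER_TOPIC) continue;
    rows.push({
      block_number: block.number,
      tx_hash: log.transactionHash,
      token: log.address,
      // Indexed address topics are left-padded to 32 bytes; keep the last 20
      from_address: "0x" + log.topics[1].slice(-40),
      to_address: "0x" + log.topics[2].slice(-40),
      // Value is a 32-byte big-endian integer in the data field
      value: BigInt(log.data).toString(),
    });
  }
  return rows;
}

// Quick local check against a hand-made log
const sample = {
  number: 19000000,
  logs: [{
    address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48",
    transactionHash: "0xabc",
    topics: [
      TRANSFER_TOPIC,
      "0x" + "0".repeat(24) + "11".repeat(20),
      "0x" + "0".repeat(24) + "22".repeat(20),
    ],
    data: "0x" + "0".repeat(63) + "a", // value = 10
  }],
};
console.log(transform(sample));
```

The decoding details (addresses in indexed topics, the value in `data`) follow standard EVM event encoding; computed fields, such as a human-readable amount divided by the token's decimals, would slot into the same loop.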
Before deployment, Claude tests the transformation against a live block. It fetches actual transaction data, runs your function, and shows the output. If something breaks or the schema needs adjustment, you iterate right there in the conversation.
Once the transformation works, Claude generates the SQL schema that matches your output format. It handles data types, indexes, and constraints based on what your transformation returns.
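For a transformation that emits token-transfer rows, the generated DDL might resemble the fragment below. Table and column names are illustrative; the real schema mirrors whatever your transformation returns.

```sql
-- Illustrative DDL only; actual types and indexes depend on your output
CREATE TABLE erc20_transfers (
    block_number BIGINT  NOT NULL,
    tx_hash      TEXT    NOT NULL,
    token        TEXT    NOT NULL,
    from_address TEXT    NOT NULL,
    to_address   TEXT    NOT NULL,
    value        NUMERIC(78, 0) NOT NULL  -- a uint256 fits in 78 decimal digits
);

CREATE INDEX idx_transfers_token ON erc20_transfers (token, block_number);
```

`NUMERIC(78, 0)` is one common choice for uint256 values in PostgreSQL, since they overflow `BIGINT`.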
Final step: deployment. Claude Code calls the Indexing Co API to create the pipeline. Within seconds, data starts flowing to your database. You can monitor it, backfill historical data, or set up additional pipelines using the same pattern.
Coverage and destinations
The skill supports 100+ networks: EVM chains (Ethereum, Base, Arbitrum, Optimism), Solana, Aptos, Bitcoin, Cosmos, and more. You specify the chain when setting up your filter.
For destinations, the skill integrates with:
PostgreSQL, MySQL, SQLite, MongoDB, BigQuery, Firestore, Neo4j, ArangoDB, Webhooks (HTTP), WebSocket, Kafka, Kinesis, Pulsar, GCP PubSub, AWS S3, Google Cloud Storage
Pick the adapter that fits your infrastructure. Claude handles the configuration.
Prerequisites and installation
You need Claude Code installed and an Indexing Co API key. Sign up at accounts.indexing.co to get your key.
Clone the repository and copy the skill folder into your Claude Code skills directory:
git clone https://github.com/indexing-co/indexing-co-pipeline-skill.git
cp -r indexing-co-pipeline-skill/skills/indexing-co-pipelines ~/.claude/skills/
Claude Code picks up the skill on your next session. No additional configuration required.
Why this approach works
Most indexing tools force you to context-switch: write configuration files, deploy to a separate platform, debug errors in isolation from your development environment. The skill keeps everything in the conversation. For teams building dApps, analytics dashboards, or AI training datasets, this removes the friction of setting up data infrastructure. Just state your requirement and iterate with Claude until the pipeline works.
The skill turns Claude Code into a direct interface for the Indexing Co API. You describe what you need. Claude writes the code. You see the output before it goes live. The entire pipeline development cycle compresses into a single session.