Indexing Co vs SQD (Subsquid)

SQD is a decentralized data lake with ZK-proof verification and impressive enterprise logos. Indexing Co is a real-time managed pipeline that delivers data directly to your own database as blocks arrive.


Your protocol needs to trigger a risk engine within three seconds of an on-chain event. You evaluate SQD: it has enterprise clients including Deutsche Telekom, strong ZK-proof data guarantees, and 200+ chains. You dig into the architecture and find that the Portal is built around a batch data lake model. Data is ingested, archived, and made available for queries. Freshness is measured in batches, not in seconds. For historical analytics that's fine. For a real-time alert system, it's the wrong shape.

That's the core distinction: a data archive built for query throughput versus a pipeline built for low-latency delivery.

Architecture

SQD: Decentralized Data Lake and Query Gateway

SQD's infrastructure is built around a decentralized data lake: 2,500+ nodes holding 2.1 petabytes of indexed blockchain data across 200+ networks. The Portal is SQD's high-performance query gateway that sits in front of this lake, serving around 5 million queries per day. Data integrity is secured with ZK proofs, which gives enterprises a verifiable guarantee that the data hasn't been tampered with.

The batch processing model is the right choice for SQD's use case: deep historical queries, full dataset scans, analytics workloads where you're asking questions across millions of blocks. Enterprise clients like Deutsche Telekom and Morpho are using it in that context. SQD was acquired by Rezolve AI (NASDAQ: RZLV) in October 2025 and is now part of a publicly traded company focused on AI and commerce infrastructure, a shift worth factoring into a long-term vendor assessment.

Enterprise billing, launched in January 2026, is available through Portal Revenue Pools in both fiat and stablecoins.

Indexing Co: Real-Time Managed Pipelines

Indexing Co runs as a managed pipeline service. You define your data sources (chains, contracts, events, blocks, wallet addresses), add TypeScript transforms to shape the output, and choose where it lands: PostgreSQL, BigQuery, or a webhook. The pipeline runs continuously: new blocks arrive, get processed, and appear in your database with sub-500ms block-to-storage latency on dedicated infrastructure.

There's no data lake to query. There's no Portal to configure. The data is yours, in your schema, in your infrastructure, as it happens.
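As a rough sketch of the TypeScript transforms described above (every type name and the event layout here are illustrative assumptions, not Indexing Co's actual API), a transform that shapes a raw ERC-20 Transfer log into the row you want stored might look like:

```typescript
// Hypothetical transform; the real raw-event shape and transform signature
// come from Indexing Co's SDK. These interfaces are illustrative only.

interface RawLog {
  address: string;      // emitting contract
  topics: string[];     // topic 0 = event signature, 1..n = indexed args
  data: string;         // ABI-encoded non-indexed args, hex
  blockNumber: number;
}

interface TransferRow {
  token: string;
  from: string;
  to: string;
  amountWei: bigint;
  block: number;
}

// Shape a raw Transfer log into the row your schema expects.
// Indexed addresses are left-padded to 32 bytes in topics, so the
// 20-byte address is the last 40 hex chars of the 66-char topic string.
function toTransferRow(log: RawLog): TransferRow {
  return {
    token: log.address,
    from: "0x" + log.topics[1].slice(26), // indexed `from` is topic 1
    to: "0x" + log.topics[2].slice(26),   // indexed `to` is topic 2
    amountWei: BigInt(log.data),          // non-indexed value, hex-encoded
    block: log.blockNumber,
  };
}
```

The output is shaped to match your own schema, which is the point of the model: the row can be written straight into PostgreSQL or BigQuery with no query gateway in between.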

Feature Comparison

| Feature | SQD | Indexing Co |
| --- | --- | --- |
| Architecture | Decentralized data lake + query gateway | Managed real-time pipeline |
| Processing model | Batch (archive and query) | Continuous delivery to your database |
| Block-to-database delivery | Batch intervals via Portal | Sub-500ms (dedicated infra) |
| Chain support | 200+ networks | 100+ chains |
| Data destination | Query via Portal | Your PostgreSQL, BigQuery, or webhook |
| Data verification | ZK-proof secured | Managed service guarantee + SLA |
| Infrastructure to run | Portal setup required | None, fully managed |
| Enterprise billing | Fiat/stablecoins via Revenue Pools | Standard fiat subscription |
| Enterprise clients | Deutsche Telekom, Morpho, PancakeSwap | Available, contact for case studies |
| Data volume | 2.1 PB indexed, ~5M queries/day | 1B+ events/day processed |
| Custom transforms | Squid processor (TypeScript) | TypeScript transforms |
| Company status | Acquired by Rezolve AI (NASDAQ: RZLV) | Independent |

When to Use Each

Use SQD if
  • You need deep historical data access across a broad set of chains
  • Your workload is analytics-heavy: large scans, retrospective queries, full dataset analysis
  • ZK-proof data verification is a requirement for your compliance or trust model
  • You're comfortable with a Portal-based query model rather than direct database delivery
  • You're already in the Subsquid ecosystem and the Rezolve AI direction aligns with your roadmap

Use Indexing Co if
  • You need data delivered within seconds of a block being finalized
  • Your system reacts to on-chain events: risk engines, alerts, real-time dashboards, automated workflows
  • You want data in your own PostgreSQL or BigQuery with no query gateway in the middle
  • You don't want to manage Portal infrastructure
  • You need a straightforward fiat subscription with guaranteed latency SLAs
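For the webhook destination in the list above, the consuming side is just your own HTTP handler. As a minimal sketch of the "reacts to on-chain events" path, assuming a hypothetical payload shape (nothing here is a documented Indexing Co contract), an alert check in a risk engine might look like:

```typescript
// Hypothetical webhook payload; the actual fields Indexing Co delivers
// are an assumption here, not a published schema.

interface WebhookEvent {
  chainId: number;
  blockNumber: number;
  eventName: string;
  payload: Record<string, string>;
}

// Decide whether an incoming event crosses an alert threshold.
// Amounts are compared as bigints to avoid float precision loss on wei values.
function shouldAlert(evt: WebhookEvent, thresholdWei: bigint): boolean {
  if (evt.eventName !== "Transfer") return false;
  return BigInt(evt.payload.amountWei ?? "0") >= thresholdWei;
}
```

Because the event arrives within seconds of the block, a check like this can gate a risk engine or paging system directly, with no polling loop against a query gateway.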

SQD's data lake is purpose-built for historical depth and query throughput, and the ZK-proof model gives it a credibility story that's hard to match. The tradeoff is that batch architecture isn't the right fit for latency-sensitive use cases. If your system needs to know what happened on-chain in the last two seconds, you're working against the grain of SQD's design. If you need to query what happened across the last two years, it's worth a serious look.

Talk to the Team | Open the Console