Published
18 October, 2024
by
@aperturecrypto
Indexing for Interoperability: Modular Chains
Recently the term modular has gotten a lot more attention in the world of blockchains.

Over the past year the term modular has gotten more attention in crypto. Modular chains aim to solve the scalability trilemma, which claims that a blockchain can only have two of the following three features: decentralization, scalability and security. Modular chains try to solve this trilemma by separating blockchain functions into distinct components. Developers can pick each component to optimize for their chain's needs, whether they are building a new L1, L2, L3 or dApp-chain. The components are:
Execution layer: processes transactions and computes state changes
Consensus layer: ensures agreement on the order and validity of transactions
Settlement layer: provides finality and security guarantees
Data availability layer: ensures that transaction data is accessible to network participants
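The pick-and-mix nature of these four layers can be sketched as a simple composition. The sketch below is illustrative only: the `ModularStack` type and the example layer choices are assumptions for this article, not any chain's actual configuration format.

```python
from dataclasses import dataclass

# Illustrative sketch: the four modular layers as components a
# developer picks independently when designing a chain.
@dataclass(frozen=True)
class ModularStack:
    execution: str          # processes transactions, computes state changes
    consensus: str          # agreement on transaction order and validity
    settlement: str         # finality and security guarantees
    data_availability: str  # keeps transaction data accessible

# Example combination: a rollup executing with the OP Stack, settling
# on Ethereum, but publishing its data to an external DA layer.
rollup = ModularStack(
    execution="OP Stack (EVM)",
    consensus="Ethereum",
    settlement="Ethereum",
    data_availability="Celestia",
)
```

Swapping any single field yields a different chain design, which is exactly the flexibility the modular thesis promises.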
While data availability is crucial for network operations, it only addresses the needs of participants inside the network. When data needs to be available outside of the network, for triggers on user interfaces, activity on other chains or analytics, that on-chain data has to be indexed and made available elsewhere. This article explores how The Indexing Company is building a data marketplace to meet the unique challenges and opportunities presented by modular chain architectures. It should be relevant for various modular and interoperability chains and protocols, such as Celestia, Avail, Cosmos and the Superchain (the chains building with the OP Stack).
Indexing Chains with Different Virtual Machines
Since data lives on multiple chains, developers need to connect to multiple RPCs to get that data. These RPC endpoints can differ because every VM can differ. Ethereum Virtual Machine (EVM) chains, for example, can already expose different types of RPCs, which results in data with varying features and structures. The differences become even clearer when other VMs enter the mix, like the Solana Virtual Machine or the Move Virtual Machine: they differ in language, speed, data structures, on-chain storage and more. This is one of the reasons most indexers and data providers focus only on EVM chains, leaving an (upcoming) part of the market underserved in its data needs. Because the infrastructure from The Indexing Company takes data in raw form, we can cache that data as a chunk (in most cases a block) and then look for any data in that chunk without making assumptions about its contents. This architecture allows fast onboarding of new chains regardless of their VM, while also ensuring fast processing of data from RPC endpoint to database.
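The VM-agnostic caching idea above can be illustrated with a small sketch. This is a hedged toy version, not The Indexing Company's actual implementation: the block is cached exactly as the RPC returned it, and a search walks the raw structure without assuming any schema, so the same code works for an EVM block layout or any other VM's.

```python
import json

# Toy sketch of schema-free caching: store each chunk (usually a block)
# verbatim, then search every nested value without assumptions.
_cache: dict[int, object] = {}

def cache_block(height: int, raw_rpc_response: str) -> None:
    """Store the block as the RPC returned it; no model applied."""
    _cache[height] = json.loads(raw_rpc_response)

def find_in_block(height: int, predicate) -> list:
    """Collect every leaf value in the raw block matching the predicate."""
    matches = []
    def walk(node):
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)
        elif predicate(node):
            matches.append(node)
    walk(_cache[height])
    return matches

# The same functions handle any VM's block shape:
cache_block(1, '{"txs": [{"to": "0xabc", "value": 5}], "meta": {"vm": "evm"}}')
print(find_in_block(1, lambda v: v == "0xabc"))  # → ['0xabc']
```

Because nothing about the block's structure is assumed at ingestion time, onboarding a new chain only requires pointing the cache at its RPC.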
Modular Data Pipelines
Developers working in a modular stack will typically ingest data from multiple chains, so merging that data is essential for their dApps and protocols to function. Merging it would normally be a hassle, because it either comes from multiple sources or various data models have to be transformed into a desired model. In The Indexing Company's products we strive to make this process as easy as possible, since even developer tools should have a good UX. When ingesting raw data from one chain or many, no model is applied. Because the starting point is always that raw data, it can be transformed freely into any desired format. The data pipelines The Indexing Company is building, together with the new Console, give developers the freedom to define the resulting data model without having to worry about the format of the raw data. Transformations and templates can be applied to get the desired data quickly. Nor are the deployed data pipelines static: configurations can be altered on the fly by calling APIs. For example, if additional contracts or events have to be indexed (backfill and real time), the API can be called to add this new data.
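A reconfiguration call like the one described might look as follows. To be clear, the endpoint path, payload fields and `add_contract_to_pipeline` helper are all assumptions for illustration, not the actual Indexing Company API.

```python
import json
import urllib.request

# Hypothetical sketch: add a new contract (with backfill plus real-time
# indexing) to an already-running pipeline via an HTTP API call.
def add_contract_to_pipeline(base_url: str, pipeline_id: str,
                             contract: str, events: list,
                             backfill: bool = True) -> urllib.request.Request:
    payload = {
        "contract": contract,
        "events": events,      # e.g. ["Transfer", "Swap"]
        "backfill": backfill,  # index history as well as new blocks
    }
    return urllib.request.Request(
        f"{base_url}/pipelines/{pipeline_id}/sources",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A caller would send the request with urllib.request.urlopen(req).
req = add_contract_to_pipeline("https://api.example.com", "pipeline-1",
                               "0xYourContract", ["Transfer"])
```

The point of the sketch is the workflow, not the shapes: the pipeline keeps running while its configuration changes underneath it.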
Configurability does not stop at adding data from a single chain. A developer can, for example, create a unified schema across chains or filter out only the data relevant to their application. In addition, developers who already have a pipeline running for one chain can easily deploy that pipeline to additional chains when expanding their product. This approach reduces tedious data engineering work for the developer, while also cutting data processing time and processing/storage costs.
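A unified schema across chains can be sketched as a per-chain field mapping into one shared record shape. The per-chain field names below are illustrative assumptions about raw record layouts, not real RPC schemas.

```python
# Hypothetical cross-chain unification step: raw transfer records with
# different field names per chain are mapped into one shared schema.
FIELD_MAPS = {
    "evm":    {"from": "from",   "to": "to",          "value": "value"},
    "solana": {"from": "source", "to": "destination", "value": "lamports"},
}

def unify(chain: str, raw: dict) -> dict:
    """Normalize one raw record into the shared cross-chain schema."""
    mapping = FIELD_MAPS[chain]
    return {
        "chain": chain,
        "sender": raw[mapping["from"]],
        "recipient": raw[mapping["to"]],
        "amount": raw[mapping["value"]],
    }

# Records from two chains land in one queryable shape:
rows = [
    unify("evm", {"from": "0xa", "to": "0xb", "value": 10}),
    unify("solana", {"source": "So1", "destination": "So2", "lamports": 7}),
]
```

Supporting an additional chain then only means adding one more entry to the field map, which is the "deploy the same pipeline to another chain" story in miniature.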
Even with raw data as the starting point, the Console and the broader Data Marketplace can provide templates made by The Indexing Company or by developers themselves. These templates could, for example, be configurations for getting data from DEXs, specific protocols or NFT/ERC-20 transfers. Applying them makes it easier for developers to configure data quickly, since they can filter on specific contracts, ERC-20s, NFTs and so on to get more granular data. Rollup-as-a-Service (RaaS) providers could also add easy, one-click indexing to their offering, since data pipelines could be spun up and given specific templates automatically.
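A template of this kind can be pictured as a reusable base configuration that a developer specializes with their own filters. The keys below are illustrative assumptions, not the Console's actual template format; the ERC-20 `Transfer` event signature, however, is the standard one.

```python
# Hypothetical marketplace template: index ERC-20 Transfer events,
# optionally narrowed to specific contracts for more granular data.
ERC20_TRANSFER_TEMPLATE = {
    "event_signature": "Transfer(address,address,uint256)",  # ERC-20 standard
    "decode_fields": ["from", "to", "value"],
    "contracts": [],  # empty list: match all ERC-20 contracts
}

def apply_template(template: dict, contracts: list) -> dict:
    """Specialize a shared template to a developer's chosen contracts."""
    config = dict(template)  # copy so the shared template stays untouched
    config["contracts"] = contracts
    return config

# A developer (or a RaaS provider's one-click flow) narrows the template:
my_token_config = apply_template(ERC20_TRANSFER_TEMPLATE, ["0xYourToken"])
```

The same pattern would let a RaaS provider auto-apply a stack of templates the moment a new rollup's pipeline spins up.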
Eventually the Data Marketplace will unlock various templates and even the reselling of data by developers. Blockchain data itself is neutral, but developers and companies can have an opinion on that data and how it is processed. Their work and expertise will unlock new datasets, metrics and context that can be derived from and merged with blockchain data. The marketplace will enable companies to provide their data, while others tap into that expertise and ingest it.
Conclusion
Modular chains are reshaping the blockchain landscape, but they also introduce new challenges in data availability and indexing. The Indexing Company addresses these challenges head-on with flexible, VM-agnostic data pipelines. Our approach enables developers to easily work with data across multiple chains, reducing complexity and costs.
As the modular ecosystem evolves, robust data infrastructure will be crucial. At The Indexing Company, we're committed to empowering Web3 businesses with next-generation indexing solutions. Ready to optimize your blockchain data strategy? Contact us to explore how we can support your project in this new era of modular chains.