# Indexing Co
--- title: Welcome to The Neighborhood url: /the-neighborhood.md description: The Neighborhood is a distributed protocol designed to process data wherever it lives: onchain, offchain, on-device, or at the edge. Fast, efficient, and composable infrastructure. image: /assets/articles/the-neighborhood.png --- At its core are three key components: - **Federated Storage** - Keep data where it belongs. The network reads from distributed sources without centralizing or duplicating. - **Distributed Compute** - Tasks are executed across a global mesh of nodes, moving compute to the data for speed, locality, and cost efficiency. - **Incentivization Layer** - Supply meets demand through a tokenized system that rewards participants for contributing compute and storage. Whether you're building real-time pipelines or large-scale analytics, The Neighborhood gives you the flexibility to scale without overbuilding, and the economics to make it sustainable. --- [Learn More](https://pipelines.indexing.co) [Read the Docs](https://docs.indexing.co) [Read the Lite Paper](https://docsend.com/view/vpbawiqg5kvz9yv2) --- title: Pricing url: /pricing.md description: Built to scale with your workloads — from small experiments to enterprise-grade deployments. --- # Pricing

## Starter

For teams getting started or building internal tools

$0 to start
$1 / 1,000 blocks processed
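As a rough worked example at that rate, indexing one million blocks would come to about $1,000 in usage fees on top of the free starting tier.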

Includes:

- Shared network compute
- Access to all supported chains
- Self-serve setup in minutes
- Real-time delivery at sub-second latency

Best for: prototypes, dashboards, analytics apps

Get Started

## Advanced

For projects running consistent workloads

$4,500 / month

Includes everything in Starter, plus:

- Unlimited usage on shared infra
- Priority backfills & custom transformations
- Optional setup support
- Slack support channel
- Eligible for annual billing

Best for: production-grade apps, multi-chain usage

Get Advanced

## Premium

For teams needing dedicated performance or isolation

$8,500 / month

Includes everything in Advanced, plus:

- Dedicated processing node
- Custom transformations & offchain integrations
- Private caching & network isolation
- Hybrid deployments (dedicated + shared)
- Optional on-prem or VPC

Best for: high-throughput, low-latency applications

Get Premium

## Enterprise

Fully managed infrastructure at global scale

Starts at $15,000+ / month

Includes everything in Premium, plus:

- Multi-node redundancy
- Custom SLAs & compliance
- Private deployments (VPC / on-prem)
- Dedicated solutions engineering
- Guaranteed throughput and uptime

Best for: exchanges, analytics platforms, and large enterprises

Contact Sales

## Add-ons & Services

- **Setup Fee**: Starting at $1–2K (waived on annual plans)
- **Backfills**: Billed per request or included in Advanced+
- **Custom Transformations**: $2–5K one-time or included in Premium+
- **Dedicated Support**: Optional 24/7 escalation for enterprise customers

Need help finding the right fit?

--- title: Farcaster Data url: /articles/farcaster-data.md description: Indexing Co integrates Farcaster into its indexing pipelines, offering free public access to casts, profiles, reactions, and verifications data. date: 2024-02-23 image: /assets/articles/farcaster-data.png author: Stephen King tags: Announcement --- Farcaster, a Web3 social protocol, has seen a surge in user engagement since launching Frames. Daily users have grown from 3,000 in late January to nearly 370k today, with daily "casts" jumping from 200,000 to 2.9 million. This impressive growth offers valuable insights into where social media is going. Farcaster's architecture is a blend of on-chain and off-chain systems, which includes registry contracts deployed on the Ethereum network via Optimism. Messages are cast to a network of servers called 'Hubs', ensuring the reliability of user data. This hybrid architecture is spurring some innovative ideas like Frames. Frames allow developers to embed interactive experiences within Farcaster posts, known as Casts. There are [fun and different](https://topframes.xyz/) frames being created, including one for Girl Scout cookies. This allows you to shop within the Frame and then check out via Coinbase without ever leaving Warpcast. [Of course we had to create our own frame : )](https://warpcast.com/runninyeti.eth/0xefcbfecd) ![](https://images.mirror-media.xyz/publication-images/z4Iy44rNwWTrDblWT_knw.png?height=812&width=1202) We're committed to the Farcaster ecosystem and are excited about its potential. We integrated Farcaster into our indexing pipelines, positioning it alongside prominent chains like Base, Syndicate, and Solana. We're proud to offer this dataset entirely free as a public good, fueled by our passion for easy and open access to data. Our approach with this dataset has been to take a semi-opinionated stance on normalizing hub events into views that more easily work for products and analytics (e.g. "user_data" from Hubs maps to "profiles" in BigQuery). To start, we have 5 tables available: casts, links, reactions, verifications, and profiles. > #### Looking for where to start? Try seeing who the most [active reply guys are](https://console.cloud.google.com/bigquery?sq=867429816176:87fef9a0cc334c199363075701d50e74). Currently, [BigQuery](https://console.cloud.google.com/bigquery?project=glossy-odyssey-366820&ws=!1m4!1m3!3m2!1sglossy-odyssey-366820!2sfarcaster) is updated every hour with the latest snapshot of data. Depending on feedback, we'll likely try to make this truly real-time using some materialized views. A huge shout out and thanks to @pinatacloud for their public hub support! If you have questions or other data needs, contact us! Warpcast: [Indexing Co](https://warpcast.com/~/channel/indexing) - [Brock](https://warpcast.com/runninyeti.eth) - [Stephen](https://warpcast.com/stephenking) Email: [hello@indexing.co](mailto:hello@indexing.co)
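If you'd rather hit the dataset programmatically than through the BigQuery console, here's a minimal sketch using the official Node client. The project, dataset, and table names are the ones listed above; the column names (`fid`, `parent_hash`) follow common Farcaster hub conventions and are assumptions here, so check the table schema before relying on them.

```typescript
// Sketch: find the most active "reply guys" in the public Farcaster dataset.
// The table name comes from the post above; fid/parent_hash columns are assumed.
import { BigQuery } from "@google-cloud/bigquery";

const bigquery = new BigQuery();

async function topReplyGuys(limit = 10) {
  const query = `
    SELECT fid, COUNT(*) AS replies
    FROM \`glossy-odyssey-366820.farcaster.casts\`
    WHERE parent_hash IS NOT NULL
    GROUP BY fid
    ORDER BY replies DESC
    LIMIT @limit
  `;
  const [rows] = await bigquery.query({ query, params: { limit } });
  return rows; // e.g. [{ fid: 3, replies: 1234 }, ...]
}

topReplyGuys().then((rows) => console.table(rows));
```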
--- title: Indexing Chainlink's Price Oracles url: /articles/indexing-chainlink-price-oracles.md description: A guide to indexing Chainlink price oracles and transforming the data into useful metrics like moving averages and spot prices. date: 2023-01-30 image: /assets/articles/indexing-chainlink-price-oracles.png author: Stephen King tags: Guide --- ## Introduction to Chainlink and Its Significance ![](https://images.mirror-media.xyz/publication-images/5qxPpCYFBzut35aQesuxt.png?height=396&width=1174) Chainlink is one of the most ambitious projects in web3, providing everything from price oracles to external API adapters. With the ability for anyone to easily connect to any web or off-chain API and use it as an input or output for smart contracts on the blockchain, Chainlink has become a go-to for many organizations in the crypto industry. At the Indexing Company, we love Chainlink and indexing systems, so we decided to index their price oracles and run them through our transformation engine. ## Understanding Chainlink's Price Oracles First, let's cover the basics. What are Chainlink's price oracles, and why are they important for the crypto industry? In the simplest terms, a price oracle is a source that provides up-to-date, real-time pricing data to software running smart contracts on a blockchain network. It can be compared to the stock exchange tickers of the past, which supplied real-time pricing data to investors. Without accurate and reliable sources for crypto prices, smart contract software would not be able to execute correctly. Chainlink's price oracles offer a reliable alternative to traditional methods of data collection and market assessment. The decentralized, secure protocol provides access to a range of leading sources that can be used to make informed price predictions. Prices are managed via an open-source aggregation algorithm, which minimizes the risk of falsified or inaccurate results. This is extremely beneficial compared to other price oracle solutions on the market, as it ultimately reduces the need for manual response checks and lowers financial exposure to potential errors. Chainlink's security, data privacy, and compliance measures also maintain top-level protection against external threats such as hacking and performance issues, giving users confidence in their pricing decisions. ## The Technology Behind Chainlink's Price Oracles Chainlink's Price Oracles aren't just a manifestation of advanced technology; they also embody a system of record that's been painstakingly curated for its reliability and seamless functionality. By engineering this intricate system, Chainlink ensures that the transaction data it sources maintains a high level of accuracy. The superior infrastructure underpinning Chainlink's Price Oracles plays a pivotal role in their operation, making them an invaluable asset within the broader cryptocurrency ecosystem. ## Chainlink's Decentralized Data Aggregation Chainlink has harnessed the power of decentralized data aggregation to fortify the reliability of its price oracles. This method dramatically enhances data accuracy and reliability, providing an unimpeachable source of information for users. The decentralized nature of blockchain data aggregation serves as a buffer against data manipulation, further strengthening the credibility of the data Chainlink oracles deliver. ## Chainlink's Oracle Reputation System To ensure the consistent performance of its network, Chainlink has implemented an Oracle Reputation System. This system objectively evaluates the performance of various oracles, assigning ratings based on reliability and accuracy.
These ratings offer valuable insights into the quality of data each oracle provides and directly contribute to the overall performance and credibility of Chainlink's network. ## Data Quality and Chainlink For price oracles, data quality is non-negotiable. Chainlink has established stringent protocols to guarantee high-quality data from its price oracles. By placing an unwavering emphasis on data quality, Chainlink sets a high standard for the reliability and usability of the data it provides. ## Understanding Price Oracle Attacks The cryptosphere is not immune to security concerns and challenges, and price oracle attacks are one such issue that warrants attention. These attacks can have significant implications for the integrity of a blockchain network, reinforcing the importance of Chainlink's commitment to security and robust design. By understanding these attacks, we can better appreciate the safeguards Chainlink has in place to prevent such vulnerabilities. ## Chainlink's Role in the DeFi Ecosystem Chainlink's price oracles have carved out a pivotal role in the burgeoning DeFi ecosystem. As a key player in the DeFi landscape, Chainlink has made significant strides in enhancing the reliability and accessibility of decentralized financial services. Its influence and importance in shaping the DeFi market are unequivocal, marking it as a game-changer in the world of decentralized finance. ## The Process and Advantages of Indexing Chainlink's Price Oracles So, why index Chainlink's price oracles? By indexing Chainlink's price oracles, we can configure the Indexing Company's transformation engine to generate some interesting data points. ![Litecoin](https://images.mirror-media.xyz/publication-images/ZKW7AcG5QeUSyuG6Qr6kb.png?height=376&width=588&size=medium) First, we indexed the feeds and merged labels (e.g. ETH/USD) to compare and contrast price data. Next, using real-time data, we ran different transformations to get things like spot, 1-day, 7-day, 15-day, and 30-day moving averages. Lastly, to visualize the above data points, we exposed an API via GraphQL and routed it into ReTool. ![](https://images.mirror-media.xyz/publication-images/c_TTZOGkdmAcWRE83oO2y.png?height=152&width=1110) ![](https://images.mirror-media.xyz/publication-images/26ELGLKC5X1jzGHNqcyPM.png?height=580&width=1216) ## Testing and Comparing with Traditional Exchanges Test it yourself! You might ask yourself, "I can already get this data on Coinbase. What's the big deal?" The difference is in how the data is obtained and disseminated. For example, exchanges like Coinbase get crypto prices by matching buyers and sellers in transactions on their platform, and the prices on the exchange reflect the supply and demand of the crypto assets being traded on that particular platform. The prices of crypto assets on different exchanges can vary due to differences in trading volume, order book depth, and other factors. Chainlink price oracles, on the other hand, obtain crypto prices from multiple, decentralized data sources, such as other decentralized exchanges (DEXs), off-chain data aggregators, and oracles. The Chainlink network is decentralized, meaning that it is a distributed ledger not controlled by any single entity or organization, making it resistant to manipulation and tampering. The Chainlink network uses a consensus mechanism to ensure that the data provided by the oracles is accurate, and it uses smart contracts to ensure that the data is tamper-proof. If the goal of web3 is decentralization, using tools like Chainlink's price oracles is pivotal.
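To give a feel for the kind of transformation mentioned above, here's a minimal sketch of turning indexed price-feed updates into spot and moving-average values. The `PricePoint` shape and the sample data are illustrative assumptions for this post, not the actual schema of our transformation engine.

```typescript
// Illustrative only: compute spot price and simple moving averages from
// indexed price-feed updates. The PricePoint shape is assumed for this sketch.
type PricePoint = {
  feed: string;      // e.g. "ETH/USD"
  timestamp: number; // unix seconds
  price: number;     // already scaled to a decimal value
};

const DAY = 24 * 60 * 60;

function movingAverage(points: PricePoint[], windowDays: number, now: number): number | null {
  const inWindow = points.filter((p) => p.timestamp >= now - windowDays * DAY);
  if (inWindow.length === 0) return null;
  return inWindow.reduce((sum, p) => sum + p.price, 0) / inWindow.length;
}

// Spot price is simply the most recent observation for a feed.
function spotPrice(points: PricePoint[]): number | null {
  if (points.length === 0) return null;
  return points.reduce((a, b) => (b.timestamp > a.timestamp ? b : a)).price;
}

// A tiny sample series standing in for an indexed ETH/USD feed.
const now = Math.floor(Date.now() / 1000);
const ethUsd: PricePoint[] = [
  { feed: "ETH/USD", timestamp: now - 10 * DAY, price: 1580 },
  { feed: "ETH/USD", timestamp: now - 3 * DAY, price: 1625 },
  { feed: "ETH/USD", timestamp: now - 1 * DAY, price: 1610 },
];

console.log({
  spot: spotPrice(ethUsd),
  ma7d: movingAverage(ethUsd, 7, now),
  ma30d: movingAverage(ethUsd, 30, now),
});
```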
## Upcoming Advancements and Opportunities in Blockchain Data Indexing We're finishing documentation this week, and then we'll make the current version of the API publicly available to developers (for free!). The above exercise, along with whatsmynameagain.xyz and mirrormirror.page, is an example of what can be created with The Indexing Company's technology. Our mission is to provide the infrastructure so you can become the data company, or data surface, for your business, partners, and end users. Email us today at [hello@indexing.co](mailto:hello@indexing.co), and we'll get you set up with your own configurable indexers and transformations to create fun tools like this. ## Final Thoughts Indexing Chainlink's price oracles facilitates digital transformation by leveraging advanced technologies like artificial intelligence and multiple blockchains. This process generates valuable data points, enabling informed decision-making and enhancing the reliability of blockchain data. By utilizing decentralized data aggregation, Chainlink's price oracles play a crucial role in the decentralized finance (DeFi) ecosystem, providing reliable and accessible pricing information. This advancement opens new opportunities and drives innovation in the evolving landscape of DeFi and beyond. At The Indexing Company, we're excited to contribute to this journey by offering our indexing technology to enterprises around the world. As we look forward to what's next, we're not just observers; more than a decade in, we're active participants shaping the future of data in a decentralized world. Join us on this journey - the future of data awaits you. --- title: Network Update: Hyperliquid, SUI, MegaETH and more url: /articles/network-update-hyperliquid-sui-megaeth-and-more.md description: Indexing Co adds support for 11 new networks.

[View a complete list of supported networks](https://docs.indexing.co/networks) author: Jake Horn date: 2025-04-01 image: /assets/articles/network-update-hyperliquid-sui-megaeth-and-more.avif tags: Announcement --- --- title: Mesh Partners with Indexing Co to Unlock New Possibilities in Crypto Transfers url: /articles/mesh-unlocks-new-possibilities-in-crypto-transfers.md description: How Mesh leveraged Indexing Co's webhook infrastructure to monitor onchain transfers across multiple chains, improving transaction accuracy and transparency. date: 2024-04-04 author: Mesh Connect image: /assets/articles/mesh-unlocks-new-possibilities-in-crypto-transfers.png tags: Case Study --- In the rapidly evolving world of digital finance, the ability to seamlessly manage and track on-chain transactions is paramount for both businesses and their customers. Recognizing this need, Mesh has partnered with Indexing Co, a trailblazer in blockchain data solutions, to revolutionize the way crypto transfers are monitored and processed. **The Challenge: Bridging the Data Gap in Centralized Exchanges** Centralized exchanges, while offering a plethora of services, often fall short when it comes to providing comprehensive data for onchain transfers. This limitation poses a significant challenge for platforms like Mesh that aim to deliver an exceptional user experience by ensuring the accuracy and transparency of every transaction. **The Solution: A Unified Approach to Onchain Data** To address this challenge, Indexing Co stepped in with its innovative solution - dedicated webhook infrastructure capable of monitoring onchain transfers with a dynamic set of parameters. This groundbreaking technology allows for complex filtering of transactions across multiple chains, including Bitcoin, Ethereum, and Solana, all through a unified interface. > Capturing onchain transaction records from centralized exchanges posed a significant challenge for us. However, with Indexing Co's innovative infrastructure, we were able to swiftly implement dynamic webhooks that function seamlessly across multiple chains. Their web3 data infrastructure is truly transformative—it's like the AWS of web3 data. > -Arjun Mukherjee, CTO **The Value: Streamlining Operations and Enhancing User Experience** By leveraging Indexing Co's specialized infrastructure, Mesh can now access the critical data it needs without the substantial cost and overhead of building indexing infrastructure in-house. This collaboration not only streamlines Mesh's operations but also significantly enhances the user experience by providing more detailed and accurate transaction information. **The Impact: A New Era in Crypto Transfers** The partnership between Mesh and Indexing Co marks a significant milestone in the crypto industry. It sets a new standard for how onchain data can be utilized to improve transaction monitoring and processing. This collaboration is a testament to the power of innovation and strategic partnerships in driving the future of finance. **Looking Ahead: Expanding Horizons** As Mesh continues to innovate and expand its services, the partnership with Indexing Co will play a crucial role in enabling more advanced features and capabilities. The ability to seamlessly integrate data from various blockchain networks opens up a world of possibilities for developing new financial products and services that cater to the evolving needs of users. 
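As a rough illustration of the dynamic, parameter-driven filtering described above (and not Mesh's or Indexing Co's actual integration), a receiving service might evaluate incoming transfer events against per-chain parameters like this:

```typescript
// Illustrative sketch of filtering webhook transfer events against dynamic
// parameters. The payload shape and filter fields are assumptions, not the
// actual schema used by Mesh or Indexing Co.
type TransferEvent = {
  chain: "bitcoin" | "ethereum" | "solana";
  asset: string;   // e.g. "BTC", "ETH", "USDC"
  from: string;
  to: string;
  amount: number;  // already converted to a decimal amount
};

type TransferFilter = {
  chains?: TransferEvent["chain"][];
  watchedAddresses?: string[]; // match on sender or recipient
  minAmount?: number;
};

function matchesFilter(event: TransferEvent, filter: TransferFilter): boolean {
  if (filter.chains && !filter.chains.includes(event.chain)) return false;
  if (filter.minAmount !== undefined && event.amount < filter.minAmount) return false;
  if (
    filter.watchedAddresses &&
    !filter.watchedAddresses.includes(event.from) &&
    !filter.watchedAddresses.includes(event.to)
  ) {
    return false;
  }
  return true;
}

// Example: only surface large Ethereum transfers touching a watched wallet.
const filter: TransferFilter = {
  chains: ["ethereum"],
  watchedAddresses: ["0x1111111111111111111111111111111111111111"],
  minAmount: 1_000,
};

const incoming: TransferEvent = {
  chain: "ethereum",
  asset: "USDC",
  from: "0x1111111111111111111111111111111111111111",
  to: "0x2222222222222222222222222222222222222222",
  amount: 2_500,
};

console.log(matchesFilter(incoming, filter)); // true
```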
**Conclusion: A Partnership for the Future** In conclusion, the collaboration between Mesh and Indexing Co is a prime example of how leveraging cutting-edge technology can solve complex challenges in the crypto space. By joining forces, these two companies are not only enhancing their own offerings but also contributing to the broader advancement of the digital finance ecosystem. Stay tuned as Mesh and Indexing Co continue to push the boundaries and unlock new possibilities in the world of crypto transfers. --- Read the original post [here](https://www.linkedin.com/pulse/unlocking-new-possibilities-crypto-transfers-mesh-partners-89kjc/) --- title: Avalanche Indexing with The Neighborhood url: /articles/avalanche-indexing-with-the-neighborhood.md description: How to index Avalanche L1s with The Neighborhood, creating unified pipelines that span multiple chains for real-time and historical data. date: 2025-09-02 author: Dennis Verstappen tags: Guide --- Avalanche is home to a growing set of high-performance L1s, each optimized for its own community and use case. This design opens the door to faster innovation but also creates a challenge: how can builders and product teams access clean, reliable data across all these L1s without maintaining custom infrastructure for each one? The Indexing Company built **The Neighborhood**, a distributed compute network for high-performance indexing. The Neighborhood can onboard any Avalanche L1 into its network and provide pipelines that span multiple L1s at once. This makes it possible to stream real-time activity from several chains into a single schema, while also running complete historical backfills from genesis. ## Solving the Indexing Problem on Avalanche Today, many teams rely on brittle systems like subgraphs or centralized APIs. These approaches come with rigid schemas, long reindexing times, and high infrastructure costs. They are especially limiting in a multi-chain ecosystem like Avalanche, where each L1 may introduce its own set of contracts and events. The Neighborhood removes these bottlenecks. By ingesting raw block data directly from Avalanche RPCs, it lets developers apply programmable transformations in JavaScript and stream structured output into any database, warehouse, or webhook. The result is flexible, chain-aware pipelines that can handle multiple L1s side by side. ## Examples of Avalanche Data You Can Index Here are some of the datasets teams can build pipelines for today: - Token transfers: Track ERC20 and stablecoin movements such as AVAX, USDC, and USDT across several Avalanche L1s. - DEX activity: Capture swaps and liquidity events from protocols including Dexalot, Blackhole, LFJ, and Uniswap deployments. - Lending and borrowing: Stream deposits, borrows, repayments, and liquidations from Avalanche-native money markets. - NFT activity: Index NFT transfers, mints, and marketplace events to power dashboards or wallets. - Wallet-level analytics: Monitor balances, positions, and historical activity for specific addresses. - Custom contracts: Add your own L1 contracts and decode events specific to your application. By combining these pipelines, developers can merge data from multiple Avalanche L1s into a single dataset—removing the complexity of stitching together siloed sources. ## Benefits for Builders **Programmable pipelines** Developers define what gets indexed, from specific contracts to wallet sets. Pipelines can be adjusted on the fly without costly reindexing. 
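To make "programmable pipelines" concrete, here is a minimal sketch of the kind of transformation a pipeline could run over a raw EVM block from an Avalanche L1 RPC: it picks out ERC20 `Transfer` logs and emits flat rows for a destination table. The input and output shapes are simplifications assumed for this example, not The Neighborhood's actual interfaces.

```typescript
// Illustrative transformation: extract ERC20 Transfer events from a raw EVM
// block. The RawBlock/RawLog shapes are simplified assumptions for this sketch.
const TRANSFER_TOPIC =
  "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"; // keccak256("Transfer(address,address,uint256)")

type RawLog = { address: string; topics: string[]; data: string };
type RawBlock = { number: number; timestamp: number; logs: RawLog[] };

type TransferRow = {
  blockNumber: number;
  timestamp: number;
  token: string;
  from: string;
  to: string;
  rawAmount: bigint;
};

function transform(block: RawBlock): TransferRow[] {
  return block.logs
    .filter((log) => log.topics[0] === TRANSFER_TOPIC && log.topics.length === 3)
    .map((log) => ({
      blockNumber: block.number,
      timestamp: block.timestamp,
      token: log.address,
      // Indexed address topics are 32 bytes; the address is the last 20.
      from: "0x" + log.topics[1].slice(26),
      to: "0x" + log.topics[2].slice(26),
      rawAmount: BigInt(log.data),
    }));
}
```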
**Real-time and historical coverage** Support for both sub-second data streams and full historical backfills ensures complete datasets for analytics, compliance, and research. **Local indexing options** Neighborhood nodes can run close to Avalanche validators or inside your own infrastructure, delivering lower latency and giving teams full control over reliability and security. **Cost and efficiency** The distributed design of The Neighborhood processes Avalanche workloads more efficiently than centralized cloud systems, reducing costs for data-heavy use cases like DeFi analytics or AI model training. ## Why This Matters for Avalanche Avalanche's multi-L1 ecosystem is only going to grow. Without robust indexing, developers risk spending more time maintaining ETL pipelines than building products. The Neighborhood provides a unified data layer that adapts as new L1s launch, allowing product managers to ship faster, CTOs to cut costs, and data engineers to focus on insights instead of infrastructure. ## Getting Started The Neighborhood already supports Avalanche and its expanding family of L1s. Developers can define pipelines through our console and APIs, stream data into Postgres, BigQuery, Kafka, or webhooks, and build analytics that span multiple chains. For teams building on Avalanche, The Neighborhood makes it possible to go from raw chain data to production-ready pipelines—whether you’re indexing a single L1 or unifying activity across the entire Avalanche ecosystem. If your team needs faster, cheaper, and more flexible access to Avalanche and Avalanche L1 data, The Neighborhood is ready to help. Explore our platform at [indexing.co](https://www.indexing.co) or contact us at [hello@indexing.co](mailto:hello@indexing.co) to discuss your data needs. --- title: Data Pipelines for Prediction Markets url: /articles/data-pipelines-for-prediction-markets.md description: A guide to indexing EVM-compatible chains with The Neighborhood, enabling unified data pipelines across Ethereum, Base, Arbitrum, and more. date: 2025-11-13 author: Dennis Verstappen tags: Guide --- #### Indexing Co powers real-time, high-precision data for prediction markets. Prediction markets depend on perfect information. Whether you’re building something like Polymarket or Kalshi, running your own onchain venue, arbitraging markets across chains, or integrating external prediction feeds into an analytics product, the foundation is always the same: fast, clean, reliable data. Indexing Co delivers that data layer. Built on **The Neighborhood**, our distributed compute network, we index onchain and offchain market events with sub-second latency across 125+ chains. We stream liquidity changes, order flow, collateral movements, token transfers, settlement events, and user activity directly into your backend, dashboards, agents, or pricing models. Why it matters for prediction markets: #### Trading and Arbitrage Latency determines edge. Firms running cross-market or cross-chain arbitrage need deterministic, high-speed indexing that doesn’t degrade when markets spike. Our dedicated Neighborhood nodes colocate with RPCs or validators to deliver the fastest possible feed. This setup outperforms standard indexers and subgraphs. #### Your Own Prediction Market If you're launching a new venue, you need more than block data. You need decoded events, stablecoin flows, oracle updates, and settlement logic delivered exactly as you define it.
Our pipelines give you full control over transformations and schema, so you can design the mechanics of your market instead of fighting data infra. #### Analytics & Insights Products If you’re building dashboards, forecasting tools, or modeling engines, you need structured, query-ready data without ETL overhead. We transform raw network activity into clean streams you can plug directly into your warehouse, AI system, or API. #### Integrating External Markets Aggregators and research platforms that want to pull in Polymarket, Kalshi, or chain-native markets can unify everything through a single pipeline. Mix offchain APIs with onchain settlement and liquidity data in one transformation layer. Prediction markets thrive on clarity, speed, and truth. Indexing Co gives you all three with enterprise-grade reliability, customizable pipelines, and nodes tuned for sub-second performance. If you're building trading systems, arbitrage bots, new markets, or analytics on top of prediction-market data, we can power the full stack end-to-end. Reach out to us at [hello@indexing.co](mailto:hello@indexing.co) for your custom setup. --- title: Introducing: What's My Name Again? url: /articles/introducing-whats-my-name-again.md description: A free search engine for ENS profiles that allows users to find ENS names by email, text records, or wildcard patterns. date: 2022-10-26 image: /assets/articles/introducing-whats-my-name-again.png author: runninyeti.eth tags: Announcement --- We’ve talked previously about Ethereum Name Service and how to get started with your own ENS name ([original post here](/articles/ens-a-practical-guide)). In that post we touched very briefly on the concept of _text records_ and being able to tie information beyond a wallet address to your ENS name. By adding data points like email addresses, avatars, Twitter addresses, etc. to an ENS name, we can effectively start crafting what we’ll call an “ENS Profile”. Because these profiles are stored on-chain, they’re publicly accessible yet fully controlled by **you**, the owner. We could compare this to Facebook or Twitter profiles, but without the ads, centralized control, and authentication requirements that those platforms impose in service of their bottom line. In short, the potential power of ENS Profiles is enormous. One thing that’s been missing though is the ability to _search_ these profiles. Sure, tools like [ens.domains](https://ens.domains/) and [ens.vision](https://www.ens.vision/) exist, but these are focused on managing and purchasing ENS names, respectively. Neither service is intended to help ENS users communicate with one another or use the power of a “profile”. Meanwhile, we’ve been working hard at The Indexing Company to build out our Indexing as a Service infrastructure. Recognizing the recent adoption of ENS and being long-time supporters ourselves, we decided to tune our indexing service towards ENS. As a result, we’re happy to introduce [What’s My Name Again?](https://www.whatsmynameagain.xyz/) 🎉 This is a freely available service to search both ENS names _and_ entire ENS Profiles.
Some examples: - [Wildcard name matches like nick\*.eth](https://www.whatsmynameagain.xyz/#nick*.eth) - [ENS name by email](https://www.whatsmynameagain.xyz/#hello@indexing.co) - [ENS names with Mirror.xyz links](https://www.whatsmynameagain.xyz/#*mirror.xyz*) - [Who wants to be contacted?](https://www.whatsmynameagain.xyz/#*contact%20us*) - [Anyone putting "coffee" on-chain](https://www.whatsmynameagain.xyz/#coffee) (hint: searches _not_ starting with a wildcard, `*`, are much faster!) ![](https://images.mirror-media.xyz/publication-images/KqVcKrxpXTj8Et1QZC_tY.png?height=2334&width=3824) Give it a whirl and let us know what you think! --- title: Welcome to The Neighborhood: Syndicate $SYND url: /articles/welcome-to-the-neighborhood-syndicate.md description: We are putting our $SYND to work. By staking to Syndicate appchains and powering their data infra with The Neighborhood, we are aligning with builders where it matters: usage, rewards, and growth. author: Jake Horn date: 2025-09-22 image: /assets/articles/welcome-to-the-neighborhood-syndicate.avif tags: Announcement --- Today marks the launch of $SYND, the native gas token of Syndicate. From the start, SYND is used for all sequencing transactions and to pay gas when deploying or managing appchain sequencers. As the network matures, fees transition to a decentralized model where operators and stakers directly earn from usage. This important feature ties token rewards to real activity. Transactions drive fees, and more fees mean more rewards for the operators securing the network. The Indexing Company was chosen as one of the beneficiaries of the $SYND airdrop. Our unique data infrastructure fits well into the design of Syndicate and the related appchains. With our network The Neighborhood we can fetch data from any appchain and serve builders and analytic platforms with that data. We turn raw block data into streams of data to power front-ends, analytics, DeFi dashboards, and cross-chain products. The more activity an appchain has, the more data is processed and the more users benefit from our data infra. That's why we are planning to put our $SYND to good use and align our stake with builders using The Neighborhood. The unique design of SYND enables us to stake to specific appchains, which causes emissions to flow to the appchain generating activity. By staking SYND and directing our stake to the Syndicate appchains using The Neighborhood, we are aligning with them both in our service and onchain. We support them in becoming a successful appchain, backed by our SYND stake, and in return we will share in their growth. Together we create a flywheel where active chains generate more data, stronger economics and a better experience for the end user. The Neighborhood and Syndicate alignment in practice means: when chains grow, builders thrive, data and chain infra scales, and value flows back to the communities creating it. If you are a team building an appchain on Syndicate or a builder in need of Syndicate or appchain data, reach out to us. We can support you with both our $SYND stake and our data infra. --- title: BitCourier - Indexing Co: Addressing Challenges in The On-chain Data Space url: /articles/bitcourier-indexing-co-addressing-challenges-in-the-on-chain-data-space.md description: BitCourier's review of Indexing Co and how it addresses challenges in the on-chain data space.
author: BitCourier date: 2025-09-28 image: /assets/articles/bitcourier.avif tags: Social --- [https://bitcourier.co.uk/blog/indexing-co-review](https://bitcourier.co.uk/blog/indexing-co-review) --- title: The Evolution Of Blockchain Indexing url: /articles/evolution-of-indexing.md description: In a recent livestream, we explored the evolution of blockchain data indexing with three industry experts. author: Jake Horn date: 2025-02-20 image: /assets/articles/evolution-of-indexing.avif tags: Social --- Fuel Network invited Indexing Co to discuss the evolution of blockchain data indexing with industry experts. The discussion delved into how data indexing has evolved from traditional EVM approaches to modern high-throughput AltVM solutions, highlighting the challenges and opportunities in this rapidly evolving space. View the complete talk [here](https://x.com/i/broadcasts/1djGXVBqDNPxZ). --- title: Indexing for Interoperability: Modular Chains url: /articles/indexing-for-interoperability-modular-chains.md description: Recently the term modular has gotten a lot more attention in the world of blockchains. author: Dennis Verstappen date: 2024-10-18 image: /assets/articles/indexing-for-interoperability-modular-chains.avif tags: Post --- In recent years, the term modular has gotten more attention in crypto. Modular chains aim to solve the scalability trilemma. The trilemma claims that a blockchain can only have two of the following three features: decentralization, scalability, and security. Modular chains try to solve this trilemma by separating blockchain functions into distinct components. Each component can be picked by developers to optimize for their chain needs, whether that is building a new L1, L2, L3 or dApp-chain. The components are: - Execution layer: processes the transactions and computes state changes - Consensus layer: ensures the agreement on the order and validity of transactions - Settlement layer: provides finality and security guarantees - Data Availability layer: ensures that transaction data is accessible to network participants While data availability is crucial for the operations of the network participants, it only addresses the needs of those inside the network. When data needs to be available outside of the network for triggers on user interfaces, activity on other chains or analytics, indexing needs to be done to retrieve that on-chain data and make it available elsewhere. This article explores how The Indexing Company is building a data marketplace to meet the unique challenges and opportunities presented by modular chain architectures. The article is relevant to various modular and interoperability chains and protocols like Celestia, Avail, Cosmos, the Superchain (the chains building with the OP Stack), etc. #### Indexing Chains With Different Virtual Machines Since data lives on multiple chains, developers need to connect to multiple RPCs to get that data. These RPC endpoints can be different because every VM can be different. For example, Ethereum Virtual Machine (EVM) chains can already have different types of RPCs, which results in data with varying features or structures. The differences become even more clear when you add other VMs to the mix like the Solana Virtual Machine or the Move Virtual Machine. These differences come in the form of the language, speed, data structures, on-chain storage, etc.
This is one of the reasons why most indexers and data providers focus only on the EVM chains, which leaves an (upcoming) part of the market underserved in its data needs. Since The Indexing Company's infrastructure takes the data in raw form, we can cache that data as a chunk (in most cases a block) and then look for any data in that block without making assumptions about its contents. This architecture design allows fast onboarding of new chains regardless of their VM, but also ensures fast processing of the data from RPC endpoint to database. #### Modular Data Pipelines Developers working in a modular stack will typically ingest data from multiple chains, which means that merging data from multiple chains is essential for their dApps and protocols to function. Merging this data would normally be a hassle, because it either comes from multiple sources or various data models have to be transformed into a desired model. In The Indexing Company's products we strive to make this process as easy as possible, since even developer tools should have a good UX. When ingesting the raw data from a single chain or multiple chains, there is no model being applied. Since the starting point is always that raw data, the data can be transformed freely into a desired format. The data pipelines The Indexing Company is building and the new Console give developers the freedom to express their wishes regarding the resulting data model without having to worry about the format of the raw data. Transformations and templates can be applied to get the desired data quickly. The data pipelines deployed are not static either, since configurations can be altered on the fly by calling APIs. For example, if additional contracts or events have to be indexed (backfill and real time), the API can be called to add this new data. The configurability does not stop with adding data from a single chain. For example, a developer can create a unified schema across chains or filter the data down to only what is relevant for their application. In addition, developers who already have a pipeline running for a chain can easily deploy that pipeline for additional chains if they want to expand their product. This unique approach reduces tedious data engineering work done by the developer, while also reducing data processing time and processing/storage costs. Even when raw data is being used as a starting point, the Console and the broader Data Marketplace can provide templates made by The Indexing Company or developers themselves. For example, these templates could be configurations for getting data from DEXs, specific protocols or NFT/ERC20 transfers. Applying these templates makes it easier for developers to quickly configure the data, since they could filter on specific contracts, ERC20s or NFTs (etc.) to get more granular data. Roll-up as a Service (RaaS) providers could also add easy, 1-click indexing to their offering, since data pipelines can be spun up automatically with specific templates applied. Eventually the Data Marketplace will unlock various templates and even re-selling of data by developers. The blockchain data itself is neutral, but developers and companies can have an opinion on that data and how it is processed. Their work and expertise will unlock new datasets, metrics, and context that come from and can be merged with blockchain data.
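To make the on-the-fly reconfiguration described in the Modular Data Pipelines section above more concrete, here is a rough sketch of what adding contracts to a running pipeline could look like. The endpoint, payload fields, and authentication below are hypothetical placeholders, not The Indexing Company's actual API.

```typescript
// Hypothetical sketch only: the endpoint, payload shape, and auth header are
// placeholders to illustrate reconfiguring a pipeline via an API call.
type PipelineUpdate = {
  network: string;         // chain to index, e.g. "base"
  addContracts: string[];  // contract addresses to start indexing
  events?: string[];       // optional event signatures to filter on
  backfill?: boolean;      // also index historical blocks, not just new ones
};

async function addContractsToPipeline(pipelineId: string, update: PipelineUpdate) {
  const response = await fetch(`https://api.example.com/pipelines/${pipelineId}`, {
    method: "PATCH",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.PIPELINE_API_TOKEN}`,
    },
    body: JSON.stringify(update),
  });
  if (!response.ok) throw new Error(`Pipeline update failed: ${response.status}`);
  return response.json();
}

// Example: start indexing an extra contract's Transfer events, with a backfill.
addContractsToPipeline("my-pipeline", {
  network: "base",
  addContracts: ["0x1111111111111111111111111111111111111111"],
  events: ["Transfer(address,address,uint256)"],
  backfill: true,
});
```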
The marketplace will enable other companies to provide their data, while others can tap into that expertise and ingest that data. #### Conclusion Modular chains are reshaping the blockchain landscape, while they also introduce new challenges in data availability and indexing. The Indexing Company addresses these challenges head-on with our flexible, VM-agnostic data pipelines. Our approach enables developers to easily work with data across multiple chains, reducing complexity and costs. As the modular ecosystem evolves, robust data infrastructure will be crucial. At The Indexing Company, we're committed to empowering Web3 businesses with next-generation indexing solutions. Ready to optimize your blockchain data strategy? Contact us to explore how we can support your project in this new era of modular chains. --- title: Mint to the Future url: /articles/mint-to-the-future.md description: A project that recycles Ethereum-based NFTs and transforms them into new NFTs on the Flow blockchain using AI-generated art. date: 2023-08-22 image: /assets/articles/mint-to-the-future.png author: Stephen King tags: Announcement --- **Mint to the Future: Breathing New Life into NFTs** In the ever-evolving world of Web3, NFTs (Non-Fungible Tokens) have taken center stage. But as with any gold rush, there's an inevitable oversaturation. Many NFTs now sit dormant in digital wallets, their value diminished or even non-existent. This posed a question to our team: How can we breathe new life into these forgotten digital assets? Enter "Mint to the Future," a project that emerged from our desire to rejuvenate the NFT landscape and introduce users to the Flow blockchain. The concept was simple: a recycle contract that would allow users to repurpose their Ethereum-based NFTs, which had lost their value, and in return, receive brand-new, vibrant NFTs minted on the Flow blockchain. **The Perfect Blend of Design and Tech** Our team, a blend of design and technical expertise, was uniquely positioned to bring this vision to life. We weren't just looking to create another NFT platform; we aimed to revolutionize the user experience, making it seamless, enjoyable, and rewarding. **Why Flow and Cadence?** Flow, with its robust documentation and user-friendly approach, was a natural choice. We crafted a smart contract using Cadence, Flow's native programming language, ensuring a smooth and secure transition for NFTs from one blockchain to another. **Sustainability** Flow’s commitment to energy efficiency and sustainability is admirable. By using a proof-of-stake consensus mechanism, Flow significantly reduces its energy consumption. To put it into perspective, the network’s annual energy usage is astoundingly low at 0.18 GWh. To compare, minting an NFT on Flow requires less energy than making a simple Instagram post, making it a leader in energy efficiency, especially when juxtaposed with other blockchains. This conscious choice not only bolsters the platform's commitment to a greener environment but also empowers users and developers to make an eco-friendly choice. In the context of “Mint to the Future,” this alignment with sustainability means that while users are rejuvenating their NFTs, they are also actively participating in a greener and more responsible digital ecosystem. The assurance that each NFT minted isn't draining vast amounts of energy or contributing significantly to carbon emissions is invaluable in today's conscientious digital age. 
**Embracing AI, a tool that will greatly enhance web3** The tech world is currently witnessing an AI explosion, but the actual value can be clouded by mainstream headlines. Headlines aside, there are practical things AI excels at, a major one being generative art. We leveraged OpenAI's DALL-E via a feature that generates unique NFT images in real time. The new image is generated by collecting keywords from the NFTs being recycled and combining them into a prompt that generates a new image. This not only enhances the user experience but also significantly reduces our design costs. It was a win-win! **A Glimpse into the Future** Imagine a world where your once-forgotten NFTs are given a fresh lease of life. With "Mint to the Future," that's precisely what we offer. As our website aptly puts it, "Help Marty McFlow move to a more decentralized, secure, and innovative future by transforming your old NFTs into something dynamic and new." We were excited to create Mint to the Future because it shows how AI will impact the web3 ecosystem while also demonstrating new and creative user onboarding tools. By combining strengths from [Wovn](https://www.wovn.xyz/) & [Indexing Co](https://indexing.co), we were able to design and deploy the app in about a month. We had a great experience learning about and building in the Flow ecosystem. Overall, the Flow ecosystem was the perfect choice for Mint to the Future and we’re excited to follow its continued growth & success! --- title: Accessing Data 3.0: Storage Options url: /articles/accessing-data-3-0-storage-options.md description: A guide to decentralized storage solutions in web3, comparing IPFS, Filecoin, Arweave, and Storj for different use cases. date: 2022-09-29 image: /assets/articles/accessing-data-3-0-storage-options.png author: runninyeti tags: Post --- _This is an entry in our long-running series, “Accessing Data 3.0”, where we talk about the “whats” and the “hows” of working with data in web3. Enjoy!_ There’s an often forgotten question in Data 3.0 - where do we actually put all our large data? That image of your favorite cat, videos from the last family trip, the unpublished book you’re working on - what is “home” for all that data? It’s easy to think “well if it’s web3, then it must be on-chain”, but that’s not always true, nor does it need to be! There’s a whole, growing world of decentralized data that has no tie back to a blockchain. ![](https://www.moneyunder30.com/wp-content/uploads/2021/05/nyan-cat.gif) The simplest explanation is that putting data on-chain is expensive. Blockchains are, well, chains of “blocks”. Each of those blocks has a set of transactions, which in turn can include some amount of data. Each server participating in the network must then store _all_ data for the blocks they help decentralize. For example, in Ethereum the default for many servers is to store the last one year of blocks. Furthermore, all data added to a block gets hashed to secure the given blockchain. The combination of these two requirements leads to limits on the amount of raw data within a block. This in turn creates a competitive effect for “block space”, resulting in web3 users having to pay fees that rise with the total number of users (often referred to as “gas”). Now, it’s important to circle back to that “large” term we used at the beginning. What does it mean to have a large piece of data? Take for instance the data required to reference moving funds from one account to another.
This is often measured in a unit called “bytes” and is plenty small enough to keep on-chain. After all, this is the original use case for blockchains! As you move into, say, a school paper though, you begin measuring data in “kilobytes”, or thousands of bytes. An image often runs into “megabytes” (millions) and videos get well into the “gigabyte” (billions) range. As a web3 user, it’s safe to assume that anything measured in more than bytes is too large. It's either impossible to store on-chain (due to block limits) or it's too expensive to do so. #### Soo, where do we store all those cat photos then? Thankfully, the innovation in Data 3.0 hasn’t left us high and dry. Let’s take a look at a few of the popular solutions for storing “large” data today: #### IPFS The Interplanetary File System is a free, “peer-to-peer” protocol for decentralizing data. IPFS was one of the earliest adopted solutions for storing data in web3 and continues to be a favorite for many. Getting started is easy (they even have a browser extension!) and the broader network improves the more users it has. That reliance on adoption has been both a defining factor and a sort of Achilles' heel for the project. Users are only required to store data that they actually want to use themselves. For instance, there’s likely no \[good\] reason for me wanting to store _your_ family videos, and so I won’t. But! If a meme is going viral and shared via IPFS, then every single viewer of that meme would also be sharing that data. From a practical perspective, this makes IPFS decentralized, but only temporarily so. In short, IPFS provides an easy, decentralized way to share data with others who want it. Explore the desktop application and other ways to get started [here](https://docs.ipfs.tech/install/ipfs-desktop/). Or try out a hosted “pinning” provider like [Pinata](https://www.pinata.cloud/). #### Filecoin Also built by [Protocol Labs](https://protocol.ai/), Filecoin aims to solve the "temporary" nature of IPFS by providing "contract-based" storage. Servers offer their storage capacity to the network and users pay to host their data for a fixed period of time. Fees are determined by the size of the data stored and the length of the “contract”. This storage market is then powered by a dedicated blockchain and currency, [$FIL](https://coinmarketcap.com/currencies/filecoin/). And behind this marketplace, the servers paid to store your data are all doing so via IPFS. That means that adoption of Filecoin is also adoption of IPFS. Check out [web3.storage](https://web3.storage/) or [Fleek](https://fleek.co/storage/) to explore early consumer applications for Filecoin. #### Arweave Much like Filecoin, Arweave has a storage market with a dedicated blockchain and token ([$AR](https://coinmarketcap.com/currencies/arweave/)). Rather than doing fixed-term contracts though, Arweave promises _permanent_ data storage. One upfront fee, storage forever. Arweave accomplishes this permanence by gamifying data storage (details in [the yellow paper](https://t.co/LMxwLjtcVN)). Each server on the network can choose to store whatever data it wants. For instance, they could avoid storing illegal content by censoring what's stored. But those servers are also incentivized to store data that isn’t sufficiently decentralized. In other words, it's worth more to store data that is replicated on only a few servers than data already held by thousands. And over the span of the network, this results in _all_ data always being stored. Check out their [ArDrive](https://app.ardrive.io/) to give it a spin.
(fun fact - this blog is hosted on Arweave via [Mirror.xyz](https://mirror.xyz/)) #### Storj Storj is another competitor in the web3 storage space, but focuses on developers. It boasts full compatibility with [AWS](https://aws.amazon.com/) S3 so most developers can leverage decentralized storage out of the box. The end result is fast, reliable cloud storage that's also decentralized. In general, servers in solutions like Filecoin and Arweave are only rewarded if they store the _full_ chunk of data (e.g. an image). Storj is different. It takes a given chunk of data, encrypts it, and then shares smaller pieces with its network of servers. When a user wants to retrieve data, only 29 of those pieces are required to reconstruct the full chunk of data. In this way, servers are incapable of being aware of the data they’re storing. This in turn allows Storj to control data at a network level, optimizing for speed and privacy along the way. Storj does have a [hosted interface](https://us1.storj.io/signup) for consumers to leverage their network. And of course there's [documentation for developers](https://docs.storj.io/dcs/) to get started as well. ## What should I use? ![](https://media.giphy.com/media/WsNbxuFkLi3IuGI9NU/giphy.gif) Here’s the skinny on when to use different Data 3.0 storage solutions today: 1. IPFS - free, easy to use, and temporary file sharing 2. Filecoin - fixed-length storage for a fee 3. Arweave - permanent storage for an upfront cost 4. Storj - developer-centric alternative to AWS S3 --- title: Indexing Mirror.xyz url: /articles/indexing-mirror-xyz.md description: A technical guide on how to index Mirror.xyz content from Arweave, including code examples for fetching and parsing posts. date: 2022-09-30 image: /assets/articles/indexing-mirror-xyz.png author: runninyeti tags: Announcement --- If you read content online, you’ve probably at least heard of publishing services like [Medium](https://medium.com/) and [Substack](https://substack.com/). These are centralized, web2 companies that make money from views: subscriptions, ads, etc. Thankfully, as we transition into the web3 space we are already seeing some promising alternatives. The largest of these web3 publishers is [Mirror](https://mirror.xyz/). The beauty of protocols like Mirror is that they don’t _own_ any of the data. They still have a login, a clean text editor, and shareable links just like Medium or Substack. But the content that flows through Mirror is entirely decentralized on a network they don’t control, [Arweave](https://www.arweave.org/). We’ve spoken briefly about Arweave [previously](https://mirror.xyz/indexingco.eth/FDyv8i8c15ATs_KIpAtEdeMP20WZ00FfssPYOj3EZRY), but the gist is that any content stored on it is stored _forever_ thanks to unique incentive models built into the network. This has two important ramifications: 1. There are 0 paywalls in Mirror. You can still _choose_ to support individual creators, and Mirror helps facilitate this, but it’s entirely optional. 2. You don’t even have to use Mirror to participate in the broader ecosystem. That second point is what we’re going to focus on in this post. Specifically, because the data is freely available forever, we can choose to use this data in any way we want. For instance, this very blog post is available on [Mirror](https://mirror.xyz/indexingco.eth/iuT8DiYiDTq5lcx1JxOgQ8g9hn9hsnRJZVynFiHzrPk), but it’s also available on our [company’s website](https://www.indexing.co/articles/iuT8DiYiDTq5lcx1JxOgQ8g9hn9hsnRJZVynFiHzrPk).
Any updates to this post are instantly available on both sites because they both use the **exact same data source**, Arweave. Let’s unpack this a bit: 1. Content is written in an editor such as Mirror 2. That content is added to a transaction on Arweave 3. A user visits a related link on _either_ mirror.xyz or indexing.co 4. The site fetches the content from Arweave, formats it, and displays it to the end user 5. Content is shared from creator to reader ![](https://media.giphy.com/media/WoWm8YzFQJg5i/giphy.gif) ## The Nitty Gritty Now that we have an overview of the steps required to load an individual post, let’s look at the technical details for indexing _all_ of a given user’s posts from Mirror. For this, we’re going to focus on using Typescript alongside the `arweave` [package on npm](https://www.npmjs.com/package/arweave). Mirror helps structure the content stored on Arweave with what are known as `tags`. These are pretty much what you might expect: key <> value pairs representing arbitrary strings tied to a piece of content. For instance, these are the tags for our post on [web3 storage options](https://mirror.xyz/indexingco.eth/FDyv8i8c15ATs_KIpAtEdeMP20WZ00FfssPYOj3EZRY): ``` { 'Content-Type': 'application/json', 'App-Name': 'MirrorXYZ', Contributor: '0x0317d91C89396C65De570c6A1A5FF8d5485c58DC', 'Content-Digest': 'B1ytOURSn75aACoOHmVHrV31bl0tL4ffWHEtl4JeGUE', 'Original-Content-Digest': 'FDyv8i8c15ATs_KIpAtEdeMP20WZ00FfssPYOj3EZRY' } ``` For our purposes we’re most interested in the `App-Name` and `Contributor` tags. We’ll use the combination of these two to pull all of the content published on Mirror by our given writer. Alright, time for some code. We’re first defining our `arweave` instance and pointing it at the publicly hosted `arweave.net` provider. If you want to run your own node, check out their docs [here](https://docs.arweave.org/info/mining/mining-guide). ``` import Arweave from "arweave"; const arweave = Arweave.init({ host: "arweave.net", port: 443, protocol: "https", }); ``` Since we’re using Typescript, we can define our `Post` structure. This roughly reflects what we’ll get from Arweave directly with the addition of the `originalDigest` key. That `originalDigest` will be pulled from the `Original-Content-Digest` and is important because that’s what Mirror uses in their URLs (i.e. why you can edit a post without having to share a new link). ``` type Post = { authorship: { contributor: string; }; content: { body: string; timestamp: string; title: string; }; digest: string; originalDigest: string; }; ``` Finally, we get to the meat of this whole shebang. We first query Arweave for the transactions matching our given `Contributor` tag and then fetch the full transaction for each identifier, including its data. Since the current `arweave` package only allows us to search by one tag, we filter by the `App-Name: MirrorXYZ` piece further down after we parse out the `tags`. Now that we’ve filtered down to only those transactions that match our `Contributor` and `App-Name`, we can pull out the `data` and turn it into a `Post`. Mirror adds all of their content as structured JSON strings, so we can readily parse that out and typecast to our `Post` type. Of course, `null` checks and error handling would be welcomed additions as well. 
``` async function getPostsForContributor(address: string): Promise<Post[]> { const arweaveTransactionIds = await arweave.transactions.search( "Contributor", address ); const arweaveTransactions = await Promise.all( arweaveTransactionIds.map((txId) => arweave.transactions.get(txId)) ); const postsByOriginalDigest: Record<string, Post> = {}; for (const transaction of arweaveTransactions) { const tags: Record<string, string> = {}; for (const tag of transaction.tags) { const name = tag.get("name", { decode: true, string: true }); const value = tag.get("value", { decode: true, string: true }); tags[name] = value; } const appName = tags["App-Name"]; if (appName !== "MirrorXYZ") { continue; } const originalDigest = tags["Original-Content-Digest"]; if (postsByOriginalDigest[originalDigest]) { continue; } const rawData = transaction.get("data", { decode: true, string: true }); postsByOriginalDigest[originalDigest] = { ...JSON.parse(rawData), originalDigest, }; } return Object.values(postsByOriginalDigest); } ``` And that’s really all there is to it! You can view all of the code above, together in this [gist](https://gist.github.com/brock-haugen/f67e71a8fc3c27cd9ecb5b3f64bbcff9). It’s worth noting that the current `arweave` package does _not_ support subscriptions. Because of this, we have to regularly check for new Arweave transactions in a manual way. This can be done via polling, or in the case of indexing.co, simply at request time. Lastly, if you want to render a given post, the `Post.content.body` parameter is stored as markdown and can be roughly converted to HTML using a package like [markdown-it](https://www.npmjs.com/package/markdown-it). ``` import md from "markdown-it"; function PostView(post: Post) { return ( <div dangerouslySetInnerHTML={{ __html: md().render(post.content.body) }} />
); } ``` Happy indexing! ![](https://media.giphy.com/media/upg0i1m4DLe5q/giphy.gif) --- title: Control Your Data; a Disco and Indexing Co Case Study url: /articles/disco-control-your-data.md description: How Disco leveraged Indexing Co's Just-In-Time Indexing to power their identity protocol, enabling users to control their multi-chain data. date: 2024-10-07 author: Dennis Verstappen image: /assets/articles/disco-control-your-data.png tags: Case Study --- Crypto has the unique characteristic of putting the user back in control of their own data, which enables the opportunity for data to be made available wherever the user goes. This context - in which the subject of data has the most control over its data - is often described as Self-Sovereign Identity. The user determines their on-chain identity, controlling which data and assets can be provided to other parties, alongside off-chain data created throughout their digital journeys. Managing, and even viewing, all this data from various chains and digital platforms can be a tedious task. Identity protocols like Disco have the vision that users can consolidate their online presence into a logically centralized, physically decentralized data package, controlled with their existing wallet. Data indexing and methodology are critical to this identity layer. Most businesses in crypto struggle with accessing and organizing complex multi-chain data, spending valuable time and resources on building indexers, data engineering and constructing data pipelines. The Indexing Company believes crypto businesses can provide more value to users by focusing on their expertise rather than worrying about indexing and its associated challenges. In our collaboration with Disco, they could fully focus on building the future of seamless access, instant rewards and all of the other fun parts of the on-chain world enabled by our unique identities. With Disco, over a million users had a product that gave them control over their data and privacy when they preferred it. To expand the utility for users to express their online identity, Disco was acquired by [Privado.ID](https://www.privado.id/). The complementary teams joined forces to build an identity network providing next level utility for users, enterprises and governments. The Indexing Company is happy to support this newly formed team in their future ventures. #### The Setup The process started by understanding which types of data are the most important to users. For example, Disco used specific data from chains like Ethereum, Arbitrum, Optimism, & Base, such as significant tokens and NFTs. The dataset scaled to accommodate popular queries over time. The Indexing Company’s unique data pipelines can fetch data from any chain while also making it possible to fit this data into a single unified data schema. Such transformations helped Disco with building a data model from which they easily expanded the number of chains or on-chain assets indexed. The data was directly streamed from the RPCs to Disco’s database, reducing latency and ensuring data quality for both Disco and the end user. The Indexing Company gives projects working with the data pipelines ownership over their data. Since the public RPC data was streamed directly to Disco’s backend, all data was fully composable for application in products and redistribution in numerous forms. #### Just In Time Indexing Disco's user base continuously expanded as more crypto users recognized the value of their own data.
This growth presented challenges in data retrieval, as such a system must integrate historical data for new users while simultaneously accommodating incoming data. Historically, projects had to index entire chains, incurring high processing and storage costs, despite only a small fraction of the data being relevant to their users. Drawing from experience, The Indexing Company recognized the impact of this industry-wide problem and developed Just In Time Indexing (JITI). JITI enables getting both historical data and real-time data whenever the data is needed. For Disco, this involved invoking the indexer with a wallet address, which triggered JITI to perform a targeted backfill of all historical transactions. It also ensures data remains current by monitoring new transactions for those wallet addresses in real time. All chain and top token/NFT holdings were served to current and new users whenever they wanted to use Disco. At the peak of our collaboration, Indexing Co. processed data from more than 1.2 million wallets for Disco using JITI. ![](/assets/articles/disco-jiti.jpg) #### Contextual data Without context, on-chain data is hard to access, interpret and use. On-chain data consists mostly of numbers and contract addresses which do not tell you basic information on the protocol or user. For users, this is a positive feature creating pseudo-anonymity, but for interpreting data around which protocols are being used, it creates a problem for analysts who are trying to create valuable insights for their business and users. For Disco, understanding the platforms and actions users engaged with enriched the context in their Data Backpack. Disco users had the option to carry this valuable information with them to new platforms and to expose which data they wanted. The Indexing Company is continuously grabbing labels from various data sources. First, various data platforms like Etherscan and Dune provide labels. Second, on-chain logic can be used to generate labels. For example token-meta data and factory contracts can be indexed to automatically label Uniswap pools. When this data is matched with on-chain activity we can see if a user is a Uniswap user and which types of tokens they swap. This logic can be applied to a variety of (DeFi) protocols to get a better understanding of transactions. Through an iterative process where analysts and developers from both Disco and The Indexing Company continued to collaborate, more labels and contextual information were gathered over time. #### Control your data Disco and The Indexing Company found each other in the value that you should control your data. Disco users benefitted from taking control over their data through the Data Backpack, while the raw blockchain data was fully owned for free analysis by Disco themselves. The Indexing Company is happy to support builders like Disco in their journey to bring a better user experience in crypto. Our work enables businesses to focus on what actually matters: their product. We wish the Disco and Privado.ID team good luck in the next iteration of building their vision: a complete multi-chain, multi-device identity protocol for every user. [Get in touch](mailto:hello@indexing.co) with us to see how we can help build your data infrastructure so you can focus on building user-centric products. Your data, your way. 
--- Read the original post [here](https://theindexingcompany.substack.com/p/control-your-data-a-disco-and-the) --- title: Introducing: Mirror Mirror url: /articles/introducing-mirror-mirror.md description: A free search engine for Mirror.xyz content, indexing all posts stored on Arweave to enable discovery across the decentralized publishing platform. date: 2022-11-07 image: /assets/articles/introducing-mirror-mirror.png author: runninyeti.eth tags: Announcement --- Hot off the presses - we’re happy to present the latest free service from The Indexing Company, [Mirror Mirror](https://www.mirrormirror.page/) 🎊 ![searching the important topics of today](https://images.mirror-media.xyz/publication-images/YdB8DNV6ObJNpbKrg_TJ_.png?height=2474&width=3824) In short, this is a search engine for [Mirror.xyz](https://mirror.xyz/). For those that don’t know, Mirror is a web3 publishing platform akin to [Medium](https://medium.com/) in the web2 world. Unlike Medium though, Mirror is merely an interface to help writers put their content out into Data 3.0. That is, everything written on Mirror is ultimately stored on [Arweave](https://www.arweave.org/). _(For those curious, we spoke in more depth on Data 3.0 storage options like Arweave [here](https://blog.indexing.co/articles/FDyv8i8c15ATs_KIpAtEdeMP20WZ00FfssPYOj3EZRY))_ This intentional decentralization of data has important implications: 1. Writers _own_ their own data - only they can add, modify, and sign their data 2. Mirror does _not_ control the data - theoretically someone else could come along and build a competitor to Mirror, leveraging the exact same underlying data source 3. All of the posts from Mirror are publicly available forever, thanks to Arweave And we at Indexing Co have been able to leverage the 3rd point there to successfully index every Mirror post written - i.e. creating a “mirror” of Mirror if you will 😎 And now that we’ve got a running index of posts ([here’s the skinny on how that’s done](https://blog.indexing.co/articles/FDyv8i8c15ATs_KIpAtEdeMP20WZ00FfssPYOj3EZRY)), we can expose them via an API and ultimately create the simple UI that you see today. Let us know what you think! --- title: ENS: A Practical Guide url: /articles/ens-a-practical-guide.md description: A comprehensive guide to Ethereum Name Service, covering how to buy, set up, and use ENS names for both users and developers. date: 2022-09-23 image: /assets/articles/ens-a-practical-guide.png author: runninyeti tags: Guide --- So you’ve made it to this site and this particular blog post. Great! But do you know _how_ you actually got here? _(For those wanting to dive in, feel free to scroll past this to “The DIY Section” below)_ The internet today (web2) works, and more importantly gained adoption, in part thanks to the magical world of the Domain Name System (DNS for short). At the highest level, DNS effectively allows users of web2 to say “I want to visit [longdogechallenge.com](https://longdogechallenge.com/)” and reliably be taken to the content (website) that lives there. Let’s break down how that works a bit: 1. A user enters the domain name, `longdogechallenge.com`, in their browser’s address bar 2. A _DNS lookup_ occurs to resolve that domain name to a server’s IP address (an Internet Protocol address is a way to uniquely identify every device on the internet) 3. The browser is given the IP address for the requested domain name and asks the server behind that IP address for content 4.
The server returns the content and the browser renders that to the end user (aka the resulting website) _(The above is admittedly a major simplification of DNS, and for those wishing to dive deeper, [Cloudflare has a great technical write up](https://www.cloudflare.com/learning/dns/what-is-dns/))_ For our purposes though, the general idea is that users of web2 can use DNS to get content behind an IP address while only knowing a domain name. This abstraction is enormously powerful. For starters, IP addresses are _not_ easy to remember - the “common” IPv4 format looks something like `192.168.1.1` and the newer IPv6 format, `2001:0db8:85a3:0000:0000:8a2e:0370:7334`, certainly isn’t any easier. IP addresses are designed for uniquely identifying the billions of internet devices and facilitating communication between _them_; not with humans. And of equal importance, because humans are accessing content via domain names, the server providing that content can be swapped out simply by changing which IP address a domain name points to. #### Okay, but what does this have to do with web3? Allowing mere humans to remember just `google.com` instead of `142.250.113.102` was a massive accessibility win for web2. The Ethereum Name Service, ENS, provides a similar system for the Ethereum ecosystem. And while Ethereum is of course _not_ equivalent to all of web3, it does represent a large portion of the current web3 user base in some capacity (even if you aren’t directly using Ethereum, many protocols and services are built atop the same basic technologies - more on that another time). ENS, in short, allows a web3 user to say “I want to send funds to alice.eth” without actually knowing the wallet address behind `alice.eth`. ## The DIY Section Alright, we’ve covered what ENS is and why you should have one. Now let’s dig into how to acquire and use ENS; for both web3 users and developers. #### For Users Go buy an ENS! Seriously, go right now and buy one if you haven’t already. Some steps to get you started: **Set up your wallet** If you haven’t already, make sure you have a web3 wallet. [MetaMask](https://metamask.io/) is a great place to get started if you haven’t already joined web3. From there, make sure you have “some” Ethereum in your wallet. A quarter (0.25) ETH should be plenty to get started (unless you’re trying to get a [highly prized ENS](https://espressoinsight.com/2022/05/15/most-expensive-ens-sales-ever/)). If you don’t have any ETH yet, look towards exchanges like [Coinbase](https://www.coinbase.com/) to get you started. **Buy an ENS** Visit a site like [https://app.ens.domains](https://app.ens.domains/) to look up names you’re interested in. There’s no wrong answer here - it’s a lot like choosing an email or a Twitter handle. If your name is available, then great! Follow the registration process (yes, this will involve 2 transactions) and secure your shiny new ENS. If the name you like is already registered, then you have a few options for purchasing it on the “secondary markets”. ENS, at least in part, is officially a standard NFT collection. That means you can use your favorite NFT marketplaces to trade registered ENS items. Some options in no particular order: - - - **Tie your ENS to your wallet address** This is the important piece. Now that you own your ENS, you need to make sure that it’s set up to point to your wallet address. There are two different pieces to this.
Go back to [app.ens.domains](https://app.ens.domains/) once again and click “My Account” in the upper right-hand corner and then do the following: **Define the ENS => wallet address lookup** -- Select the ENS you just purchased and set the “Record” for “ETH” address to your wallet address. During this step, feel free to set as many of the other text records you see there as you want - this is all part of your new web3 profile. ![Setting ENS records](https://images.mirror-media.xyz/publication-images/ddserphWtEKHUx3K9ZIMs.png?height=2062&width=3150) **Define the wallet address => ENS reverse lookup** -- Go back to your “My Account” page and you’ll have an option to select your “Primary ENS Name”. Submit the corresponding transaction to make it permanent 🔥 ![Setting a Primary ENS Name](https://images.mirror-media.xyz/publication-images/7zElgz3C7-UA7Er-8X4UH.png?height=1690&width=3150) _NOTE: Both of the above steps will also require submitting Ethereum transactions and paying a small amount of ETH in “gas”. This is to pay the network to store the new data you just added._ And just like that, you’ll now have an official, named identity in web3 🎉 A simple way you can try it out is by looking up your name or wallet address on [Etherscan](https://etherscan.io/) and seeing that the two are linked in the results. **BONUS: Add an NFT as your avatar** Thanks to the wonderful world of web3, you can also link pieces of decentralized data together. Specifically in this case, you can say that the photo for your shiny new ENS name should be another NFT that you own. For instance, the avatar field for runninyeti.eth is set to `` `eip155:1/erc1155:0x495f947276749ce646f68ac8c248420045cb7b5e/87433597745683365960201176492736871205018189775129059226749288698845216112641` `` which points to this lovely image: ![](https://lh3.googleusercontent.com/6Jd7V99bdOdRlUOQ-9V44oYfoRVGU3OMnsbzG6uxWQ1nWX2naO-OheqJwEFJBb9FbVqUziaebWMm6cq9iq4GUN5zXZZJtUrHMP8vkA) For a good overview of how to do this, check out [this post](https://medium.com/the-ethereum-name-service/step-by-step-guide-to-setting-an-nft-as-your-ens-profile-avatar-3562d39567fc) by the ENS team. #### For Developers Also go buy an ENS! For yourself, for your project, whatever it might be, just start participating. See the “For Users” section above and follow along. Now that that’s done, here are some how-tos for adding ENS functionality to your projects. We’re going to focus on using [web3.js](https://www.npmjs.com/package/web3) since it’s usable on both frontend and backend codebases, but these same principles can be used with any language. We’re also going to focus on some of the low-hanging fruit based on what is available to the developer community _today_, but we encourage everyone to go far beyond this (e.g. “ENS Profiles” or [deploying ENS on a private chain](https://docs.ens.domains/deploying-ens-on-a-private-chain)). There are also existing packages that abstract away working with ENS (mainly JavaScript based, like [ensjs](https://www.npmjs.com/package/@ensdomains/ensjs)), but for this post we’ll be looking at how we can do it ourselves and handle all data directly. **Look up the current owner of an ENS** _tl;dr - ask the NFT contract for ENS who the current owner is_ We’ll start with an easy one. To grab the address of the current owner of a given ENS name: 1. Define a web3.js contract instance with the `ownerOf` [ABI input](https://docs.openzeppelin.com/contracts/2.x/api/token/erc721#IERC721-ownerOf-uint256-) 2. Convert the `name` to a `tokenId` 3.
Ask the contract who owns that `tokenId` ``` async function getOwnerAddressForENSName(name) { // do a basic null check if (!name) { return null; } // define a new contract instance for ENS NFTs const nftContract = new web3.eth.Contract( [ { inputs: [{ internalType: 'uint256', name: 'tokenId', type: 'uint256' }], name: 'ownerOf', outputs: [{ internalType: 'address', name: '', type: 'address' }], stateMutability: 'view', type: 'function', }, ], ENS_NFT_ADDRESS ); // convert the name to a tokenId const tokenId = new BigNumber(Web3.utils.sha3(name.replace(/\.eth$/, ''))).toString(); // ask the contract for the current owner and return the result return nftContract.methods .ownerOf(tokenId) .call() .catch(() => null); } ``` **Swap ENS names for wallet addresses** _tl;dr - ask the ENS resolver for the address of a given ENS name_ Let’s kick things up a notch. In order to get the resolved address for a given ENS name, we’ll need a few things: 1. First we want to define a contract instance for the ENS resolver with the `addr` [method](https://docs.ens.domains/ens-improvement-proposals/ensip-9-multichain-address-resolution). The current contract address is `0x4976fb03C32e5B8cfe2b6cCB31c09Ba78EBaBa41` 2. We then leverage the `namehash`[ algorithm](https://docs.ens.domains/ens-improvement-proposals/ensip-1-ens#namehash-algorithm) to convert the given `name` to a sha3 `node` 3. Ask the contract for the current `addr` value of the given `node` ``` async function getAddressFromENSName(name) { // some basic validity checks // @NOTE: non .eth names are valid, but may require a different resolver contract address if (!name || !name.endsWith('.eth')) { return null; } // define a new contract instance for our ENS resolver const resolverContract = new web3.eth.Contract( [ { constant: true, inputs: [{ internalType: 'bytes32', name: 'node', type: 'bytes32' }], name: 'addr', outputs: [{ internalType: 'address', name: '', type: 'address' }], payable: false, stateMutability: 'view', type: 'function', }, ], ENS_RESOLVER_ADDRESS ); // convert our name to a node hash const node = namehash(name); // ask the contract for the current address and return the result return resolverContract.methods .addr(node) .call() .catch(() => null); } ``` **Swap wallet addresses for ENS names** _tl;dr - ask the ENS reverse resolver for the ENS name of a given address; double check against the name => address method above_ A final example here, getting the ENS name for a given wallet address is almost identical to the other two methods above: 1. Again, start by defining a new contract instance for the ENS reverse resolver with the `getNames` [method](https://docs.ens.domains/ens-improvement-proposals/ensip-3-reverse-resolution). The current contract address is `0x3671aE578E63FdF66ad4F3E12CC0c0d71Ac7510C` 2. Ask the contract for the name that corresponds to our given address 3. Double check that the name => address mapping matches. 
While this step isn’t _necessary_, it’s best practice to ensure no one is manipulating the reverse resolver - more info [from ENS here](https://docs.ens.domains/dapp-developer-guide/resolving-names#reverse-resolution) ``` async function getENSNameFromAddress(address) { // some basic validity checks if (!address || address.length !== 42) { return null; } // make sure our address is the checksum version address = Web3.utils.toChecksumAddress(address); // define a new contract instance for our ENS reverse resolver const reverseResolverContract = new web3.eth.Contract( [ { inputs: [{ internalType: 'address[]', name: 'addresses', type: 'address[]' }], name: 'getNames', outputs: [{ internalType: 'string[]', name: 'r', type: 'string[]' }], stateMutability: 'view', type: 'function', }, ], ENS_REVERSE_RESOLVER_ADDRESS ); // ask the contract for the name that maps from the address // @NOTE: you can pass multiple addresses in a single call with this method const [name] = await reverseResolverContract.methods .getNames([address]) .call() .catch(() => []); // @NOTE: ideally we double check that the reverse resolver is correct // this can be done by comparing against the name => address mapping if (address !== (await getAddressFromENSName(name))) { return null; } return name; } ``` #### Wrapping up There you have it, that’s a quick run down on working with ENS! Both as a web3 user and as a developer in the space. For deeper dives, definitely check out the official [ENS documentation](https://docs.ens.domains/dapp-developer-guide/ens-enabling-your-dapp) and feel free to explore the code from this post on [Github](https://gist.github.com/brock-haugen/e2fe9920b9d2069912b77fe5f0826733). And please reach out if you find this post interesting or just want to chat more about ENS, indexing, and Data 3.0. ![](https://media.giphy.com/media/KctrWMQ7u9D2du0YmD/giphy.gif) --- title: The Road to Data 3.0 url: /articles/the-road-to-data-3-0.md description: Exploring the challenges of accessing decentralized data in web3 and why Data 3.0 infrastructure is essential for the future of the ecosystem. date: 2022-08-31 image: /assets/articles/the-road-to-data-3-0.png author: runninyeti tags: Post --- ## Welcome! If you are reading this, congratulations on being a part of the next generation of innovation we like to call “Web 3.0” (or simply “web3”). You could be an artist looking to leverage the decentralized economy via an NFT launch, a builder working to further on-chain protocols with smart contracts, a business looking to leverage blockchain technologies to level up your internal logistics, or even just an innocent bystander looking to learn more about _the next big thing_ - whatever the reason, we are glad you are here! As the [2022 State of Crypto Report by a16z](https://a16zcrypto.com/state-of-crypto-report-a16z-2022/) summarizes, the adoption of web3 is growing, here to stay, and is empowering the next wave of creators. And while all of that is extremely exciting, the road to web2 scale (i.e. billions of users) is far from paved. For instance, the estimated 7-50 million Ethereum users have all had to learn about concepts like private keys, wallets, gas fees … and that’s just to get started in the ecosystem. Creators in web3 don’t fair any better. Building Decentralized Applications (dApps) often requires a working knowledge of blockchain data types, smart contract deployment and management strategies (hint: it’s not like traditional software development), gas fees, tokenomics, pseudo-anonymous “users", etc, etc. 
There is a silver lining for creators and users alike though: the growth of web3 can largely be attributed to trailblazing individuals and companies \*continuing\* to work on solving everything from user identity to transaction throughput of the underlying networks. ![](https://media.giphy.com/media/l0MYGb1LuZ3n7dRnO/giphy.gif) ## Data 3.0 One often overlooked aspect of the entire ecosystem though is the accessibility of data itself. As builders we’ve grown \[relatively\] accustomed to sending data out into the ether (🥁) without knowing how we will get it back in any sort of usable form. This leads to reliance on centralized sources to figure that out on our behalf, and charge us for that service - at least partially defeating the original purpose of decentralizing the data to begin with. To truly enable the future of decentralized data we must rethink our approach to Data 3.0. #### An Example Let’s take a look at one of the most common examples in the space today: tracking NFT project owners. NFT ([Non-Fungible Token](https://www.theverge.com/22310188/nft-explainer-what-is-blockchain-crypto-art-faq)) projects have significantly contributed to the rise of adoption in web3 over the last couple of years. [OpenSea](https://opensea.io/) for instance, the largest NFT marketplace, has facilitated up to $4.8 billion USD in sales _in a single month_ at the height of the last crypto cycle ([source](https://dune.com/rchen8/opensea)). What makes NFT projects particularly interesting for our Data 3.0 example though is their relative ease to develop (relative to web3 in general that is) combined with the repeated centralization these projects utilize. At their base layer, NFTs are smart contracts (permanent, unchangeable software) on a given blockchain (often Ethereum) that enable users to own, and transfer, a fixed number of items. NFT project creators can write and deploy their smart contracts, distribute the initial set of items (whether through giving them away or letting web3 users buy them from the smart contract directly), and then leverage a marketplace like OpenSea to enable trades between web3 users. An overly simplified lifecycle of an NFT project may go something like this: 1. Creator A launches the highly successful NFT Project X 2. Thousands of web3 users trade NFT Project X items as owners 3. … time passes … 4. Creator A now wants to give each of the _current_ owners of Project X a gift for being such a great community 5. Creator A realizes they can’t readily get a list of current owners from the blockchain directly and is therefore presented with the following choices: 1. Scrap the gift idea 2. Write a script to check the current owner of every item against the blockchain. This requires web3 development knowledge _and_ would only generate point in time snapshots making it difficult to keep up with items that may be rapidly switching hands. 3. Learn how to build an _indexer_ to crawl and subscribe to updates from the blockchain, replaying every trade sequentially, to determine the current owner of each item. This solves the problem, but requires even more web3 development knowhow and the ongoing maintenance of the indexer itself. 4. Use a readily available API from a centralized source (that has built their own indexer) to provide this data. This is by far the easiest solution, but requires reliance on a 3rd party provider that may or may not be generally reliable (looking at you OpenSea…) and often incurs a service cost. 6. Creator A chooses a 3rd party API provider 7. 
Owners of NFT Project X get their gifts and everyone is happy #### What’s the catch? This all sounds fine and dandy right? And it mostly is. Generally the 3rd party providers can be trusted to provide real time, consistent data. But what happens when that 3rd party isn’t reliable or changes the way it serves that data? Or more importantly, what happens when Creator A has a use case no longer supported by the standard offering (e.g. they want to only give gifts to owners that have held their NFTs more than 90 days)? And all of this is just the tip of the iceberg. While providers are rapidly appearing to fill the holes in Data 3.0, most of them are doing so in use case specific ways; [OpenSea](https://opensea.io/) for NFT data, [Alchemy](https://www.alchemy.com/supernode) for raw blockchain data, etc. [Ethereum’s developer docs](https://ethereum.org/en/developers/docs/) - which are honestly great - don’t even mention “indexing” except to point at a 3rd party provider, [The Graph](https://thegraph.com/). And The Graph, which describes itself as “an indexing protocol for querying networks like Ethereum and IPFS”, has limited support for customizations and forces participation in something akin to an indexing marketplace. ![](https://images.mirror-media.xyz/publication-images/uUF4ZsAYbivrv3uPml6FC.png?height=1295&width=2128) There’s a missing link in the Data 3.0 lifecycle and the impacts are only just starting to be felt. Web3 creators should be empowered to own, leverage, and define their data as they see fit. Join us in shaping the future of Data 3.0. --- title: Indexing EVM Chains with The Neighborhood url: /articles/indexing-evm-chains-with-the-neighborhood.md description: A guide to indexing EVM-compatible chains with The Neighborhood, enabling unified data pipelines across Ethereum, Base, Arbitrum, and more. date: 2025-09-02 author: Dennis Verstappen tags: Guide --- Ethereum and its expanding family of EVM-compatible chains have become the backbone of Web3. From Ethereum Mainnet to Base, Arbitrum, Optimism, Polygon, Avalanche, and many others, each chain carries its own developer community and data-rich ecosystem. For product teams and data engineers, this growth means one thing: **how do you reliably index and unify data across all these chains without the burden of maintaining custom infrastructure for each one?** The Indexing Company built **The Neighborhood**, a distributed compute network for high-performance indexing. It supports all major EVM chains today and makes it possible to create pipelines that span multiple chains, combining real-time streams and historical backfills into one coherent dataset. ## Why Indexing EVM Chains Is Hard EVM chains share the same virtual machine, but in practice they differ in RPC implementations, throughput, and ecosystem activity. Data engineers often face: Fragmented datasets across multiple RPCs and providers - High costs of running custom indexers for every chain - Rigid APIs or subgraphs that require reindexing when contracts change - Latency issues for applications that need real-time data The Neighborhood solves this by ingesting raw block data from any EVM RPC, applying programmable JavaScript transformations, and streaming the results directly into databases, warehouses, or webhooks. Developers stay in full control of the schema and transformations while skipping the overhead of building their own infrastructure. ## EVM Chains Supported The Neighborhood is live across all major EVM chains. 
The full, always-updated list of supported networks can be found at: [docs.indexing.co/networks](https://docs.indexing.co/networks). Some of the most popular chains include: **Ethereum, Base, Arbitrum, Optimism, Polygon, Avalanche, BNB Chain, Linea, ZkSync, and Scroll**. This breadth of coverage allows teams to unify data across the EVM ecosystem with a single pipeline approach. And while The Neighborhood also supports non-EVM ecosystems such as Solana, Aptos, and Sui, we’ll cover those in a separate article. ## Examples of EVM Data You Can Index Here are some practical pipelines developers build with The Neighborhood: - Token transfers: ERC20 transfers across Ethereum, Base, Optimism, and others for balance tracking and portfolio tools. - DEX activity: Swaps and liquidity events from Uniswap, Curve, Balancer, and chain-native DEXs. - Lending and borrowing: Events from Aave, Compound, Morpho, and other money markets. - NFT activity: Transfers, mints, and marketplace sales across EVM chains. - Wallet analytics: Transaction history, token balances, and activity classification for specific addresses. - Custom contracts: Protocol-specific contracts on any EVM chain, decoded into your own data model. ## Low-Latency and Local Indexing for High-Performance Chains Not all EVM chains are equal when it comes to speed. New high-performance networks like **RISE, MegaETH, and Base** are pushing the limits of block production and transaction throughput. For these chains, latency becomes critical: traders, DeFi protocols, and real-time dApps cannot afford multi-second delays. The Neighborhood addresses this with **local indexing**. Nodes can run close to validators or inside your own infrastructure, processing data directly at the source. This setup provides: - Sub-second latency for real-time trading and MEV strategies - Local control over uptime and data quality, reducing reliance on third-party APIs - Programmable pipelines to decode events and normalize them for immediate downstream use For developers building on RISE or MegaETH, where performance is the differentiator, local indexing ensures your data pipelines keep up with the chain itself. On Base, where scale and consumer apps drive adoption, local indexing helps wallets and DeFi platforms deliver faster UX without sacrificing accuracy. ## Benefits for Builders **Programmable pipelines** Customize exactly what you index and transform, without rigid schemas. **Cross-chain unification** Combine Ethereum Mainnet activity with Base, Optimism, Arbitrum, and more into one dataset. **Real-time and historical coverage** Stream new blocks in sub-second latency while also backfilling history from genesis. **Cost and efficiency** Leverage a horizontally scalable compute network that processes workloads faster and cheaper than centralized cloud setups. **Local deployment options** Run Neighborhood nodes alongside validators or within your own infra for maximum control and performance. ## Why This Matters for the EVM Ecosystem As Ethereum expands through its rollup-centric roadmap and more L2s and sidechains launch, indexing becomes the hidden infrastructure problem that every product team eventually faces. Building and maintaining your own indexers is costly and brittle. The Neighborhood provides a unified solution that scales with the ecosystem and lets teams focus on product, not pipelines. ## Getting Started The Neighborhood supports all major EVMs today and can onboard new ones rapidly. 
Developers can configure pipelines through our console and APIs, stream data into Postgres, BigQuery, Kafka, or webhooks, and unify analytics across chains. For product managers, CTOs, and data engineers building on EVM chains, The Neighborhood is the fastest way to go from raw chain data to production-ready pipelines. _Explore our documentation at [docs.indexing.co](https://docs.indexing.co) or contact us at [hello@indexing.co](mailto:hello@indexing.co) to start indexing your EVM data today._ --- title: Introducing: The ENS Profile API url: /articles/introducing-the-ens-profile-api.md description: A free, public GraphQL API for accessing ENS profiles, including addresses, text records, and content hashes without signup or tracking. date: 2022-12-28 image: /assets/articles/introducing-the-ens-profiles-api.png author: runninyeti.eth tags: Announcement --- Here at Indexing Co we’ve been spending the holiday season onboarding our first customers and getting our infrastructure into production 🎉 And as a holiday gift for web3 builders, we’re releasing our ENS Profile API to the public. No sign up required, no tracking, and completely free. Seriously! This is the same API that powers our ENS Profile search tool, [What’s my name again](https://www.whatsmynameagain.xyz), and is backed by our real-time indexing engine. You can access it via GraphQL at `https://query.indexing.co/graphql` with the following schema: ``` type ENSProfileAddresses { address: String coinType: Int } type ENSProfileAttributes { textKey: String textValue: String } type ENSProfile { addresses: [ENSProfileAddresses!] attributes: [ENSProfileAttributes!] contenthash: String name: String node: String owner: String tokenId: String } input ENSProfileFilter { name: String node: String owner: String textValue: String tokenId: String } type Query { ensProfiles(filters: ENSProfileFilter): [ENSProfile!]! } ``` For example, to grab the current profile for `vitalik.eth` you could simply do: ``` query { ensProfiles( filters: { name: "vitalik.eth" } ) { addresses { address coinType } attributes { textKey textValue } contenthash name owner } } ``` Which would return: ``` { "data": { "ensProfiles": [ { "addresses": [ { "address": "0xd8da6bf26964af9d7eed9e03e53415d37aa96045", "coinType": 60 } ], "attributes": [ { "textKey": "avatar", "textValue": "eip155:1/erc1155:0xb32979486938aa9694bfc898f35dbed459f44424/10063" }, { "textKey": "url", "textValue": "https://vitalik.ca" } ], "contenthash": "0xe3010170122081e99109634060bae2c1e3f359cda33b2232152b0e010baf6f592a39ca228850", "name": "vitalik.eth", "owner": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045" } ] } } ``` --- Give it a try and let us know how it goes! And if you find yourself needing anything extra / different, don’t hesitate to reach out - we’re always looking for ways to improve our offerings and empower the builders in web3. --- title: Accessing Data 3.0: Indexing 101 url: /articles/accessing-data-3-0-indexing-101.md description: An introduction to indexing in web3, explaining what indexing is, why it matters, and how it enables access to decentralized data. date: 2022-09-14 image: /assets/articles/accessing-data-3-0-indexing-101.png author: runninyeti tags: Post --- _This is an entry in our long running series, “Accessing Data 3.0”, where we talk about the “whats” and the “hows” of working with data in web3. Enjoy!_ Remember libraries? The walls of books and the fearless librarians somehow always knowing exactly where everything is. 
Well, two things: 1) libraries still exist, 2) those libraries are each _indexed_. Librarians around the world categorize all of the books under their purview into what are known as a “[library catalogs](https://en.wikipedia.org/wiki/Library_catalog)”. These catalogs serve as a means of quickly finding books by given keywords: genres, authors, titles, etc. The internet works in much the same way. There are [billions of websites](https://www.internetlivestats.com/total-number-of-websites/) on the internet (our books in the example above) and each contains some amount of content. In order to find anything online we ask our almighty catalogs for direction - we “Google” a question or search someone’s name on Facebook. And for any of that to be possible, our trusted librarians (Google, Facebook, etc) had to first sort through the sea of content, categorize it, and then build intelligent indexes (catalogs). Now when you [search “weather today”](https://www.google.com/search?q=weather+today), Google is aware of sites that are relevant to the keyword “weather” and presents those. Of course, Google is also aware of your physical location, today’s date, your previous search history, and which website is paying the most to be matched on the keyword “weather” … but we’ll leave the darker details of the indexing industry for another day. The point is, **indexing is everywhere** - a book’s table of contents, your phone’s set of contacts, the grocery list pinned to the fridge … you get the gist. Any time a given data set is too large to be easily consumed, indexing of some sort is employed to aid in its digestion. Let’s refocus with some definitions: 1. [Data](https://www.dictionary.com/browse/data): bits of information 2. [Decentralization](https://en.wikipedia.org/wiki/Decentralization): the distribution of control and ownership away from a central authority 3. [Index](https://www.dictionary.com/browse/index): a catalog to help find data more quickly 4. [Web3](https://www.dictionary.com/e/web3/): the decentralized web; empowering individuals to _own_ their data From the above we can surmise that “**indexing**”, as it relates to web3, **is the act of cataloging decentralized information**. Simple as that. Generally speaking, indexing, in all scenarios, is done to make data more easily searched, and therefore to make that data more accessible. When the data itself is _decentralized_ though, it opens the door to entirely new models of indexing. Take for instance the basic flow of indexed data in web2: ![Web2 Data Indexing](https://images.mirror-media.xyz/publication-images/XKzIQY6AIgSau6X4CZCew.png?height=429&width=1302) In the current, web2 world, centralized authorities **control** the flow of data. They are responsible for discovering, aggregating, indexing, and ultimately serving data. When a user performs a “search” for instance, Google chooses which information is relevant and provides it back to the end user. Historically this has been “okay”, but when those centralized authorities start to [drop their “don’t be evil” mottos](https://gizmodo.com/google-removes-nearly-all-mentions-of-dont-be-evil-from-1826153393), it’s worth pausing to rethink this centralized model. Enter the world of web3 and decentralized data: ![Web3 Data Indexing](https://images.mirror-media.xyz/publication-images/mxrT-BhVsto3TBcKTakNA.png?height=959&width=1790) There’s two important pieces to call out in this web3 scenario: 1. Users are adding their data directly to decentralized networks (Ethereum, IPFS, Arweave, etc) 2. 
Because these networks are decentralized, _anybody_ can go through and index the data That second point has some significant ramifications. For starters, that means that the individual user, whether that’s a single human or a company, can ultimately index and access their own data directly from the network; no middlemen deciding what information is “right”. Furthermore, this model doesn’t stop centralized authorities from _also_ indexing that data and providing it to users - and that’s also good! By enabling open data access, decentralized networks effectively create an incentive strategy for truly providing what’s best for the _users._ Because the centralized authorities no longer control the influx of data, they must cater to the needs of their users. Otherwise, a new competitor will come along to meet those needs. And, the best part of all of this is, that new competitor could simply be the users themselves. Although this access isn’t always _simple_ today in many decentralized networks, the barriers to entry are lowering and the potential continues to increase. For those interested in following along, this series on “Accessing Data 3.0” will dive deeper into the various aspects of data in web3 and how we can all start participating in it. --- title: Web3 Data is NOT the Problem url: /articles/web3-data-is-not-the-problem.md description: An argument that web3's data accessibility challenges stem from infrastructure problems, not the data itself, and how composable infrastructure solves this. date: 2023-02-09 image: /assets/articles/web3-data-is-not-the-problem.png author: Stephen King tags: Post --- **Web3 Data is NOT the Problem** ## **An In-depth Look into the World of Web3 Data** ![](https://images.mirror-media.xyz/publication-images/V79JhhmfFQPKa6MDawfAx.png?height=512&width=512) Welcome to the forefront of the digital revolution. In the world of web3, with its vast data, blockchain networks, protocols, and tokens, accessing and understanding information can be challenging. Developers often build custom solutions due to the lack of accessible options. This piece addresses the infrastructure problem underlying web3 data accessibility, exploring strategies for transformative success. Let's dive in. **The Challenges of Accessing Web3 Data** The world of web3 data is vast and ever-growing, but it can be incredibly frustrating to access. With all the different blockchains, protocols, and tokens, navigating and understanding this complex data ecosystem requires an upfront investment of time, patience, and money. The data is raw, unindexed, and brutal to merge with third-party data sets. Dapp providers often find themselves building custom tools over leveraging open source and paid solutions. The current attempts to address these issues are futile as they focus on treating the symptom instead of the disease. #### **Identifying the Real Problem** The data accessibility problem in web3 is not a _data_ problem, it’s an _infrastructure_ problem. ![](https://images.mirror-media.xyz/publication-images/IhxoX--Hd-k1X3s38gebp.png?height=512&width=512) Now that we’ve properly defined the problem, we can implement a strategy that leads to success. Throughout the last two years, we used several web3 indexing products. Some, open-source, provided value in the short term. Over time the value was reversed through unannounced deprecated services and the additional resources we invested in making the product work. We later found centralized products that charge for their APIs. 
It was like having two versions of our town library next to each other. One makes you pay but gives you access to their card catalog. The other is free but requires a bit more time and creativity to find your book. After a few times navigating and getting comfortable in the free library, there is zero value in paying for admission next door. Like the soon-bankrupt library, companies charging for API access will soon reevaluate their business models. #### **The Future: An Ecosystem with Free Indexing APIs and Transformers** As we transition into an ecosystem with free indexing APIs, there is an additional hurdle that needs to be addressed. Simplifying access to indexed data is great but not useful if it's not easily configurable into different forms. Please welcome, transformers. Transformers transform the raw, indexed data into configurable forms. The result is a composable infrastructure that any individual, company, industry, or government building in web3 can configure to their needs. Looking at data from a single NFT project via an indexed API can give us insight into transaction data. Who is the largest holder? When did they buy? Where did they buy? Powerful, yes. However, scaling across thousands of transactions or adding web2 data into the model is hard. Transformers give builders the ability to overcome these challenges and to create and provide valuable datasets to their users. #### **The Promise of Composable Infrastructure** ![](https://images.mirror-media.xyz/publication-images/_2LkbtLN5MPh08DNveaji.png?height=512&width=512) Leveraging this infrastructure, let's use the Ethereum, Tezos and Polygon NFT Indexers to get a few baseline stats on chain data. Next, we’ll configure the transformer to compare net new purchases from September 2022-December 2022 on Solana, Ethereum and Polygon using wallet addresses. 1. There are over 7.5 million wallets on Ethereum that have ever owned an NFT 2. The average NFT price in 2022 on Ethereum was $343 3. There are 45,000 ERC721 and 30,000 ERC1155 NFT contracts on Ethereum. 4. During 2022, Tezos NFT sales (XTZ) were up 115% 5. NFT marketplace volume peaked at 1.7 billion on Ethereum in August 2021 6. NFT wash trading peaked in Jan 2022 on Ethereum with 4.1 billion in wash trades versus 1.1 billion in organic trading volume. 7. From Sept 2022-Dec 2022, new users buying NFTs dropped 63% on Solana and 36% on Ethereum. During that same time, new users buying NFTs on Polygon grew more than 500%. 8. With composable infrastructure, developers reduce superfluous costs and technical debt while providing more value to their users. 9. To realize Web3's goal of surpassing web2, the applications must be significantly superior to their web2 counterparts. Developers require the same, if not better, tools and infrastructure as those available in web2. With seven years of experience building custom data solutions and paying for what should be free, we’ve gone all in on solving web3’s data infrastructure problem. #### **The Real-World Impact of Configurable Data** Taking our understanding of configurable infrastructure a step further, we can begin to see its profound implications for real-world blockchain applications. Consider, for example, the way in which DeFi applications can leverage this technology to provide their users with highly personalized and detailed financial metrics.
In fact, a study from the [University of Cambridge](https://www.jbs.cam.ac.uk/faculty-research/centres/alternative-finance/publications/2nd-global-enterprise-blockchain-benchmarking-study/) highlights how adopting a more configurable data approach can lead to enhanced user experiences and improved decision-making capabilities in financial markets. #### **Looking Ahead: The Future of Web3 Data Infrastructure** Moving forward, we must remember that the full potential of web3 data will only be unlocked when we prioritize building and maintaining a robust, scalable, and flexible data infrastructure. This sentiment is echoed in a recent report by Deloitte, which emphasizes the crucial role of a strong data infrastructure in harnessing the transformative power of blockchain technology. By fostering an ecosystem of free and efficient indexing APIs, we are setting the stage for digital transformation and a future where web3 data is not only more accessible but also more impactful. #### **The Best Web3 Indexing Tools for Strategic Advantage** In today's increasingly decentralized digital landscape, effective web3 indexing tools can provide a strategic edge. These web3 indexing tools not only provide access to vast amounts of blockchain data, but they also simplify the process of filtering and interpreting this data. Here, we highlight some of the top indexing tools designed to navigate the web3 data universe effectively: #### **[The Graph](https://thegraph.com/)** The Graph is a powerful protocol that revolutionizes blockchain by enabling easy access to data through a network of open APIs called subgraphs. #### **[Covalent](https://www.covalenthq.com/)** Covalent is a unified API that offers visibility into vast amounts of blockchain data, providing comprehensive insights for web3 strategies across the blockchain network. #### **[Nansen](https://www.nansen.ai/)** Nansen offers blockchain analytics for Ethereum-based finance protocols, providing insights into DeFi, NFTs, and more. Their sophisticated tools simplify navigation, understanding, and decision-making based on blockchain data. #### **[QuickNode](https://www.quicknode.com/)** QuickNode is an exceptional web3 indexing tool that enhances blockchain development efficiency. It offers reliable, fast, and scalable access to indexes for Ethereum, Bitcoin, and other blockchain networks. With QuickNode, there's no need to run your own nodes, eliminating operational hassle and overhead. Embracing these innovative web3 indexing tools can help stakeholders derive meaningful insights from blockchain data in real time, fueling informed strategies and decisions in the web3 ecosystem. #### **Overcoming Challenges in Web3 Data Indexing** Despite the significant advantages of web3 indexing tools, it's important to acknowledge the challenges that remain. One key hurdle is the sheer volume of blockchain data. As blockchains grow and proliferate, storing, indexing, and querying this data become increasingly resource-intensive. As a result, developing scalable solutions across multiple blockchains remains a top priority. Moreover, issues of data privacy and security in the distributed ledger and web3 space pose significant concerns. Public blockchains are inherently transparent, which may lead to unwanted disclosures of sensitive information. Ensuring privacy-preserving computation and storage of blockchain data is thus a crucial area of ongoing research and development.
## **Final thoughts** ![](https://images.mirror-media.xyz/publication-images/gUANST_LBs-Ugph9YQfZH.png?height=200&width=200&size=large) In conclusion, the world of web3 data presents immense opportunities and challenges. While accessing and understanding this vast and complex data landscape may seem daunting, solutions offered by indexing companies like "[The Indexing Company](https://www.indexing.co/)" provide a way forward. By simplifying complex data systems and offering tailored indexing tools, businesses can harness the power of web3 data for strategic advantage. "[The Indexing Company](https://www.indexing.co/)" stands out as a leading provider in this space, offering innovative solutions that enable faster build times, cost savings, and streamlined side chain workflows. Their expertise in web3 data indexing empowers businesses to navigate the intricacies of blockchain networks, protocols, and tokens with ease, unlocking valuable insights and informed decision-making. For businesses seeking to stay at the forefront of the digital revolution, embracing the capabilities and technologies of "[The Indexing Company](https://www.indexing.co/)" is a call to action. By partnering with them, businesses can simplify their data systems, gain a competitive edge, and unleash the transformative potential of web3 data. Don't miss out on the opportunity to revolutionize your data strategy and drive success in the evolving landscape of web3. Contact "The Indexing Company" today to embark on this journey towards streamlined and impactful web3 data utilization. --- title: Serving AI with Data Infrastructure Fit for Web3 url: /articles/serving-ai-with-data-infra.md description: Web3 technology is perfectly positioned to ensure AI operates on trustworthy data while making AI accountable, transparent, and interconnected. author: Dennis Verstappen date: 2025-08-07 image: /assets/articles/serving-ai-with-data-infra.avif tags: Post --- Web3 technology is perfectly positioned to ensure AI operates on trustworthy data while making AI accountable, transparent, and interconnected. Blockchain can verify data through the network, guaranteeing that the inputs to and outputs from AI models are reliable. While the current Web3 landscape largely centers around financial data, blockchain technology has the potential to extend far beyond, encompassing personal information, scientific data, and government records. Currently, developers of AI, AI agents, bots, and ML models in Web3 are working to determine the necessary data infrastructure and data for training, inference, monitoring, and retraining their models. Before diving into the entire data pipeline required for these processes, let's look at a few examples of model types that can be supported: - Large Language Models (LLMs): this type of model is at the center of attention regarding AI. In Web3, these models can be used to have users interact with the blockchain and perform actions without requiring those users to understand the complexities of the technology. On their own, LLMs are not the best fit for blockchain data, since they mostly require text data (vs. the transactional data on the blockchain). However, when transactions or wallets are labeled and given context through embeddings and systems using Retrieval Augmented Generation (RAG), blockchain data can be served back to users. - Document or Vector Search: this type of model utilizes embeddings to find similarities between documents or vectors.
In blockchain, an embedding could exist for a protocol or a wallet address and then be compared to other embeddings. This type of model can be very useful for search engines, marketing and growth tooling, and analytics. - Prediction models: since most activity in Web3 is related to financial transactions, predicting prices in order to speculate on them is a popular exercise. However, prediction models can be used to predict other useful metrics like transaction activity, gas costs, user retention, Sybil users, etc. A few challenges exist with this data in the current Web3 environment, the main problems being: - Data is scattered across multiple chains and the number of chains continues to increase. Most data providers only serve a certain number of chains. - Data on the blockchain is unstructured, which requires custom transformations and feature engineering to make it useful for building AI. - If data served to developers is structured, it takes a certain format (because of APIs or query systems). This format or data schema is most likely not the exact schema needed for the models, which requires developers to build out infrastructure to load and transform data before it can be used in their models. - Data regarding address labels is scattered across multiple data providers, without standardization or automation to do correct data labeling (like contract labels).
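To make that schema challenge concrete, here is a minimal TypeScript sketch of the kind of normalization step teams end up writing before any training data can be assembled. The `RawEvmLog` and `UnifiedTransfer` shapes and the `normalizeErc20Transfer` helper are illustrative names for this sketch only, not part of any Indexing Co API:

```
// Illustrative shapes only: the field names are assumptions for this sketch.
type RawEvmLog = {
  chain: string;           // e.g. "ethereum", "base"
  blockNumber: number;
  transactionHash: string;
  address: string;         // the contract that emitted the log
  topics: string[];        // topics[0] holds the event signature hash
  data: string;            // ABI-encoded, non-indexed parameters
};

type UnifiedTransfer = {
  chain: string;
  block: number;
  txHash: string;
  token: string;
  from: string;
  to: string;
  amount: bigint;
  label?: string;          // optional enrichment, e.g. a contract label
};

// keccak256("Transfer(address,address,uint256)")
const ERC20_TRANSFER_TOPIC =
  "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef";

// Normalize a raw ERC20 Transfer log into the unified schema used to build features.
function normalizeErc20Transfer(
  log: RawEvmLog,
  labels: Record<string, string> = {}
): UnifiedTransfer | null {
  if (log.topics[0] !== ERC20_TRANSFER_TOPIC || log.topics.length < 3) {
    return null; // not an ERC20 Transfer event
  }

  // indexed address parameters are left-padded to 32 bytes; keep the last 20 bytes
  const from = "0x" + log.topics[1].slice(26);
  const to = "0x" + log.topics[2].slice(26);

  return {
    chain: log.chain,
    block: log.blockNumber,
    txHash: log.transactionHash,
    token: log.address,
    from,
    to,
    amount: BigInt(log.data === "0x" ? "0x0" : log.data),
    label: labels[log.address.toLowerCase()],
  };
}
```

Every additional source (another EVM chain, or a non-EVM chain with a different transaction format) needs its own small decoder like this, which is exactly the transformation and feature-engineering work described above.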
We will now take a look at the processes needed to put AI into production and how the unique data infrastructure from The Indexing Company can serve the builders and AI in Web3. #### Training To train models, a vast amount of data is needed. The training of these models often happens in a local environment with easy access to the data. The data is fed to these models in batches so they can learn from those inputs. Historical on-chain data has to be fetched and can come from multiple chains. Ideally, this data is transformed into a unified data schema, regardless of chain (EVM or non-EVM), while data is enriched with off-chain data like contract labels. Since the data pipelines built by The Indexing Company are chain agnostic and allow custom transformations, the data can be put into a unified data schema before it hits the training database or data lake. Since the data pipelines are highly configurable, data like contract labels or pricing data can be added to ensure a more complete feature set. The parallel processing network utilized by The Indexing Company ensures that this historical data is backfilled quickly into the target data infrastructure. #### Inference Inference is a term that covers the process trained AI models use to make predictions and decisions based on new incoming data. Ideally, this data reaches the model in the same schema and with the same features as in the training stage. Data needs to be frequently updated to have the AI serve the user or act on its own. Data can be streamed in real time to a database, which can trigger the AI based on certain thresholds. If the AI needs to pull data, the AI can query the database or can call an API which is hosted on top of the database. Since the pipelines from The Indexing Company can be configured so that it does not matter whether the data is historical or real-time, the same infrastructure can be used to both train the AI and serve the data for inference purposes. Basically, setting up these pipelines for historical data ensures that the data pipelines for inference are already in place too. These data pipelines can furthermore be optimized for low latency to have the AI act as fast as possible after blocks are confirmed on the blockchain. #### Monitoring Once an AI or a swarm of agents begins transacting on the blockchain, it should be monitored to ensure performance. The data resulting from the agent's actions can also be indexed and used for real-time alerts, monitoring and analytics, giving users the ability to disable or reconfigure the agent in real time. We designed our infrastructure to be responsive (vs. a static approach to configurations), automatically indexing new data based on the data coming in and/or the reconfigured logic (either on events emitted on the blockchain or when a trigger is sent to the pipelines). This ensures that every new action by the bot or every new bot added to the swarm gets monitored. One example of this responsive data infrastructure is Just In Time Indexing (JITI). In a previous article, we described how Just In Time Indexing can work to continuously backfill and index new transactions from new addresses. For example, when a new agent is registered to the network, it would do so through a Factory Contract. JITI would then be triggered to monitor this new address and all transactions related to this address. This process ensures data completeness without manual intervention by developers.
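As a rough sketch of that flow (and only a sketch: the `AgentRegistered` event name, the `Indexer` interface, and the function names below are hypothetical placeholders, not The Indexing Company's actual API), the responsive piece can be pictured as a small handler sitting inside the pipeline:

```
// All names here are hypothetical placeholders used to illustrate the JITI flow.
type DecodedEvent = {
  contract: string;                 // address of the emitting contract
  name: string;                     // e.g. "AgentRegistered"
  args: Record<string, string>;     // decoded event arguments
};

interface Indexer {
  backfill(address: string): Promise<void>;   // fetch all historical transactions for an address
  watchAddress(address: string): void;        // stream new transactions for an address in real time
}

const FACTORY_CONTRACT = "0x0000000000000000000000000000000000000000"; // placeholder address

// Called for every decoded event flowing through the pipeline.
async function onEvent(event: DecodedEvent, indexer: Indexer): Promise<void> {
  // A new agent registered through the factory contract triggers JITI:
  if (event.contract === FACTORY_CONTRACT && event.name === "AgentRegistered") {
    const agent = event.args.agentAddress;

    await indexer.backfill(agent);   // targeted backfill of the agent's history
    indexer.watchAddress(agent);     // keep monitoring its new transactions going forward
  }
}
```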
#### Retraining

Models need to be retrained frequently to stay up to date with changes in the environment, to improve performance, or to add new chains the bots need to be active on. With new types of data coming in, the chance is high that this data is in a different schema and requires new transformations. This is true both when new protocols and when new chains are added, since the smart contract or event structure might be different. Luckily, since we designed our data pipelines to be highly configurable, these transformations can happen before the data hits the target data infrastructure. Even if data comes from different sources or chains (EVM vs. non-EVM), the resulting data schema can be unified. That unification ensures continuity in the data schemas needed to calculate the features, which reduces the additional data engineering needed to integrate new data.

#### Conclusion

We welcome the opportunity AI brings to Web3. The potential to both improve UX for users and automate tasks with settlement on a blockchain is promising. The data infrastructure The Indexing Company provides is fully ready to help developers in AI and Web3 build the next generation of products. With fast and complete historical data, real-time data streaming, and responsive data pipelines, any type of model and AI can be (re-)trained, served, and monitored.

We are happy to spar with developers and businesses on their data needs. If you want to chat or need support, [reach out to us](/contact).

---
title: Building the Data Economy Layer for Your Chain
url: /articles/building-the-data-economy-layer-for-your-chain.md
description: How chains can build a data economy layer using The Neighborhood to turn raw blockchain activity into monetizable data products and validator yield.
author: Dennis Verstappen
date: 2025-10-23
tags: Post
---

#### Why Data Is the Next Competitive Edge

Every blockchain competes for more than just transactions; it competes for data. As chains evolve beyond simple value transfer, the demand for structured, real-time, and AI-ready data is accelerating. Networks like **Story** and **Peaq** are leaning into this reality by aligning infrastructure around data and intelligent agents. In this new era, the value of a chain is not defined by blockspace alone but by its ability to power a thriving data economy on top of it.

#### The Hidden Cost of Ignoring Data Infrastructure

Most chains still treat data as a byproduct, not a resource. Developers struggle to access usable onchain information without building costly indexing stacks. Validators process large volumes of raw data but do not earn from it. AI builders and analytics teams repeatedly leave the chain to find, clean, and reshape data. The result is underutilized compute, fragmented ecosystems, and missed economic potential. Without a native data economy layer, valuable activity remains invisible, and so does the opportunity to generate volume and yield for the network.

#### How The Neighborhood Turns Data Into an Economy

**The Neighborhood** by **The Indexing Company** is a distributed data layer designed for real-time processing and streaming across any blockchain. Builders configure pipelines to filter contracts, decode events, and fuse onchain and offchain data. Lightweight nodes run alongside validators and use idle compute to power these pipelines. Builders pay in the chain's token for usage. Node operators earn new yield for processing. The network gains transaction volume and native utility.
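To give a feel for what "configuring a pipeline" of this kind could look like, here is a purely hypothetical sketch in Python. The field names, step names, and addresses are invented for illustration and do not reflect The Neighborhood's actual configuration format.

```python
# Hypothetical pipeline definition (illustration only): filter a set of
# contracts, decode their events, and enrich the result with offchain data
# before writing to a sink.
pipeline = {
    "name": "transfers-enriched",
    "source": {"network": "base", "start_block": "latest"},
    "filters": {
        "addresses": ["0x0000000000000000000000000000000000000000"],  # placeholder contract
        "event_signatures": ["Transfer(address,address,uint256)"],
    },
    "transform": [
        "decode_event",                  # decode raw logs into named fields (hypothetical step)
        "join_offchain:token_metadata",  # fuse with an offchain lookup (hypothetical step)
    ],
    "sink": {"type": "postgres", "table": "transfers"},
}
```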
For **Story**, this model strengthens provenance, licensing, and AI workflows with programmable, verifiable data streams. For **Peaq**, it turns machine and device outputs into monetizable, verifiable data products that agents can consume. Any chain that integrates The Neighborhood can transform raw activity into a self-sustaining loop of usage, revenue, and growth.

#### Lead the Future of Onchain Data

Chains that adopt a data economy layer unlock a new source of onchain volume: data volume. They create durable yield for validators, better tools for developers, and stronger network effects for the ecosystem. If your chain focuses on AI, data, or autonomous infrastructure, now is the time to build your data economy layer. **Contact The Indexing Company** to see how **The Neighborhood** can power it today.

---
title: Devcon VI and the State of EVM Data
url: /articles/devcon-vi-and-the-state-of-evm-data.md
description: Reflections on Devcon VI and the state of EVM data, covering EIPs 4488, 4444, and 3668 and their implications for Ethereum's future.
date: 2022-10-21
image: /assets/articles/devcon-vi-and-the-state-of-evm-data.jpg
author: runninyeti.eth
tags: Post
---

By all measures, Devcon VI was a huge success. Over 6,000 participants from around the world met in Bogotá, Colombia to build, network, and celebrate together at _the_ official Ethereum conference. This came about a month after The Merge, in which Ethereum switched from Proof of Work (PoW) to Proof of Stake (PoS). That transition worked far better than anyone could have hoped and has led to ETH even being deflationary at times 🔥

![https://ultrasound.money/](https://images.mirror-media.xyz/publication-images/vM0lJq0K4AcSfcqTz_yT0.png?height=720&width=1180)

So what's next for Ethereum and its ecosystem? In short, far too much is happening to cover in a single post, but we're going to touch on "the state of data".

## Today

Let's take a step back and remember how we got to today. Ethereum launched in 2015 with a vision of being the world's computer. At its core, Ethereum processes transactions in its Ethereum Virtual Machine (EVM) and reaches consensus with its nodes (a network of servers). In order to do this, each node in the ecosystem must keep track of a history of _all_ blocks and transactions that have ever existed.

Fast forward to today, and there are well over 15 million blocks and over 1 billion transactions on the Ethereum mainnet. These numbers don't even reflect the growing ecosystem of secondary blockchains on and around Ethereum such as Polygon, Optimism, Arbitrum, Starknet, etc. Point being, there's a lot of data out there and it's only continuing to grow.

In order for Ethereum, and its ecosystem, to truly reach "internet scale", we need to drastically increase adoption. That adoption, though, inevitably comes with a sharp increase in data, and we need to be ready for this.

![https://a16zcrypto.com/state-of-crypto-report-a16z-2022/](https://images.mirror-media.xyz/publication-images/pX3soQBZf8TaKYGhRH96V.png?height=1163&width=2048)

#### Today's Problems

Focusing primarily on solving for adoption, some of the common themes in Ethereum today are:

1. Too few transactions per second - not enough support for simultaneous users
2. Too few \[independent\] node operators - not enough decentralization
3. On-chain storage is expensive - and standards are missing for off-chain

## Looking Forward

Thankfully, the sharp minds of the industry have already been working on solutions, many of which should be rolling out in the coming months and years. Let's dig into a few of these Ethereum Improvement Proposals (EIPs):

**EIP-4488: Working on Throughput**

The leading way to increase throughput on Ethereum is simply to move transactions _off_ of Ethereum. This may sound counter-intuitive, but bear with me. Layer 2s such as Optimism and Arbitrum offer developers and consumers lower gas fees, fast transaction times, and the full security of the Ethereum blockchain itself. How? By allowing transactions to use their own set of nodes, entirely independent of Ethereum, and then adding _proof_ of those transactions to Ethereum. Effectively, Layer 2s keep their data self-contained except for the proof that something has happened (this is all a rough approximation, but close enough for our purposes here).

Circling back to EIP-4488, these Layer 2s frequently leverage what's known as `calldata` to batch-add these proofs to Ethereum. `calldata` is a specific type of data in the EVM that's particularly cheap. That being said, if we're hoping to reach internet scale, paying [$0.30+](https://public-grafana.optimism.io/d/9hkhMxn7z/public-dashboard?orgId=1&refresh=5m&from=now-90d&to=now) for something as simple as transferring ETH is still too much. EIP-4488 introduces an explicitly lower cost for `calldata` on Ethereum, which will decrease the cost of Layer 2s and ultimately save users money.

**EIP-4444: Everyone Gets a Node**

The biggest problem with running your own Ethereum node at home generally isn't the technical complexity involved. The terminal commands are simple, and products like [Dappnode](https://www.dappnode.io/en-us) even offer plug'n'play ease. What gets tricky is the sheer amount of storage you need to run a node. Each node must remember the entire history of the Ethereum blockchain. The storage requirements for that currently sit at \~1 TB, and closer to 6 TB if you want to run what's known as an "archive" node. And with proposals like EIP-4488 above, the size requirements are likely to grow even quicker (up to 3 TB _per year_ in the extreme case).

EIP-4444 aims to address this by introducing a prune limit on historical data. Ethereum nodes would no longer have to remember the _entire_ history of the blockchain; instead, they can keep only the last year of data. This makes running Ethereum nodes at home considerably less resource intensive. And, importantly, more nodes directly translate into better decentralization for Ethereum as a whole.

If you're like us, you're probably wondering where all that old data is going to be stored. That is a fantastic question without an answer, unfortunately. There seems to be consensus around some ideas though:

- Have a separate "historical" node that people can choose to run
- Introduce a P2P protocol for downloading past data (remember the BitTorrent days?)
- Rely on centralized authorities to remember all past data - and make it available to the rest of us (likely for a fee…)

In any case, more nodes and more decentralization is a net positive for the ecosystem. With time we'll find an appropriate solution to accessing historical data.
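As a rough back-of-the-envelope using only the ballpark figures above (\~1 TB today, up to 3 TB per year of growth in the extreme EIP-4488 case), here is what an EIP-4444-style "keep roughly the last year" policy could mean for node operators. The numbers are illustrative, not a precise projection.

```python
# Rough, illustrative arithmetic only: node storage growth without pruning
# vs. keeping roughly the last year of data, using the ballpark figures above.
CURRENT_TB = 1.0            # approximate full-node size today
GROWTH_TB_PER_YEAR = 3.0    # extreme-case growth estimate
YEARS = 5

no_pruning = CURRENT_TB + GROWTH_TB_PER_YEAR * YEARS
with_pruning = GROWTH_TB_PER_YEAR * 1  # only ~the last year is retained

print(f"after {YEARS} years, no pruning:     ~{no_pruning:.0f} TB")
print(f"after {YEARS} years, with a 1y prune: ~{with_pruning:.0f} TB")
```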
**EIP-3668: Accessing Off-Chain Data**

Spearheaded by [ENS](https://ens.domains/) and [Chainlink](https://chain.link/), EIP-3668 introduces a standard for Ethereum developers to securely incorporate off-chain data into the ecosystem. This is a potentially _huge_ win for everyone. Ethereum is meant to be the world's computer, and it makes sense that we'd want the ability to "plug in" an external hard drive to that. This is roughly what the Cross-Chain Interoperability Protocol (CCIP) introduces.

CCIP works by allowing a smart contract to say "I don't have the data, but I can verify it from Source X". A client (e.g. a browser) asking that contract for data can then reach out to Source X, receive raw data, and offer it back to the contract to verify. In this way, the data is _safe_ to use (because it's validated by the contract), but also cheap to store and access because it's _not_ stored on-chain. Even better, Source X could be anything - another blockchain, an API gateway, etc. - CCIP simply provides us with a standard way of implementing this sort of off-Ethereum communication.

Early concepts around CCIP are promising. Chainlink and SWIFT [are partnering](https://chainlinktoday.com/chainlink-and-swift-announce-ccip-proof-of-concept-at-smartcon-2022/) on a way to bridge the web2 banking world with web3. ENS is expanding beyond primarily `.eth` to support any domain, on any chain; [Coinbase has already implemented this](https://help.coinbase.com/en/wallet/managing-account/coinbase-ens-support).

![Devcon VI](https://images.mirror-media.xyz/publication-images/JmLvtWs3LZ6b3Pr2KBVcE.jpg?height=3072&width=4080)

In the end, growing pains are a great sign for the Ethereum ecosystem and the future looks bright 🌕 See you all at the next Devcon!

---
title: Get In Touch
url: /contact.md
---

# Get In Touch

Ready to see what we can build together?

---
title: Privacy Policy
url: /privacy-policy.md
---

---
title: Indexing Co Brand Kit
url: /brand-kit.md
---

# Indexing Co Brand Kit

## Graphics

Logo
[svg](/assets/square.svg)
![Square Image](/assets/square.svg)

Dex Retro
[png](/assets/dex-retro.png) [svg](/assets/dex-retro.svg)
![Dex Image](/assets/dex-retro.svg)

Dex 2-Bit
[svg light](/assets/dex-bit-light.svg) [svg dark](/assets/dex-bit-dark.svg)
![Dex 2-Bit](/assets/dex-bit.svg)

## Colors

- Primary: #000 / #FFF
- Secondary: #3C3C43 / #EBEBF5, 60%
- Green: #98f120
---
title: The Indexing Company
url: /index.md
description: The data layer for programmable payments.
---

# Built to make sense of every onchain transaction,
Indexing Co is the data layer for programmable payments.
{: .text-headline-regular }

[Get Started](https://docs.indexing.co){: .button-link }

#### Working with the industry's best
{: .text-caption-regular }
Case Study →
MeshPay

"The fastest way to get onchain payments data in the way we want it and when we want it. Probably the only solution on the market with well priced backfills."