OpenLedger is building a new generation of AI chain to create a data-driven intelligent economy.

OpenLedger In-Depth Research Report: Building a data-driven, model-composable agent economy based on OP Stack + EigenDA

1. Introduction | The Model Layer Leap of Crypto AI

Data, models, and computing power are the three core elements of AI infrastructure, analogous to fuel (data), engine (models), and energy (computing power), all of which are indispensable. Similar to the evolution of traditional AI infrastructure, the Crypto AI field has gone through comparable stages. At the beginning of 2024, the market was dominated by decentralized GPU projects such as Akash, Render, and io.net, which generally emphasized a rough growth logic of "competing for computing power." Entering 2025, however, the industry's focus gradually shifted to the model and data layers, marking Crypto AI's transition from a competition over underlying resources to more sustainable, application-value-oriented middle-layer construction.

General Large Model (LLM) vs Specialized Model (SLM)

Traditional large language models (LLMs) rely heavily on large-scale datasets and complex distributed architectures, with parameter sizes ranging from 70B to 500B, and the cost of training can often reach several million dollars. In contrast, SLM (Specialized Language Model) is a lightweight fine-tuning paradigm for reusable foundational models, typically based on open-source models like LLaMA, Mistral, and DeepSeek. By combining a small amount of high-quality specialized data and technologies like LoRA, it quickly builds expert models with specific domain knowledge, significantly reducing training costs and technical barriers.

It is worth noting that SLM is not integrated into the LLM weights, but instead collaborates with the LLM through the Agent architecture invocation, dynamic routing via the plugin system, hot-swapping of LoRA modules, and RAG (Retrieval-Augmented Generation). This architecture retains the broad coverage capability of the LLM while enhancing professional performance through fine-tuning modules, forming a highly flexible combinatorial intelligent system.
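The dynamic-routing idea described above can be illustrated with a minimal sketch. The adapter names and keyword heuristic below are invented for illustration; a production router would more likely use a classifier or embedding similarity, and OpenLedger's actual routing logic is not specified in this report.

```python
# Minimal sketch of dynamic routing between a general LLM and
# hot-swappable specialized (SLM) LoRA adapters. All names are hypothetical.

DOMAIN_KEYWORDS = {
    "legal":   ["contract", "liability", "statute"],
    "medical": ["diagnosis", "dosage", "symptom"],
}

def route(query: str) -> str:
    """Pick a specialized LoRA adapter when the query matches a known
    domain; otherwise fall back to the broad-coverage base LLM."""
    q = query.lower()
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if any(k in q for k in keywords):
            return f"lora-adapter:{domain}"   # hot-swapped SLM module
    return "base-llm"                         # general-purpose fallback

print(route("What dosage is safe for adults?"))   # → lora-adapter:medical
print(route("Tell me a story about the sea."))    # → base-llm
```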

The value and boundaries of Crypto AI at the model level

Crypto AI projects find it inherently difficult to directly enhance the core capabilities of large language models (LLMs), for two main reasons:

  • High technical barrier: The scale of data, computing resources, and engineering capabilities required to train a Foundation Model is extremely large, and currently only certain tech giants have the corresponding capabilities.
  • Limitations of the Open Source Ecosystem: Although mainstream foundational models such as LLaMA and Mixtral have been open-sourced, the key to driving breakthroughs in models still lies primarily with research institutions and closed-source engineering systems, with limited participation space for on-chain projects at the core model level.

However, on top of open-source foundational models, Crypto AI projects can still extend value by fine-tuning specialized language models (SLMs) and integrating Web3's verifiability and incentive mechanisms. As the "peripheral interface layer" of the AI industry chain, this is reflected in two core directions:

  • Trustworthy Verification Layer: Enhances the traceability and tamper-resistance of AI outputs by recording model generation paths, data contributions, and usage on-chain.
  • Incentive mechanism: Utilizing the native Token to incentivize behaviors such as data uploading, model invocation, and agent execution, creating a positive feedback loop for model training and services.

AI Model Type Classification and Blockchain Applicability Analysis

The feasible landing points for model-focused Crypto AI projects therefore center on lightweight fine-tuning of small SLMs, on-chain data access and verification for RAG architectures, and local deployment and incentives for Edge models. Combined with blockchain's verifiability and the token mechanism, Crypto can provide unique value for these medium- and low-resource model scenarios, forming differentiated value for the AI "interface layer."

The blockchain AI chain based on data and models can provide clear and tamper-proof on-chain records of the contribution sources of each piece of data and model, significantly enhancing the credibility of data and the traceability of model training. At the same time, through the smart contract mechanism, rewards are automatically distributed when data or models are invoked, transforming AI behavior into measurable and tradable tokenized value, thereby building a sustainable incentive system. In addition, community users can also evaluate model performance through token voting, participate in rule formulation and iteration, and improve the decentralized governance structure.
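The automatic reward distribution described above reduces, at its simplest, to a pro-rata split of an invocation fee across recorded contribution weights. The fee value and contributor names below are invented; the actual on-chain contract logic is not specified in this report.

```python
# Hedged sketch: pro-rata payout when a model invocation fee arrives.
# Contribution weights stand in for on-chain attribution records.

def distribute_rewards(fee: float, contributions: dict) -> dict:
    """Split an invocation fee among contributors in proportion to
    their recorded contribution weights."""
    total = sum(contributions.values())
    return {who: fee * w / total for who, w in contributions.items()}

payout = distribute_rewards(
    10.0,
    {"data_provider": 3, "model_dev": 6, "validator": 1},  # invented weights
)
print(payout)  # data_provider 3.0, model_dev 6.0, validator 1.0
```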

![OpenLedger In-Depth Research Report: Building a Data-Driven, Model-Composable Agent Economy Based on OP Stack + EigenDA](https://img-cdn.gateio.im/webp-social/moments-62b3fa1e810f4772aaba3d91c74c1aa6.webp)

2. Project Overview | OpenLedger's AI Chain Vision

OpenLedger is one of the few blockchain AI projects currently on the market that focuses on data and model incentive mechanisms. It is the first to propose the concept of "Payable AI", aiming to build a fair, transparent, and composable AI operating environment that incentivizes data contributors, model developers, and AI application builders to collaborate on the same platform and earn on-chain rewards based on actual contributions.

OpenLedger provides a complete closed loop from "data provision" to "model deployment" to "invocation profit-sharing," with core modules including:

  • Model Factory: No programming required; fine-tune with LoRA, train, and deploy custom models on top of open-source LLMs;
  • OpenLoRA: Supports coexistence of thousands of models, dynamically loads on demand, significantly reduces deployment costs;
  • PoA (Proof of Attribution): Achieving contribution measurement and reward distribution through on-chain call records;
  • Datanets: A structured data network aimed at vertical scenarios, built and validated through community collaboration;
  • Model Proposal Platform: A composable, callable, and payable on-chain model marketplace.

Through the above modules, OpenLedger has built a data-driven, model-composable "intelligent economy infrastructure" to promote the on-chainization of the AI value chain.

In the adoption of blockchain technology, OpenLedger uses OP Stack + EigenDA as the foundation to build a high-performance, low-cost, and verifiable data and contract execution environment for AI models.

  • Built on OP Stack: Based on the Optimism technology stack, supporting high throughput and low-cost execution;
  • Settle on the Ethereum mainnet: Ensure transaction security and asset integrity;
  • EVM Compatible: Convenient for developers to quickly deploy and expand based on Solidity;
  • EigenDA provides data availability support: significantly reduces storage costs and ensures data verifiability.

Compared to general-purpose AI chains such as NEAR, which focus more on the underlying layer and emphasize data sovereignty through the "AI Agents on BOS" architecture, OpenLedger concentrates on building an AI-specific chain oriented toward data and model incentives. It aims to make model development and invocation a traceable, composable, and sustainable value loop on-chain. As the model-incentive infrastructure of the Web3 world, it combines HuggingFace-style model hosting, Stripe-style usage billing, and Infura-style on-chain composable interfaces to advance the vision of "models as assets."

![OpenLedger In-Depth Research Report: Building a Data-Driven, Model-Composable Agent Economy Based on OP Stack + EigenDA](https://img-cdn.gateio.im/webp-social/moments-19c2276fccc616ccf9260fb7e35c9c24.webp)

3. Core Components and Technical Architecture of OpenLedger

3.1 Model Factory: a no-code model factory

ModelFactory is a large language model (LLM) fine-tuning platform under the OpenLedger ecosystem. Unlike traditional fine-tuning frameworks, ModelFactory offers a purely graphical interface for operation, eliminating the need for command line tools or API integration. Users can fine-tune models based on datasets that have been authorized and reviewed on OpenLedger. It realizes an integrated workflow for data authorization, model training, and deployment, with the core processes including:

  • Data Access Control: Users submit data requests, providers review and approve, and data is automatically integrated into the model training interface.
  • Model selection and configuration: Supports mainstream LLMs (such as LLaMA, Mistral), with hyperparameter configuration through GUI.
  • Lightweight fine-tuning: Built-in LoRA / QLoRA engine, real-time display of training progress.
  • Model Evaluation and Deployment: Built-in evaluation tools, supporting export for deployment or ecological sharing calls.
  • Interactive Verification Interface: Provides a chat-style interface for directly testing the model's question and answer capabilities.
  • RAG Generation Traceability: Responses with source citations enhance trust and auditability.
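The workflow listed above can be modeled as a simple ordered pipeline. The stage names follow the list; the platform's actual API is not public, so everything here is illustrative.

```python
# Illustrative sketch of the ModelFactory pipeline as an ordered state
# machine. Stage names mirror the workflow list; all identifiers invented.

STAGES = [
    "data_access_approved",  # provider reviews and approves the data request
    "model_configured",      # base LLM and hyperparameters chosen in the GUI
    "finetuned",             # LoRA / QLoRA training completes
    "evaluated",             # built-in evaluation passes
    "deployed",              # exported for ecosystem sharing calls
]

def advance(current=None):
    """Return the next pipeline stage, enforcing the order above."""
    if current is None:
        return STAGES[0]
    idx = STAGES.index(current)
    if idx + 1 >= len(STAGES):
        raise ValueError("pipeline already complete")
    return STAGES[idx + 1]

stage = None
for _ in STAGES:
    stage = advance(stage)
print(stage)  # → deployed
```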

The Model Factory system architecture includes six major modules, encompassing identity authentication, data permissions, model fine-tuning, evaluation deployment, and RAG traceability, creating a secure, controllable, real-time interactive, and sustainable monetization integrated model service platform.

The following is a summary of the capabilities of the large language models currently supported by ModelFactory:

  • LLaMA Series: The ecosystem is the widest, the community is active, and the general performance is strong, making it one of the most mainstream open-source foundational models currently.
  • Mistral: Efficient architecture with excellent inference performance, suitable for flexible deployment in resource-limited scenarios.
  • Qwen: Produced by Alibaba, excels at Chinese-language tasks with strong overall capabilities, making it a top choice for developers in China.
  • ChatGLM: Outstanding Chinese conversational performance, suitable for vertical customer service and localized scenarios.
  • Deepseek: Excels in code generation and mathematical reasoning, suitable for intelligent development assistive tools.
  • Gemma: A lightweight model launched by Google, with a clear structure, easy to quickly get started and experiment.
  • Falcon: Once a benchmark for performance, suitable for basic research or comparative testing, but community activity has decreased.
  • BLOOM: Strong multilingual support, but weaker inference performance, suitable for language coverage research.
  • GPT-2: A classic early model, suitable only for teaching and validation purposes, not recommended for actual deployment.

Although OpenLedger's model suite does not include the latest high-performance MoE models or multimodal models, its strategy is not outdated; rather, it is a "practical-first" configuration made based on the real constraints of on-chain deployment (inference costs, RAG adaptation, LoRA compatibility, EVM environment).

As a no-code toolchain, Model Factory has a built-in proof-of-contribution mechanism for all models, protecting the rights of data contributors and model developers. Compared to traditional model development tools, it offers a low barrier to entry, monetizability, and composability:

  • For developers: a complete path for model incubation, distribution, and revenue;
  • For the platform: a model-asset circulation and composition ecosystem;
  • For users: models or Agents can be combined as easily as calling an API.
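To make the "call a model like an API" point concrete, here is a toy request builder. The endpoint shape, model identifier, and fee field are all invented for the sketch; OpenLedger's real invocation interface is not documented here.

```python
# Hypothetical illustration of a payable model invocation request.
# Field names and the model id are invented, not OpenLedger's actual API.
import json

def build_call(model_id: str, prompt: str, max_fee: float) -> str:
    """Assemble a model-invocation request as a JSON payload."""
    return json.dumps({
        "model": model_id,       # which on-chain model/Agent to invoke
        "prompt": prompt,
        "max_fee": max_fee,      # cap on what the caller will pay
    })

req = build_call("openledger/legal-qa-lora", "Is this clause enforceable?", 0.05)
print(req)
```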

![OpenLedger In-Depth Research Report: Building a Data-Driven, Model-Composable Agent Economy Based on OP Stack + EigenDA](https://img-cdn.gateio.im/webp-social/moments-f23f47f09226573b1fcacebdcfb8c1f3.webp)

3.2 OpenLoRA: on-chain assetization of fine-tuned models

LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning method that learns new tasks by inserting low-rank matrices into a pre-trained large model, without modifying the original model parameters, significantly reducing training costs and storage requirements. Traditional large language models (such as LLaMA and GPT-3) typically have billions or even hundreds of billions of parameters, and using them for specific tasks (such as legal Q&A or medical consultations) requires fine-tuning. LoRA's core strategy is: "freeze the parameters of the original large model and train only the newly inserted parameter matrices." It is parameter-efficient, fast to train, and flexible to deploy, making it the mainstream fine-tuning method best suited to Web3 model deployment and composable invocation.
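The cost saving is easy to see with back-of-envelope arithmetic: instead of updating a full weight matrix W of shape d x k, LoRA trains two low-rank factors B (d x r) and A (r x k), so the effective update is W + BA. The dimensions below are illustrative (one attention projection in a roughly 7B-parameter model with a typical rank of 8).

```python
# Back-of-envelope parameter count: full fine-tuning vs LoRA.
# Dimensions are illustrative, not tied to any specific OpenLedger model.

d, k = 4096, 4096   # one attention projection matrix in a ~7B model
r = 8               # typical LoRA rank

full_params = d * k          # parameters touched by full fine-tuning
lora_params = r * (d + k)    # parameters trained by LoRA (B and A)

print(full_params)                      # 16777216
print(lora_params)                      # 65536
print(full_params // lora_params)       # 256x fewer trainable parameters
```

This is why thousands of fine-tuned variants can share one frozen base model: each variant is only the small B and A matrices, not a full copy of W.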

OpenLoRA is a lightweight inference framework designed by OpenLedger specifically for multi-model deployment and resource sharing. Its core goal is to address common issues in current AI model deployment, such as high costs, low reusability, and GPU resource wastage, promoting the execution of "Payable AI."

The core components of the OpenLoRA system architecture are based on a modular design, covering key aspects such as model storage, inference execution, and request routing, achieving efficient and low-cost multi-model deployment and invocation capabilities:

  • LoRA Adapter Storage Module (LoRA Adapters Storage): Fine-tuned LoRA adapters are hosted on OpenLedger and loaded on demand, avoiding pre-loading all models into GPU memory and saving resources.
  • Model Hosting & Adapter Merging Layer: All fine-tuned models share a common base model; at inference time, LoRA adapters are merged dynamically, and multiple adapters can be combined for ensemble inference to enhance performance.
  • Inference Engine: Integrates multiple CUDA optimization technologies such as Flash-Attention, Paged-Attention, and SGMV optimization.
  • Request Router & Token Streaming: Dynamically routes to the correct adapter based on the model required by the request, with token-level streaming generation via optimized kernels.
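The "load on demand" idea in the storage module can be sketched as an LRU cache: only the hottest adapters stay resident, and the coldest is evicted when a slot budget (standing in for GPU memory) is exceeded. Names and the capacity are invented; OpenLoRA's actual eviction policy is not documented in this report.

```python
# Illustrative LRU cache for on-demand LoRA adapter loading.
# The slot budget stands in for GPU memory; all identifiers are invented.
from collections import OrderedDict

class AdapterCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()   # adapter_id -> resident weights

    def load(self, adapter_id: str):
        """Return a resident adapter, fetching and possibly evicting."""
        if adapter_id in self.cache:
            self.cache.move_to_end(adapter_id)         # mark as recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)         # evict the coldest
            self.cache[adapter_id] = f"weights:{adapter_id}"  # stand-in fetch
        return self.cache[adapter_id]

cache = AdapterCache(capacity=2)
cache.load("legal-qa")
cache.load("medical-qa")
cache.load("legal-qa")      # refresh: legal-qa becomes hottest
cache.load("code-gen")      # over budget: evicts medical-qa
print(list(cache.cache))    # → ['legal-qa', 'code-gen']
```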

At the technical level, OpenLoRA's inference follows a mature, widely used model-serving flow:

  • Base model loading: The system pre-loads basic models such as LLaMA 3, Mistral, etc.