Solution
To address the aforementioned issues, we propose a decentralized model service solution. This approach enables model providers to circumvent centralized censorship, ensuring the freedom and diversity of model publishing and usage. By introducing decentralized idle computing power and leveraging token-based economic incentives, we can reduce the actual cost for users.
We will implement this concept by adding a new AI shard to the TOP AI chain. Building on sharding will significantly lower our blockchain development costs, allowing us to focus on the deep learning frameworks, decentralized computing-resource management, and model management functionalities the shard requires. We will use advanced zero-knowledge proof technology to protect models in untrusted environments, preventing attacks and tampering, and we will ensure comprehensive data security and privacy through multi-layered security measures.
Ultimately, anyone can participate as a model provider and freely offer their models. The involvement of computing nodes not only reduces computing costs but also ensures the censorship resistance of the models.
The AI shard of TOP AI will inherit the sharding technology of TOP AI Network and operate in parallel with the other four main shards and one EVM shard. Within the three-layer network architecture, it will also support high-speed, high-volume, and low-cost transactions.
TOP AI Network already has a token incentive mechanism that effectively supervises and rewards the work of nodes within the network. In the AI shard, we will extend the nodes with computing capabilities so that they can serve model workloads, support high-level privacy-protection scenarios, and keep both input data and output results tamper-proof and leak-free.
Additionally, we will focus on the following core technical solutions to achieve the necessary functions for the AI shard, enhancing the system's security, scalability, and reliability.
Freely Providing Censorship-Resistant Models: Through decentralized architecture, censorship-resistant computing power, and censorship-resistant models, we ensure that model developers and users are free from control or interference by any single entity, allowing them to freely provide and use models.
Model Censorship Resistance: Model providers can publish models freely, and AI application developers can use them without censorship, ensuring model security and accessibility.
Computing Power Censorship Resistance: Providing censorship-resistant computing resources through a decentralized computing power network ensures the privacy, security, and non-interference of model training and operation.
Data Privacy Protection and Trustworthy Results: Utilizing technologies such as Zero-Knowledge Proofs (ZKPs) and Homomorphic Encryption to ensure data privacy during transmission and processing, as well as the correctness and trustworthiness of model execution results. Fine-tuned models can be deployed and used independently, protecting the privacy of the fine-tuning details.
Permanent Data Sovereignty: Through blockchain technology and decentralized storage, users and developers can fully control and own their data, ensuring its long-term accessibility and security, avoiding data monopolies by centralized platforms and data loss due to platform shutdowns.
Unlimited Model Supply: A permissionless, decentralized platform allows anyone to participate as a model provider, meeting diverse needs.
Transparency: Utilizing the immutability and public transparency of blockchain to record model-related operations and transactions, ensuring the system's transparency and traceability and preventing fraud and misconduct.
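The transparency property above can be illustrated with a minimal sketch: an append-only, hash-chained record of model operations, where tampering with any earlier entry invalidates every later hash. This is an illustrative simplification, not TOP AI Network's actual ledger format; the class and field names here are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash (hypothetical format)."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class OperationLog:
    """Append-only log: each entry commits to all entries before it."""
    def __init__(self):
        self.entries = []          # list of (record, hash) pairs
        self.head = "0" * 64       # genesis hash

    def append(self, record: dict) -> str:
        self.head = record_hash(record, self.head)
        self.entries.append((record, self.head))
        return self.head

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks every later hash."""
        prev = "0" * 64
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True

log = OperationLog()
log.append({"op": "publish_model", "model": "m1", "provider": "p1"})
log.append({"op": "invoke_model", "model": "m1", "caller": "dev1"})
assert log.verify()

# Tampering with a past record is detectable:
log.entries[0] = ({"op": "publish_model", "model": "evil", "provider": "p1"},
                  log.entries[0][1])
assert not log.verify()
```

A real ledger adds signatures and consensus on top of this chaining, but the traceability argument is the same: rewriting history requires recomputing every subsequent hash.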
Based on the aforementioned solutions and concepts, there are three main participants within the TOP AI shard: AI application developers, model providers, and computing nodes.
AI Application Developers: These are the users of the models, who can call upon any model published on TOP AI.
Model Providers: These are the creators of the models, who can freely publish their models on TOP AI for external use.
Computing Nodes: These are miners providing high-performance computing power, enabling AI models to run and produce the expected data results.
Model providers submit their trained models to the AI shard. The models are then automatically distributed to computing nodes for functionality review and result consensus. AI application developers send requests to the AI shard to invoke these models. After scheduling, the requests are dispatched to the appropriate computing nodes for execution, and the final results are returned to the AI application developers.
In the AI shard, the participation process for each role is carefully designed to ensure efficiency, security, and transparency throughout the system. The specific working principles are as follows:
Model Upload by Model Providers:
Model providers first upload their models to the AI shard on TOP AI Network. This step is fundamental to integrating the models into the system.
Once uploaded, the models undergo rigorous review and verification by the computing nodes on the AI shard to ensure their integrity and security, preventing malicious models from harming the system.
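The review-and-consensus step can be sketched as a simple majority vote over independent node checks of the uploaded model's digest. The function names and the quorum threshold are hypothetical placeholders, not the shard's actual protocol.

```python
import hashlib
from collections import Counter

def model_digest(model_bytes: bytes) -> str:
    """Digest each node computes over the model bytes it received."""
    return hashlib.sha256(model_bytes).hexdigest()

def review_consensus(node_digests: list[str], quorum: float = 2 / 3):
    """Accept the model only if a quorum of nodes report the same digest.

    Returns the agreed digest, or None if consensus fails (hypothetical rule).
    """
    digest, votes = Counter(node_digests).most_common(1)[0]
    return digest if votes >= quorum * len(node_digests) else None

model = b"serialized model weights"
honest = model_digest(model)

# Two honest nodes outvote one faulty node reporting a wrong digest:
assert review_consensus([honest, honest, "bad-digest"]) == honest

# Without a quorum of matching reports, the upload is rejected:
assert review_consensus([honest, "bad-1", "bad-2"]) is None
```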
Task Scheduling and Resource Allocation:
After a model is verified, the AI shard receives task requests from AI application developers.
These tasks are then scheduled to the most suitable computing nodes based on algorithm matching and resource availability to ensure efficient execution in a trusted execution environment.
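The scheduling step above can be sketched as matching each task to the node with the most free capacity among those that host the requested model. The node and task fields below are hypothetical simplifications of real resource descriptors.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    free_gpu_mem: int                            # free accelerator memory in MB (illustrative unit)
    models: set = field(default_factory=set)     # models this node has verified and can run

@dataclass
class Task:
    task_id: str
    model: str
    gpu_mem_needed: int

def schedule(task: Task, nodes: list[Node]):
    """Pick the eligible node with the most free memory (greedy best-fit sketch)."""
    eligible = [n for n in nodes
                if task.model in n.models and n.free_gpu_mem >= task.gpu_mem_needed]
    if not eligible:
        return None
    best = max(eligible, key=lambda n: n.free_gpu_mem)
    best.free_gpu_mem -= task.gpu_mem_needed     # reserve the resources
    return best

nodes = [
    Node("n1", free_gpu_mem=8000, models={"llm-a"}),
    Node("n2", free_gpu_mem=16000, models={"llm-a", "llm-b"}),
]
chosen = schedule(Task("t1", model="llm-a", gpu_mem_needed=6000), nodes)
assert chosen is not None and chosen.node_id == "n2"
```

A production scheduler would also weigh node reputation, price, and locality, but the core matching logic has this shape.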
Execution and Result Storage:
The computing nodes execute the requested tasks by invoking the corresponding models and verifying the results.
Once verification succeeds, the relevant verification information is recorded on the AI shard. This step ensures the transparency and traceability of the results, providing a guarantee of system reliability.
Results Returned to AI Application Developers:
Finally, the AI application developers receive the computation results they requested. These results can be used for further analysis, operations, or decision-making.
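Putting the four steps together, the request path can be sketched as a single pipeline: upload and review, schedule, execute and record, then return. Everything here (the class names, the echo "model", the result digest) is a hypothetical stand-in for the shard's real components.

```python
import hashlib

class AIShard:
    """Toy coordinator for the four-step flow (illustrative only)."""
    def __init__(self, nodes):
        self.nodes = nodes          # node_id -> node handle (details omitted)
        self.models = {}            # model_id -> model function
        self.ledger = []            # on-chain verification records (simplified)

    def upload_model(self, model_id, fn):
        # Step 1: upload + review (trivially approved in this sketch).
        self.models[model_id] = fn
        self.ledger.append(("model_approved", model_id))

    def invoke(self, model_id, payload):
        # Step 2: schedule -- pick any available node (real matching omitted).
        node_id = next(iter(self.nodes))
        # Step 3: execute and record a digest of the verified result on-chain.
        result = self.models[model_id](payload)
        digest = hashlib.sha256(repr(result).encode()).hexdigest()
        self.ledger.append(("result_verified", model_id, node_id, digest))
        # Step 4: return the result to the AI application developer.
        return result

shard = AIShard(nodes={"node-1": None})
shard.upload_model("echo", lambda x: x.upper())   # stand-in for a real model
assert shard.invoke("echo", "hello") == "HELLO"
assert shard.ledger[0] == ("model_approved", "echo")
assert shard.ledger[1][0] == "result_verified"
```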