SubQuery launches decentralized AI inference hosting at Web3 Summit in Berlin

08/21/2024 16:43

Singapore, Singapore, August 21st, 2024, Chainwire

At the Web3 Summit in Berlin today, SubQuery unveiled its newest innovation: decentralized AI inference hosting. In a live demonstration, SubQuery’s COO, James Bayly, showed the latest Llama model operating across a fully decentralized network of Node Operators on SubQuery’s internal test network.

SubQuery’s vision is to empower developers to shape the future through decentralization. The company is at the forefront of a movement to build the next wave of Web3 applications for millions of users, with decentralization as the core principle.

The SubQuery Network is an advanced infrastructure layer that underpins this vision. It currently supports decentralized data indexers and RPCs, which are critical components for any developer building decentralized applications (dApps). SubQuery has proven itself as a credible alternative to centralized services, offering an open network where anyone can participate as a node operator or delegator.

The role of AI in transforming industries, including Web3, has become increasingly clear. SubQuery has been closely monitoring these developments and working behind the scenes to bring AI capabilities to its decentralized platform. “The Web3 Summit in Berlin, with its focus on decentralization, is the perfect stage for us to launch this new capability and demonstrate it live,” said James Bayly.

SubQuery is focused on AI inference, the process of using pre-trained models to make predictions on new data, rather than on model training. “While there are commercial services that offer inference hosting for custom models, few exist within the Web3 space,” James explained. “Our decentralized network is ideally suited for reliable, long-term AI model hosting.”
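To make the inference-versus-training distinction concrete, here is a minimal illustrative sketch (not SubQuery’s implementation; the model and weights are invented for the example). Inference is simply a forward pass of new input through a model whose weights are already fixed; no learning happens at request time.

```python
# Illustrative only: a tiny "pretrained" model with frozen weights.
# Inference = scoring new input with these weights; no training occurs.
PRETRAINED_WEIGHTS = {"good": 1.0, "great": 1.5, "bad": -1.0, "slow": -0.5}

def infer(prompt: str) -> float:
    """Forward pass over a new prompt using frozen, pre-trained weights."""
    tokens = prompt.lower().split()
    return sum(PRETRAINED_WEIGHTS.get(token, 0.0) for token in tokens)

print(infer("great service"))  # positive score from pretrained weights
print(infer("slow and bad"))   # negative score from pretrained weights
```

A real deployment would run a large language model such as Llama rather than a lookup table, but the shape of the workload is the same: fixed weights, new data in, predictions out.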

Currently, the market for AI inference is dominated by large centralized cloud providers who charge high fees and often use user data to improve their proprietary models. “Providers like OpenAI and Google Cloud AI are not only expensive but also leverage your data to enhance their closed-source offerings,” James noted. SubQuery is committed to providing an affordable, open-source alternative for hosting production AI models. “Our goal is to make it possible for users to deploy a production-ready LLM model through our network in just 10 minutes,” he added.

“Relying on closed-source AI models risks consolidating power in the hands of a few large corporations, creating a cycle that perpetuates their dominance,” James warned. “By running AI inference on a decentralized network, we ensure that no single entity can control or exploit user data. Prompts are distributed across hundreds of node operators, ensuring privacy and supporting an open-source ecosystem.”
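One way to picture distributing prompts across many independent operators is deterministic routing by salted hash, so that no single operator receives every prompt from a user. This is a hypothetical sketch under assumed names (the operator pool and routing scheme are illustrative, not SubQuery’s actual protocol):

```python
import hashlib

# Assumed pool of independent node operators (names are hypothetical).
OPERATORS = [f"operator-{i}" for i in range(100)]

def route(prompt: str, session_salt: str) -> str:
    """Pick one operator per (salted) prompt hash, spreading load and
    preventing any single operator from seeing all of a user's prompts."""
    digest = hashlib.sha256((session_salt + prompt).encode()).hexdigest()
    return OPERATORS[int(digest, 16) % len(OPERATORS)]
```

Because the salt varies per session, the same prompt from different users lands on different operators, which is the privacy property the quote describes.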

The SubQuery Network will provide leading-edge hosting for the latest open-source AI models, enabling scalable and accessible AI services for Web3. By embracing a community-driven approach, SubQuery will support decentralized AI inference at scale, empowering a diverse network of independent Node Operators.

About SubQuery

SubQuery Network is innovating web3 infrastructure with tools that empower builders to decentralise the future – without compromise. Our flexible DePIN infrastructure network powers the fastest data indexers, the most scalable RPCs, innovative Data Nodes, and leading open-source AI models. We are the roots of the web3 landscape, helping blockchain developers and their cutting-edge applications to flourish. We’re not just a company – we’re a movement driving an inclusive and decentralised web3 era. Let’s shape the future of web3, together. 

Linktree | Website | Discord | Telegram | Twitter | Blog | Medium | LinkedIn | YouTube

Contact

Brittany Seales
Head of Marketing
SubQuery PTE LTD
[email protected]
