Centralization Risks in AI, Human Potential Opportunities: Interview With Inference Labs Co-Founder Ronald Chan
08/13/2024 22:10
In an exclusive interview, a prominent AI innovator shares his views on what is next for AI and why this segment truly needs Web3
Disclaimer: The opinions expressed by our writers are their own and do not represent the views of U.Today. The financial and market information provided on U.Today is intended for informational purposes only. U.Today is not liable for any financial losses incurred while trading cryptocurrencies. Conduct your own research by contacting financial experts before making any investment decisions. We believe that all content is accurate as of the date of publication, but certain offers mentioned may no longer be available.
U.Today sat down with Ronald Chan, co-founder of Inference Labs, a trailblazing AI team. This year, the company secured funding from top VCs and launched a first-gen Bittensor subnet.
From big tech and IoT to Web3: Background
Both co-founders of Inference Labs have worked together for over eight years. Ron Chan has been a serial entrepreneur for nearly two decades, with three successful exits from companies he started; in one of them, a European public company acquired the technology Ron and his co-founder Colin had developed. Ron has worked across various industries, from building data centers to national defense projects such as NORAD (North American Aerospace Defense) in North Bay, Canada. Another of the exits he mentioned was in civil aviation: a startup launched nearly ten years ago that began in digital transformation and moved into building custom embedded systems and machine learning in 2018.
This deployment scaled to over 60 airports worldwide, including Chicago O'Hare (the largest in North America by passenger volume), FedEx's main US distribution hub (where pouring the concrete alone took 18 months, and which houses the world's largest deicing facility), and international airports such as London Heathrow.
Mr. Chan stresses that his developments remain relevant despite the segment's rapid pace of change:
The machine learning system was designed to provide visual guidance during winter conditions, with a service level agreement requiring aircraft to be positioned within 2 meters of a specific point 99% of the time, a metric the AI system exceeded. The same model from 2018 is still in production today.
In 2017, they also entered the crypto space commercially; IoT was a prominent Web3 narrative at the time. During their civil aviation work, they needed a secondary backhaul and investigated LoRaWAN, which led them to Helium and the broader Web3 industry at a commercial level.
System and Method for Distributing, Monitoring, and Controlling Information: What is the most crucial patent for Ronald Chan?
The "System and Method for Distributing, Monitoring, and Controlling Information" was initially developed as part of a digital transformation initiative. This innovation opened doors for Ron Chan to enter and disrupt a niche industry focused on ground support operations across numerous international airports.
This disruption revolutionized airport operations, equipping the aviation industry with cutting-edge technological infrastructure. By combining technological and commercial resources, this innovative technology allowed airports to scale quickly and appropriately through integrated solutions, automating processes and enhancing the safety and efficiency of airport and airline operations.
Combating monopolies and centralization in AI: Mission of Inference Labs
At Inference Labs, all contributors believe that the future of AI must include decentralization and self-sovereignty.
Our mission is to hyperscale AI technology while preserving individual freedom of speech and ensuring sovereignty over personalized and proprietary models. In a world where centralized AI systems are often monopolistic walled gardens, we see a critical need to provide alternatives that empower users with control over their data and the AI models they interact with. We recognize that centralized AI, despite its efficiency, often comes with significant trade-offs, including restricted access to data and to how it is used.
These limitations pose risks and challenges which stifle innovation and limit individual freedoms, Mr. Chan stresses.
The risks associated with centralized AI are not merely technical but also ethical and societal. Centralized control over AI models can lead to data misuse and manipulation, as we have witnessed with social media data being exploited by corporations and political entities.
Leveraging ZK tech for proper decentralization in AI
At Inference Labs, they envision a world where AI is democratized, allowing everyone to train their models on the vast expanse of data they interact with daily. However, this vision comes with the imperative that such AI systems must be self-sovereign.
We cannot afford a scenario where individuals lose control over their data or where AI models are used against their creators. Our commitment to decentralized AI is rooted in the belief that control should never be left to intermediaries or untrusted entities.
To address the challenges of decentralized AI adoption, Inference Labs is pioneering the use of zero-knowledge cryptography as a secure, privacy-preserving layer for AI models. By transforming AI models into mathematical representations, the team compiles them into zero-knowledge circuits, ensuring the authenticity and integrity of a model's output. This approach strikes a balance between open-source and proprietary models: users gain cryptographic certainty about their interactions with AI, while operators maintain the confidentiality of the model's IP (weights and biases). Inference Labs is dedicated to developing these foundational technologies, which it believes are essential for building decentralized AI that enhances human and agentic potential without compromising individual rights or freedoms.
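As a purely illustrative sketch (not Inference Labs' actual protocol), the verifiable-inference flow described above can be modeled as a commit-prove-verify interface. In this toy Python version, a hash commitment stands in for a real zero-knowledge circuit: a production system would replace the hash with a zk-SNARK proof so the verifier learns nothing about the weights, whereas here the hash only binds the operator to a fixed model and output. The model, function names, and "proof" format are all hypothetical.

```python
import hashlib
import json

def commit_model(weights):
    """Operator publishes a binding commitment to the model weights
    without revealing them (stand-in for a ZK circuit setup)."""
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()

def run_inference(weights, x):
    """Toy linear model: y = w*x + b (stand-in for a real AI model)."""
    return weights["w"] * x + weights["b"]

def prove_inference(weights, x):
    """Operator returns (output, 'proof'). In a real system the proof is a
    zero-knowledge argument that the committed model produced this output;
    here it is faked with a transcript hash."""
    y = run_inference(weights, x)
    proof = hashlib.sha256(f"{commit_model(weights)}|{x}|{y}".encode()).hexdigest()
    return y, proof

def verify_inference(commitment, x, y, proof):
    """User checks the claimed output against the published commitment.
    A real ZK verifier would need no recomputation and no model access."""
    expected = hashlib.sha256(f"{commitment}|{x}|{y}".encode()).hexdigest()
    return proof == expected

# Usage: operator commits once, then proves each inference; users verify.
weights = {"w": 3.0, "b": 1.0}
commitment = commit_model(weights)                 # e.g., published on-chain
y, proof = prove_inference(weights, 5.0)           # operator side
print(verify_inference(commitment, 5.0, y, proof))         # honest output passes
print(verify_inference(commitment, 5.0, y + 1.0, proof))   # tampered output fails
```

The point of the interface is the separation of roles: the commitment is public, the weights stay private with the operator, and any user can check that a claimed output is consistent with the committed model.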