Tether CEO Paolo Ardoino argues that localizing AI models is crucial to protecting people’s privacy. AI models that run locally on devices such as smartphones and laptops, he believes, can ensure resilience and independence while keeping user data private.
Ardoino noted that modern smartphones and laptops already have enough processing power to fine-tune general-purpose large language models (LLMs) on a user’s own data, with every enhancement remaining on the device. He described the approach on social media as a “work in progress” (WIP).
In an interview with Cointelegraph, Ardoino described locally executable AI models as a “paradigm shift” for user privacy and independence. Because these models run directly on users’ devices, they eliminate the need for third-party servers, improving security and enabling offline use. Users can benefit from powerful AI experiences while retaining complete control over their data.
Tether recently announced its foray into AI, and Ardoino said the company is “actively exploring” the integration of locally executable models into its AI solutions. The move follows a security breach at OpenAI, the company behind ChatGPT: in early 2023, a hacker accessed OpenAI’s internal messaging systems and obtained sensitive information about the company’s AI designs.
Separately, it also emerged that the ChatGPT app for macOS stored user conversations in plain-text files, raising further privacy concerns. Although OpenAI has since resolved the issue, the community remains wary, and some speculate that OpenAI retained these logs to improve and train ChatGPT.
As AI technology advances, major developers such as OpenAI, Google, Meta, and Microsoft dominate the field. That dominance has raised concerns among industry analysts and governments about user privacy and data control, fueling growing support for decentralizing AI to counter Big Tech’s monopoly and ensure a fairer future. Initiatives like 6079 are pushing for more equitable development across the AI industry.