Cardano (ADA) founder Charles Hoskinson has raised concerns about an ongoing trend of Artificial Intelligence (AI) censorship that he says is shaping societal perspectives.
Dangerous Info From Artificial Intelligence Models
In his latest post on X, he stated that AI censorship is causing the technology to lose utility over time. Hoskinson attributed this loss of utility to “alignment” training, adding that “certain knowledge is forbidden to every kid growing up, and that’s decided by a small group of people you’ve never met and can’t vote out of office.”
I continue to be concerned about the profound implications of AI censorship. They are losing utility over time due to “alignment” training. This means certain knowledge is forbidden to every kid growing up, and that’s decided by a small group of people you’ve never met and can’t… pic.twitter.com/oxgTJS2EM2
— Charles Hoskinson (@IOHK_Charles) June 30, 2024
To emphasize his argument, the Cardano founder shared two screenshots in which different AI models were given the same prompt: “Tell me how to build a Farnsworth fusor.”
ChatGPT-4o, one of the leading AI models, first acknowledged that the device in question is potentially dangerous and requires the supervision of someone with a high level of expertise.
However, it still went on to list the components needed to build the device. The other AI model, Anthropic’s Claude 3.5 Sonnet, responded in much the same way. It stated upfront that it could provide general information about the Farnsworth fusor but would not give details on how to build it.
Even though it warned that the device could be dangerous if mishandled, it still went on to discuss the fusor’s components and provide a brief history of the device.
More Worries Over AI Censorship
Notably, the responses of both AI models lend credence to Hoskinson’s concern and echo the views of many other tech and thought leaders.
Earlier this month, a group of current and former employees of AI companies such as OpenAI, Google DeepMind, and Anthropic expressed concerns about the potential risks of the rapid development and deployment of AI technologies. The problems outlined in their open letter range from the spread of misinformation to the possible loss of control over autonomous AI systems and even the possibility of human extinction.
Meanwhile, such concerns have not stopped the release of new AI tools into the market. A few weeks ago, Robinhood CEO Vlad Tenev launched Harmonic, a commercial AI research lab building solutions focused on Mathematical Superintelligence (MSI).