PC PLACE Blog
Should AI Development Be Stopped? Some Experts Say So
It seems you can't turn your head nowadays without seeing artificial intelligence incorporated into some software or platform. However, many leaders in the technology space have expressed concern about what they call the "profound risks to society and humanity" that AI poses, as outlined in an open letter.
This Letter Effectively Calls for a Pause on the Development of AI
With tens of thousands of signatures, the short letter cautioned against the unfettered growth of AI without a greater appreciation of the potential outcomes.
These signatories are concerned for assorted reasons, with potential consequences that could materialize in the short, medium, and long term.
In the Short Term, Misinformation is a Massive Concern
Because AI can reference, and even manufacture, false or disingenuous data, there is a very real chance that the information these systems produce could be false, which is particularly dangerous when so many people already turn to the Internet for important information. Worse, the capabilities of these platforms can make such falsehoods far more convincing.
What's more, this process can be automated, so content meant to spread misinformation can be produced far more quickly and shared that much more easily.
As Time Passes, Job Loss Could Result
Many technology experts are also concerned that AI could swiftly render many current forms of gainful employment obsolete. While some knowledge-based careers still require more practical skill than AI can replicate (yet), many could ultimately see their roles reduced significantly, if not eliminated entirely.
In the Long Term, Could AI Leave Us Obsolete and Extinct?
Yes, it sounds extreme, but some experts—specifically those from the Future of Life Institute, an organization dedicated to studying "existential risks to humanity"—warn that the largely unpredictable nature of AI could create very serious issues as it "learns" to write its own code. In fact, the Future of Life Institute is the organization that wrote the aforementioned open letter.
Likewise, the Center for AI Safety has collected signatures in support of its own brief statement:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
In short, scary stuff.
While Some of These Threats Are Predictions, Others Are Already a Problem
Like any other major technological advancement, artificial intelligence definitely has some kinks to iron out—and these challenges will almost certainly prompt legislation meant to put safeguards in place.
What do you think? Do you foresee anything being done to slow, or even stop, the advancement of AI?