When the U.S. federal government first began taking a serious interest in Artificial Intelligence (AI) and the potential dangers it might pose, it commissioned a study on the subject from the experts at Gladstone AI. There was no indication as to whether our vaunted Artificial Intelligence Czar, Kamala Harris, was involved in the plan or even made aware of it. Gladstone tackled the problem and conducted a lengthy study, issuing a final report that was made available this week. The news the report contains is in some ways quite alarming. Not only did they find that AI systems pose a growing risk to national security, but such technology creates a “clear and urgent need” for government intervention to avoid global security destabilization that could potentially lead to “human extinction.” Those are their words, not mine. But it certainly sounds like someone should be paying attention to this. (Fox Business)
The U.S. government has a “clear and urgent need” to act as swiftly developing artificial intelligence (AI) could potentially lead to human extinction through weaponization and loss of control, according to a government-commissioned report.
The report, obtained by TIME Magazine and titled, “An Action Plan to Increase the Safety and Security of Advanced AI,” states that “the rise of advanced AI and AGI has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”
“Given the growing risk to national security posed by rapidly expanding AI capabilities from weaponization and loss of control — and particularly, the fact that the ongoing proliferation of these capabilities serves to amplify both risks — there is a clear and urgent need for the U.S. government to intervene,” read the report, issued by Gladstone AI Inc.
Many of the specific details of the technology involved are far too complex and advanced for my poor brain to digest. But the list of recommended government actions is written in plain English, though some of the specifics seem rather vague. One of the first things Gladstone calls for is “establishing interim advanced AI safeguards before formalizing them into law.” It goes on to say that such safeguards would later be “internationalized.”
This is problematic right out of the gate, and we’ve discussed the concept here before, along with the work of some leading AI developers who have been looking into the possibilities. Nobody seems to be sure precisely what those safeguards or “guardrails” would look like, or even whether they could be installed without crippling the functionality of the systems. As for “internationalizing” them, the United States isn’t the boss of the world, despite acting that way sometimes. We don’t get to dictate to every other country how AI will be used or regulated, and not everyone will go along with us. You can rest assured that China and Russia won’t if they believe that advanced AI will give them an advantage. So all we’ll be doing is limiting our own efforts.
Gladstone goes on to suggest the creation of “a new AI agency putting a leash on the level of computing power AI is set at.” The government would also be given the power to force AI companies to seek Washington’s permission to exceed certain “thresholds” of computing power. I don’t recall the last time the creation of a new federal agency solved much of anything. In this case, I’m particularly worried about whether we have people in Washington with the technical and intellectual savvy to deal with this new frontier in computing. I wouldn’t trust three-quarters of them to manage a lemonade stand, but we’re going to put them in charge of this sort of power in the tech industry?
I think this report merits attention, but it doesn’t answer all of the current questions. I keep going back and forth in terms of how much of a danger AI truly represents to the future of mankind. There are days when I wonder if this won’t all turn out to be like the Y2K bug (for those of you old enough to remember that), which was supposed to spell the end of the world but turned out to largely be a nothingburger. I also wonder if the entire AI industry isn’t about to go up in smoke once the copyright lawyers are finished with these companies in court. Of course, thoughts like those will end up being cold comfort when the thirty-foot-tall robots are coming up the street and smashing everyone’s homes into kindling.