I agree with each of those points, which could guide us toward real limits to mitigate the dark side of AI. Things like disclosing what goes into training large language models like the ones behind ChatGPT, and allowing opt-outs for those who don’t want their content to be part of what LLMs present to users. Rules against built-in bias. Antitrust laws that prevent a few giant companies from forming an artificial intelligence cabal that homogenizes (and monetizes) virtually all the information we receive. And protection of your personal information as it’s used by those know-it-all AI products.
But reading that list also underlines the difficulty of turning uplifting suggestions into actual binding law. When you look closely at the points in the White House plan, it’s clear that they apply not just to AI but to pretty much everything in technology. Each one seems to embody a user right that has long been violated. Big tech didn’t need generative AI to develop unfair algorithms, opaque systems, abusive data practices, and missing opt-outs. That’s table stakes, friends, and the fact that these issues come up only in a discussion of a new technology highlights our failure to protect citizens against the ill effects of the technology we already have.
During the Senate hearing at which Altman spoke, senator after senator sang the same refrain: We blew it when it came to regulating social media, so let’s not make the same mistake with AI. But there is no statute of limitations on lawmaking to curb past abuses. Last time I looked, billions of people, including nearly everyone in the US with the means to touch a smartphone screen, are still on social media, where they are bullied, have their privacy compromised, and are exposed to horrors. Nothing prevents Congress from getting tougher on those companies and, above all, from passing privacy legislation.
The fact that Congress has not done this casts serious doubt on the prospects for an AI bill. Not surprisingly, some regulators, notably FTC Chair Lina Khan, aren’t waiting for new laws. She says current law already gives her agency plenty of jurisdiction to address the issues of bias, anticompetitive behavior, and invasion of privacy that new AI products present.
Meanwhile, the difficulty of proposing new laws and the enormity of the work that remains were highlighted this week when the White House issued an update on that AI Bill of Rights. It explained that the Biden administration is sweating hard to come up with a national AI strategy. But apparently the “national priorities” in that strategy have yet to be defined.
Now the White House wants tech companies and other AI stakeholders, along with the general public, to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to suggest a path forward, the administration is asking corporations and the public for ideas. In its request for information, the White House promises to “consider each comment, whether it contains a personal narrative, experiences with artificial intelligence systems, or technical, legal, investigative, political, or scientific materials, or other content.” (I breathed a sigh of relief to see that feedback from large language models isn’t being solicited, though I’m willing to bet that GPT-4 will be a big contributor despite that omission.)