California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal


The legality of the AI industry’s business practices has long been an open question. As a “disruptive” new technology, artificial intelligence has caused a wealth of problems at the same time that it has offered society new benefits. Notably, AI has been used to mislead consumers, to create new forms of disinformation and propaganda, and to discriminate against certain groups of people. Now, the California Attorney General’s office has issued a legal memo emphasizing that all of that stuff is probably illegal.

On January 13th, California AG Rob Bonta issued two legal advisories that illustrate the myriad areas where the AI industry could be getting itself into trouble. “The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity,” the advisory says. “For AI systems to achieve their positive potential without doing harm, they must be developed and used ethically and legally,” it continues, before detailing the many ways in which AI companies could, potentially, be breaking the law.

Some of those ways include:

  • Using AI to “foster or advance deception.” If you hadn’t noticed, the internet is currently awash in a veritable tsunami of fake content. Concerns about a new generation of deepfakes and disinformation have exploded ever since AI content generators became popular—and with good reason. California’s memo makes clear that companies that use AI to create “deepfakes, chatbots, and voice clones that appear to represent people, events, and utterances that never existed” could fall under the category of “deceptive” and, thus, be considered a breach of state law.
  • Falsely advertising “the accuracy, quality, or utility of AI systems.” There has been quite a lot of, shall we say, hyperbole, when it comes to the AI industry and what it claims it can accomplish versus what it can actually accomplish. Bonta’s office says that, to steer clear of California’s false advertising law, companies should refrain from “claiming that an AI system has a capability that it does not; representing that a system is completely powered by AI when humans are responsible for performing some of its functions; representing that humans are responsible for performing some of a system’s functions when AI is responsible instead; or claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.”
  • Creating or selling an AI system or product that has “an adverse or disproportionate impact on members of a protected class, or create, reinforce, or perpetuate discrimination or segregation of members of a protected class.” AI systems have been shown to integrate human bias into their algorithms, which is particularly disturbing when you consider that AI is now being used to vet people for housing and employment opportunities. Bonta’s office notes that automated systems that have disparate impacts on different groups of people could run afoul of the state’s anti-discrimination laws.

Bonta’s advisory also includes a list of recently passed regulations related to the AI industry. The fact that the advisory says that all of these activities “may” break the law seems to signal that companies should effectively self-regulate, lest they stray into criminal territory and tempt the state to take action against them.

Bonta’s memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble. Currently, OpenAI is being sued by the New York Times, which has accused the company of breaking U.S. copyright law by using its articles to train its algorithms. AI companies have repeatedly been sued over this issue but, because AI’s foray into content generation represents largely unsettled legal territory, none of those lawsuits have yet been successful.
