Open Source Bites Back as China’s Military Makes Full Use of Meta AI


Chinese research institutions with connections to the Chinese military have developed AI systems using Meta’s open-source Llama model. Papers discussing the AI model are unequivocal: the bots have military applications.

According to a report from Reuters, six Chinese researchers from three different institutions with connections to the People’s Liberation Army (PLA) released a paper about the AI in June. The researchers scooped up Llama 13B, an early version of Meta’s open-source large language model, and trained it on military data with the goal of making a tool that could gather and process intelligence and help make decisions.

They called it ChatBIT. “In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also … strategic planning, simulation training, and command decision-making will be explored,” the paper said, according to a Reuters translation.

ChatBIT was trained using 100,000 military dialogue records. Another paper from the same period described how a Llama-based LLM has already been deployed for domestic policing. Like ChatBIT, the domestic version helps police gather and process large amounts of data to aid decision-making.

In a third paper that Reuters uncovered, two researchers at an aviation firm connected to the PLA describe using Llama 2 for warfare. The bot is for “the training of airborne electronic warfare interference strategies,” Reuters said.

We’re living through an AI gold rush. Companies like OpenAI and Microsoft are trying to make millions of dollars from proprietary AI systems while promising the moon. Many of those systems are closed: black boxes whose inputs and training data are poorly understood by the end user.

Mark Zuckerberg took Meta a different way. In a July essay that invoked open-source gold standard systems like Unix and Linux, Zuckerberg decreed that “open-source AI is the path forward.”

“There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives,” the essay said. “I think governments will conclude it’s in their interest to support open source because it will make the world more prosperous and safer.”

At the time, Zuckerberg also waved off fears that China would get its hand on Llama. He argued the benefits outweighed the risks. “Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the U.S. and its allies,” he said. “Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geo-political adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities.”

Meta does outline acceptable practices for using its open-source LLMs. The list includes prohibitions against “military, warfare, nuclear industries or applications, espionage, use for materials or activities” and “any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual.”

Those are, of course, precisely the kinds of things a military does; inflicting bodily harm is its whole job. “Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy,” Meta’s director of public policy, Molly Montgomery, told Reuters.

But Meta has no real recourse. Llama is out there, China is using it, and Zuckerberg’s company has no way to stop it. The open-source revolution continues.
