How Do We Stop Deepfakes From Tricking Juries?


Reflecting on the evidence that passes through her Phoenix, Arizona courtroom, superior court judge Pamela Gates says she’s becoming less confident that the average person can sort out the truth.

Say a victim presents a photograph showing bruises on their arm and the defendant argues that the injuries were digitally added to the image. Or perhaps a plaintiff submits an incriminating recording and the defendant protests that while the voice sounds identical to theirs, they never spoke the words.

In an era where anyone can use free generative AI tools to create convincing images, video, and audio, judges like Gates are increasingly worried that courts aren’t equipped to distinguish authentic material from deepfakes.

“You had a better ability to assess [evidence in the past] just using your common sense, the totality of the circumstances, and your ability to verify the authenticity by looking at it,” said Gates, who is chairing an Arizona state court workgroup examining how to handle AI-generated evidence. “That ability to determine based on looking at it is gone.”

The explosion of cheap generative AI systems has prompted some prominent legal scholars to call for changes to rules that have governed court evidence in the U.S. for 50 years. Their proposals, including several that were reviewed by a federal court advisory committee earlier this month, would shift the burden of determining authenticity away from juries and place more responsibility on judges to separate fact from fiction before trials begin.

“The way the rules function now is if there’s any question about whether the evidence is authentic or not, it should go to the jury,” said Maura Grossman, a computer science and law professor who, along with former federal judge Paul Grimm, has authored several proposed changes to the federal rules of evidence aimed at deepfakes. “We’re saying wait a second, we know how impactful this stuff is on the jury and they can’t just strike that [from their memory], so give the court more power. And that’s a big change.”

‘Befuddle and confuse’

Jurors find audio-visual evidence convincing and hard to forget.

Rebecca Delfino, an associate dean and law professor at Loyola Law School who has proposed her own changes to evidentiary rules, points to studies showing that exposure to fabricated videos can convince people to give false testimony about events they witnessed, and that jurors who see video evidence in addition to hearing oral testimony are more than six times as likely to retain the information as those who only hear the testimony.

Judges already have some power to exclude potentially fake evidence, but the standard parties must meet to get contested evidence before a jury is relatively low. Under current federal rules, if one party were to claim that an audio recording wasn’t their voice, the opposing party would need only call a witness familiar with their voice to testify to its similarity. In most cases, that would satisfy the burden of proof necessary to get the recording before a jury, Grossman said.

Given the current quality of deepfaked audio and images—which, as scammers have demonstrated, can trick parents into believing they’re hearing or seeing their children—the proponents of new court rules say AI fabrications will easily pass that low barrier.

They also want to protect juries from the opposite problem: litigants who claim that legitimate evidence is fake. They worry that the glut of AI-generated content people encounter online will predispose jurors to believe those false accusations, which scholars have dubbed the liar’s dividend.

Several defendants have already attempted that argument in high-profile cases. Lawyers for rioters who stormed the U.S. Capitol building on Jan. 6, 2021, argued that critical video evidence in the trials may have been fake. And in a civil trial involving a fatal Tesla crash, attorneys for Elon Musk suggested that videos of Musk boasting about the safety of the car brand’s autopilot feature may have been AI-generated.

“Any time you have an audio-visual image in a trial, which is the most common type of evidence presented at any trial, there’s a potential for someone to make that claim,” Delfino said. “There’s a real risk that it’s not only going to extend and prolong trials but utterly befuddle and confuse juries. And there’s a strong risk that smart attorneys are going to use it to confuse juries until they throw up their hands and say ‘I don’t know.’”

The proposals

On November 8, the federal Advisory Committee on Evidence Rules reviewed the latest rule proposal from Grossman and Grimm, which would empower judges to exert a stronger gatekeeping role over evidence.

Under their new rule, a litigant challenging the authenticity of evidence would have to provide sufficient proof to convince a judge that a jury “reasonably could find” that the evidence had been altered or fabricated. From there, the burden would shift back to the party seeking to introduce the contested evidence to provide corroborating information. Finally, it would be up to the judge in a pre-trial hearing to decide whether the probative value of the evidence—the light it sheds on the case—outweighs the prejudice or potential harm that would be done if a jury saw it.

Delfino’s proposals, which she laid out in a series of law journal articles but has not yet formally submitted to the committee, would take deepfake questions entirely out of the hands of the jury.

Her first rule would require that the party claiming a piece of evidence is AI-generated obtain a forensic expert’s opinion regarding its authenticity well before a trial began. The judge would review that report and other arguments presented and, based on the preponderance of the evidence, decide whether the audio or image in question is real and therefore admissible. During the trial, the judge would then instruct the jury to consider the evidence authentic.

Additionally, Delfino proposes that the party making the deepfake allegation should pay for the forensic expert—making it costly to falsely cry deepfake—unless the judge determines that the party lacks the financial resources to cover the cost, in which case the other party would pay instead.

No quick fix

Any changes to the federal rules of evidence would take years to finalize and would first need approval from a series of committees and, ultimately, the Supreme Court.

So far, the Advisory Committee on Evidence Rules has chosen not to move forward with any of the proposals aimed at deepfakes. Fordham Law School professor Daniel Capra, who is tasked with investigating evidence issues for the committee, has said it may be wise to wait and see how judges handle deepfake cases within the existing rules before making a change. But in his most recent report, he added that “a [new] rule may be necessary because deepfakes may present a true watershed moment.”

In Arizona, Gates’ committee on AI-generated evidence has been considering whether there’s a technological solution to the deepfake problem that courts could quickly implement.

Academic researchers, government forensics experts, and big tech companies are in an arms race with generative AI developers to build tools that can detect fake content or add digital watermarks to it at the point it’s created.

“I don’t think any of them are ready for use in the court,” Gates said of the AI-detection tools she’s seen.

V.S. Subrahmanian, a computer science professor and deepfake expert at Northwestern University, and his colleagues recently tested the performance of four well-known deepfake detectors. The results weren’t encouraging: the tools labeled between 71 and 99 percent of fake videos as real.
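To make that figure concrete, the short sketch below shows how a “fake videos labeled as real” rate is tallied. The detector verdicts are entirely hypothetical and are not data from the Northwestern study; they simply illustrate the arithmetic behind a miss rate.

    # Illustrative sketch only: hypothetical verdicts, not data from the Northwestern study.
    # 1 means the detector called a clip "fake"; 0 means it called the clip "real".
    # Every clip in this toy sample is actually a deepfake.
    verdicts = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]

    missed = sum(1 for v in verdicts if v == 0)   # deepfakes the tool labeled as real
    miss_rate = missed / len(verdicts)

    print(f"Fake videos labeled as real: {miss_rate:.0%}")  # prints 80% for this toy sample

On this toy sample the tool misses 8 of 10 fakes, the same kind of miss rate the researchers reported in the 71 to 99 percent range.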

Subrahmanian said that, at least in the near term, he doesn’t expect watermarking technologies to be widespread or reliable enough to solve the problem either. “Whatever the protection is, there’s going to be somebody who wants to figure out how to strip it out.”

Access to justice

So far, there have been few publicized cases where courts have had to confront deepfakes or claims that evidence was AI-generated.

In addition to the January 6 rioter trial and Musk’s civil suit, Pennsylvania prosecutors in 2021 accused Raffaela Spone of criminally harassing members of her daughter’s cheerleading team by sharing allegedly deepfaked videos of the girls drinking, vaping, and breaking team rules. Spone denied that the videos were deepfakes but didn’t have the financial resources to hire a forensic expert, according to her lawyer. However, after her case made national news, a team of forensic experts offered to examine the evidence pro bono and determined that the videos were real. Prosecutors eventually dropped the deepfake-related harassment charges against Spone.

Not everyone will be so lucky. The judges and legal scholars Gizmodo spoke to said they’re most concerned about cases that are unlikely to make headlines, particularly in family courts where litigants often don’t have attorneys or the financial resources to hire expert witnesses.

“What happens now when a family court judge is in court and I come in and I say, ‘My husband’s threatening me and the kids … I have a tape of him threatening us,’” Grossman said. “What on earth is that judge supposed to do under those circumstances? What tools do they have? They don’t have the tools right now.”
