Not actual legal advice: AI model tries to re-create mind of Ruth Bader Ginsburg




By Pranshu Verma

Washington: When reports surfaced in May that the US Supreme Court wanted to overturn abortion rights, many wondered how the late Justice Ruth Bader Ginsburg might have responded. Now, they don’t have to wait.

“I think they’re wrong on the law, but on the facts, no,” said a simulation of Ginsburg, who died in 2020, when asked about the Supreme Court’s upcoming decision on Roe v. Wade.

The answer came not from Ginsburg’s numerous court opinions, but from an artificial intelligence model of the late justice released on Tuesday. “Whether it’s good or bad, it’s settled, and, therefore, it’s not my business to think about it,” the RBG bot concluded.

Supreme Court justice Ruth Bader Ginsburg. Credit: AP

The model, called Ask Ruth Bader Ginsburg, is based on 27 years of Ginsburg’s legal writings on the Supreme Court, along with a host of news interviews and public speeches. A team from the Israeli artificial intelligence company AI21 Labs fed this record into a complex language-processing program, giving the AI the ability, they say, to predict how Ginsburg would respond to questions.

“We wanted to pay homage to a great thinker and leader with a fun digital experience,” the company says on the AI app’s website. “It is important to remember that AI in general, and language models specifically, still have limitations.”


The tool arrives amid fierce debate over the ethics of creating technology that replicates human life, particularly when the humans involved aren’t around to offer input. But its creators argue their invention is a useful and easy-to-use tool that helps ordinary people, who might not know much about technology, understand how the field of artificial intelligence is progressing.

“There are not many places where the general public can go and play with real AI,” said Yoav Shoham, co-founder of AI21 Labs. “But now you can.”

In recent years, research labs and companies across the world have been racing to build technology that replicates or surpasses human intelligence, offering ways for people to examine and interact with their work along the way.


OpenAI, an Elon Musk-backed artificial intelligence company, unveiled a text generator, GPT-3, that can write movie scripts and undergirds an image generator, DALL-E 2, which translates text commands into inventive, sometimes psychedelic visuals.


In 2020, Shoham’s company created Wordtune, a tool that suggests different ways to write sentences. They followed the release a year later with Wordtune Read, which summarises the main points of long, dense passages.

But as AI technology has gotten better, Shoham said, views surrounding the field have become divided. “People project all kinds of [thoughts] on . . . automation that has nothing to do with reality,” he said. “I don’t want people to be disappointed by the underperformance of current AI and I don’t want them to monger fear.”

The general public, he said, needs to make up its own mind, and his team’s RBG model offers an accessible, hands-on way of engaging with the technology.

To build it, the researchers used Jurassic-1, a neural network they created that analyses large troves of data and develops its own understanding of language to spit out responses to questions or prompts. Neural networks are computer architectures that attempt to mimic the way the human brain processes information.

They fed the model roughly 600,000 of Ginsburg’s words and created a tool that lets anyone ask it questions, to which it gives answers based on the massive trove of writing. “It gives you access to the kind of wisdom possessed by a person we hold in high regard,” Shoham said.
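At its core, a bot like this simply continues text from a prompt. The sketch below is a minimal illustration, not AI21’s actual system: it assumes the small open-source GPT-2 model served through the Hugging Face transformers library, with a made-up prompt format, rather than Jurassic-1 and the curated Ginsburg corpus.

```python
# Illustrative sketch only: prompting a generic pretrained text-generation
# model with a question, the way the RBG bot is described as turning a
# prompt into a styled answer. The real system uses AI21's far larger
# Jurassic-1 model trained with roughly 600,000 of Ginsburg's words.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

question = "Should federal courts defer to the factual findings of state courts?"
prompt = f"Question: {question}\nAnswer:"

# Ask the model to continue the prompt; sampling makes each answer vary.
result = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```

A production system would presumably layer a much larger model, corpus-specific training and output filtering on top of this basic prompt-and-continue loop.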

Paul Schiff Berman, a law professor at George Washington University who clerked for Ginsburg from 1997 to 1998, said that when he saw the bot, he was amused.

Right away, he tried asking it a question he would have been interested in getting Ginsburg’s opinion on: “Should federal courts defer to the factual findings of state courts?”

The response left a lot to be desired, according to Berman. The model didn’t directly answer the question, and its reply implied Ginsburg didn’t believe in the judicial concept of deference, which is not true, he said. Berman also noted that the model did a poor job of replicating her distinctive speaking and writing style.

“I would have thought that’s something the AI could have imitated better,” he said. “If this is the best the [technology] can do, we’ve still got a ways to go.”

Meanwhile, several AI technology experts raised concerns with the experiment.

Emily Bender, a linguistics professor at the University of Washington, said she recognises the experiment’s creators come from a place of respect for Ginsburg, but insinuating the technology can think and reason like the late justice is not accurate. “It can spit out words and the style of those words are going to be informed by the style of text they fed into it, but it’s not doing any reasoning,” she said.

Bender added that linguistics research shows that when people encounter “coherent-seeming texts” on a topic they care about, there’s a risk they will take it seriously when they shouldn’t.

“People might use this to make arguments out in the world and say, ‘Well, RBG would have said,’ this AI [model] told me so.”

Meredith Broussard, an associate professor and artificial intelligence researcher at New York University, said the bot is engaging but should not be confused with actual legal advice. “It’s really fun to play with, but we should not take it seriously as well as we shouldn’t pretend that that’s a lawyer,” she said. (AI21 states that the model is “just an experiment” and that it can give inaccurate responses that should be taken “with a grain of salt”.)


Broussard added that the technology does not seem to be much more advanced than ELIZA, a chatbot created at MIT in the 1960s, in which a computer program mimicked a therapist well enough to make some people think it was human. She added there could be a limit to how advanced this type of artificial intelligence technology can ever get.

“There is a ceiling on the technology because it’s not a brain, it’s a machine,” she said. “And it’s just doing math.”

Washington Post


