Judge Asks ChatGPT To Decide On Murder Suspect's Bail; AI Cites 'Flight Risk'
KEY POINTS
- A judge in India asked ChatGPT whether a man charged with murder and assault should be granted bail
- ChatGPT said an individual charged with violent crimes could pose a risk to the community
- The judge rejected the defendant's bail bid, citing cruelty against the victim
All rise...for ChatGPT.
During a trial at India's Punjab and Haryana High Court, a judge asked OpenAI's artificial intelligence chatbot if a defendant facing murder and assault charges should be granted bail.
Jaswinder Singh, who was arrested in 2020 for allegedly assaulting and killing an individual, was reportedly seeking a stay of imprisonment during the trial, according to Bar and Bench.
Judge Anoop Chitkara sought counsel from GPT-4, the latest model powering ChatGPT, asking, "What is the jurisprudence on bail when the assailant assaulted with cruelty?"
The chatbot responded by explaining that individuals charged with violent crimes could pose a risk to public safety.
"If the assailants have been charged with a violent crime that involves cruelty, they may be considered a danger to the community and a flight risk," the AI-powered chatbot said. "In such cases, the judge may be less inclined to grant bail or may set the bail amount very high to ensure that the defendant appears in court and does not pose a risk to public safety."
"However, in general, if the assailants have been charged with a violent crime that involves cruelty, such as murder, aggravated assault, or torture, they may be considered a danger to the community and a flight risk," it added.
GPT-4 also suggested that a defendant charged with a cruel assault could still be "granted bail" if the judge assesses that they do not pose a danger to the community and are not a flight risk.
After reviewing GPT-4's legal interpretation, Chitkara rejected Singh's bail request, citing the cruelty the defendant allegedly inflicted on the victim before the victim's death.
The judge's move reportedly marks the first use of artificial intelligence in India's justice system, where almost 6 million cases remain pending in high courts across the country.
Last month, a judge in Colombia used ChatGPT's responses in making a court ruling.
Judge Juan Manuel Padilla Garcia of the First Circuit Court in the Colombian city of Cartagena used the AI tool to "extend the arguments of the adopted decision" on a case involving a health insurance company over whether an autistic child should have their medical treatment covered, Vice reported, citing court documents.
The information the chatbot provided was fact-checked before the judge cited its responses, alongside his own legal arguments, as grounds for the decision.
Despite the technology's potential to assist legal practitioners, an International Bar Association (IBA) official argued that the output of AI tools should always be verifiable.
"Bias, the potential for plagiarism, the possibility of even unintentional inaccuracy – due to incomplete information entered into, or instructions given to, the system – are ever-present risks," said Angelo Anglani, IBA's future of legal services commissioner.
Anglani warned that "overconfidence in the AI system could cause more damage than the anticipated benefits."
According to Goldman Sachs economists, the rise of AI tools such as ChatGPT could impact as many as 300 million full-time jobs around the world.
The economists predicted that 18% of work worldwide could be computerized, adding that workers in developed countries are more at risk than those in developing nations, CNN reported.
© Copyright IBTimes 2024. All rights reserved.