TECH NEWS – The rise of artificial intelligence can no longer be swept under the carpet.
Vice has reported that a Colombian judge, Juan Manuel Padilla Garcia, has admitted to using ChatGPT to deliver a verdict in a case in Cartagena. The judge acknowledged the use of the AI in a court document dated January 30, saying he did so to speed up the drafting of the ruling after verifying the information the chatbot provided. The case itself was a dispute between a health insurer and the family of an autistic child over the child's medical care.
Garcia based his decision in part on ChatGPT's answers about legal precedent and case law, and said he fact-checked each response. That caveat matters, because chatbots can be prone to bias and their reliability is open to question. The judge took a bold path, as it is largely unprecedented for a judge to use artificial intelligence in reaching a judgment.
It is also a slippery slope: a judge is supposed to own the decision, and getting there requires years of rigorous education and training, a process many students never finish. Passing the exam is one way a judge proves they can do the job and can play their part in upholding the law. In the United States, the legal community in California has firmly rejected the use of AI by legal advisers, and others may well condemn it in similar terms.
What is at stake here is not a human life (or at least not a death sentence…) but ‘only’ a child’s medical care; in a weightier case, the technology could not be relied upon in the same way. Of course, there are situations where AI can help, but this is not an area where it should be accepted without question.
Source: PCGamer