AI and Admissibility of Evidence

About the Author

Alexy Joy is an editor at ljrfvoice.com.

Introduction

Artificial intelligence (AI) is the emulation of human intellect in machines designed to think and act like humans. The phrase may also refer to any computer that demonstrates characteristics of human intellect, such as learning and problem-solving. The ideal quality of AI is the ability to reason and take actions that have the highest likelihood of achieving a given objective. Machine learning (ML), a subset of AI, is the idea that computer systems can automatically learn from and adapt to new data without human assistance. Deep learning algorithms enable this autonomous learning by ingesting vast quantities of unstructured data, including text, photos, and video.

Section 17 of the Indian Evidence Act, 1872, defines the term ‘admission.’ Admissibility, by contrast, involves determining whether a judge can legally consider evidence or information presented in court when deciding a case’s outcome. It hinges on factors such as relevance, reliability, and adherence to the legal procedures stipulated in the Act and other relevant laws. The Act specifies rules for admissibility, including those relating to relevancy, hearsay, expert opinions, character evidence, public documents, privileged communications, admissions, confessions, and the potential exclusion of illegally obtained evidence. Judges are responsible for assessing the admissibility of evidence in line with the Act, ensuring a just and equitable legal process. Admissibility encompasses all relevant facts that a court deems permissible, and under Section 136 of the Evidence Act, the ultimate authority for determining the admissibility of evidence rests with the judge.

Section 136 specifies that when a party intends to present evidence of any fact or circumstance, the presiding judge has the discretion to inquire how the purported fact, when established, would be pertinent. If the judge deems that the fact, when demonstrated, would indeed be relevant and not otherwise, they shall admit the evidence. Last month, after almost 150 years, India overhauled the legal framework for its criminal justice system. The Parliament replaced the Indian Evidence Act, 1872 (IEA) with the Bharatiya Sakshya Adhiniyam, 2023 (BSA). This move can have implications for India’s transition to a digitally empowered society.

AI and Admissibility of Evidence

The application of artificial intelligence has become pervasive across industries, including the legal industry, where it is set to reduce the most mundane work and the inefficiencies that pervade the field. Automation has already proved beneficial in non-litigation work, especially document due diligence and the digitisation of case law, allowing easier search, storage, and transfer while drastically reducing costs and labour hours. Prime Minister Narendra Modi has described the 2020s as India’s ‘Techade’ (during his Independence Day speech on August 15, 2022, he said that India’s ‘techade’ had arrived with the extensive development of 5G technology and chip manufacturing, and that the government was successfully bringing a revolution to the grassroots through the Digital India initiative), a decade to be marked by increased adoption of technologies, with India leading that adoption.

Emerging technologies such as AI hold the potential to boost productivity and efficiency across various sectors, including the criminal justice system. AI-generated outputs could also be used as evidence in judicial proceedings or help law enforcement agencies in investigations. For instance, AI algorithms can enhance the forensic analysis of evidence such as fingerprints and DNA samples; AI-powered emotion-detection technology can assess the emotional state of suspects; and AI-based voice-recording software can supply recordings as evidence in legal proceedings. There are, however, complexities involved in using AI-generated outputs as evidence in a trial, and it is important to assess whether the BSA is adequately equipped to address the admissibility problem for AI.

[Image: an AI-generated picture created with the prompt ‘AI as evidence’ (Gencraft, a free AI image generator)]

Examining the BSA

Under the BSA, an output generated by AI will most likely be classified as ‘digital’ or ‘electronic evidence.’ The BSA defines documents and documentary evidence to include electronic and digital records; the illustrations to the definition of ‘document’ include electronic records such as emails, server logs, and documents stored on a computer, laptop, or smartphone. Notably, the changes brought through the BSA appear largely cosmetic, especially with regard to digital/electronic evidence: rather than making a substantive change, the BSA merely formalises existing practices under the erstwhile IEA.

Documentary evidence, including digital/electronic evidence, may take the form of ‘primary evidence’ or ‘secondary evidence’. For context, primary evidence is considered the highest grade of evidence, admissible at trial without supporting evidence, as opposed to secondary evidence, which needs separate authentication.

There are problems in treating AI-generated evidence as primary evidence. These are encapsulated in the ‘black box’ problem: it is often difficult to understand the reasoning behind an AI system’s predictions or decisions.
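To make the problem concrete, consider a minimal, purely illustrative sketch in Python: the network, its weights, and the ‘forensic features’ below are all hypothetical. A tiny neural network returns a confident classification, yet its only ‘reasoning’ is a set of numeric weights.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned parameters of a small two-layer network.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

def predict(features):
    """Return a class label, e.g. 0 = 'no match', 1 = 'match'."""
    hidden = np.maximum(0, W1 @ features + b1)  # ReLU hidden layer
    scores = W2 @ hidden + b2
    return int(np.argmax(scores))

sample = rng.normal(size=8)  # stand-in for extracted forensic features
print("prediction:", predict(sample))
# The output is reproducible, but asking *why* the model chose this
# label points only to the numeric weights above: the black box.

Even this toy model offers no human-readable rationale; production systems have millions of such parameters, which is precisely what makes their outputs hard to treat as self-proving primary evidence.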

Courts have nonetheless begun to experiment. Perhaps for the first time in India, the Punjab and Haryana High Court used artificial intelligence to inform its view in a criminal case: it consulted ChatGPT to validate its opinion on the bail application of an accused, the first known instance of ChatGPT being used in a bail matter in India. Yet while evaluating responses generated by ChatGPT, Justice Prathiba Singh of the Delhi High Court noted that the “accuracy and reliability of AI-generated data are still in the grey area.” This, along with other potential problems such as inaccuracies and hallucinations, makes it difficult to establish the credibility of AI output as primary evidence. There are complexities in treating AI-generated evidence as secondary evidence too: to be admissible as ‘secondary evidence’, digital evidence needs to be ‘authenticated’ through a certificate (section 63) signed by a person ‘in charge of the computer or communication device’ and an expert.

This presents certain challenges, given the nature of AI systems. AI systems involve multiple contributors, with different persons performing different tasks: collating and analysing data, training AI models, developing modelling techniques and algorithms, testing and evaluating models, and so on. They are also complex and often self-learning, which can make obtaining authentication certificates a cumbersome task. Moreover, it may be difficult to clearly explain the functioning of AI systems, especially those involving deep learning or other advanced machine-learning techniques.

Crucially, we are only in the early stages of the development of AI systems. Their evolving nature raises concerns about the suitability of section 63, which borrows from section 65-B of the older Evidence Act and was perhaps designed with more traditional forms of electronic evidence in mind (such as optical or magnetic media, as opposed to semiconductor-based flash drives), to effectively address the intricacies of AI-generated evidence.

Fast-developing generative AI technology can turn our ideas into images that may serve as evidence. The admissibility of such evidence thus requires clarification.

Foreign Approaches

In both the US and the UK, for evidence to have high probative value, it must be relevant and reliable (authentic). Evidence may become inadmissible if its usefulness is substantially outweighed by its drawbacks, such as the risk of causing prejudice, misleading the jury, or wasting the court’s time. The legal standard thus appears broadly the same across the two jurisdictions. Neither the US nor the UK has, to date, arrived at a conclusive method of authenticating the reliability of AI systems as evidence at trial. In the US, for instance, the current authentication method is to call the developer or creator of the system to testify about it, which is similar to the Indian requirement under section 63, BSA.

Information technology equipped with AI not only assists lawyers and judges, as the preceding sections show, but also helps in the administration of courts. During and after cross-examination, the proceedings must be transcribed, a task that AI-powered speech-recognition tools can perform (see the sketch at the end of this section). In future, where a witness from another country speaks a different language, AI could translate the language of the court for the witness in real time.

Such tools are not infallible. A witness may give statements half-heartedly, unsure of their accuracy, and at such times the system might infer them to be false. In an expert system, which predicts on the basis of the past data fed into it, the output is only as good as that data; in non-expert systems, hard programming errors can lead to inaccurate results. Though this technology cannot itself be used as evidence, it can help the court see things a judge would miss with the naked eye. No technology is completely fail-proof, and our criminal system is built upon “beyond reasonable doubt”, not “beyond all doubt”. Adopting the technology, relying on it in the right measure, and ensuring that judges understand it well enough to avoid bias will help the judiciary in the long run. The judiciary will have to frame the necessary guidelines, as it did for the use of lie-detector tests in investigations.
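Returning to transcription, the following minimal sketch is an assumption-laden illustration rather than a court-ready tool: it uses the open-source Python SpeechRecognition library, one of several speech-to-text options, and the audio file name is hypothetical.

import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a (hypothetical) audio recording of a cross-examination.
with sr.AudioFile("cross_examination.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # Send the audio to a speech-to-text engine and print the transcript.
    transcript = recognizer.recognize_google(audio)
    print(transcript)
except sr.UnknownValueError:
    print("The speech could not be transcribed reliably.")

Even here the design choice matters: recognize_google sends the audio to an external service, so a real court deployment would need an offline engine and, as argued above, judicial guidelines on when such transcripts may be relied upon.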

Conclusion

The future and capabilities of artificial intelligence are uncertain, and the fact that AI can learn over time from data means that it is bound to improve. The recent potential of AI has been realised only because of the growth in the volume of available data. Once the data in the legal industry is structured, there will be nothing stopping the automation of the mundane, low-intelligence tasks that are prevalent in the industry. Engineers have built general-purpose AI systems that mimic aspects of the human brain using machine learning and neural networks. There needs to be a concerted multi-stakeholder approach, comprising technologists, the judiciary, policymakers, civil society, and policy experts, to work towards a framework that ensures the credibility and reliability of AI-generated evidence.

As AI continues to advance, and for it to be truly useful to the justice system, policymakers may need to arrive at a more adaptive legal framework to address the unique authentication challenges posed by AI-generated evidence. Such a framework will also need to account for the proprietary rights (IPR) of different stakeholders (developers, contributors to training models, etc.). Upcoming sectoral legislation (such as the Digital India Act) may be a good place to address some of the concerns that accompany AI technologies. For instance, mandating the adoption of responsible AI principles, such as safety and reliability, inclusivity and non-discrimination, equality, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values (NITI Aayog, 2021), could be a good first step. Additionally, AI systems that might be relied on for evidentiary purposes may be classified as ‘high-risk’ systems and subjected to heightened transparency and explainability obligations.

Building an enabling legal framework is only part of the solution. Given the role that judges and lawyers play in trials, there needs to be greater sensitisation amongst both the Bar and the Bench about emerging technologies like AI, especially their limitations and opportunities. Parties to a case also need access to technical experts who can help them evaluate the nature of the technologies involved.