AI and Admissibility of Evidence

About the Author

Mr. Arunraj R is a final-year student pursuing the three-year LLB course at Sree Narayana Law College, Poothotta, Ernakulam.

Introduction

Admissibility means the state or quality of being admissible or permissible. Legally, the term ‘evidence’ means anything admitted by a Court to prove or disprove alleged matters of fact in a trial. The admissibility of evidence therefore concerns whether a document, testimony, or tangible object may be used in a Court of Law. Not all evidence is allowed in Court; only evidence that is relevant and reliable is admitted. Evidence is placed before a judge or a court to prove a point or a significant part of a case.

The legal term ‘admission’ is defined in Section 17 of the Indian Evidence Act, 1872. In general, the term means power or permission to enter: admittance, entrance, access, or the power to approach. Legally, it denotes agreement or concurrence in a statement made by another; it is distinct from a confession in that an admission presumes prior inquiry by another, whereas a confession may be made without such inquiry. A fact, point, or statement admitted as an admission, even one made out of Court, is received in evidence.

The Indian Evidence Act, 1872 was amended by Section 92 of the Information Technology Act, 2000. Section 3 of the Indian Evidence Act, 1872, which previously covered only documents produced for the inspection of the Court, was amended so that the definition of evidence now extends to all documents, including electronic records, produced for the inspection of the Court. With regard to documentary evidence, in Section 59 the words ‘Contents of documents’ were replaced by ‘Contents of documents or electronic records’, and Sections 65A and 65B were inserted to provide for the admissibility of electronic records. Thus, like any other evidence, electronic or digital evidence is admissible in Court if it is relevant and withstands the Court’s scrutiny.

Admissibility of Electronic Evidence under The Indian Evidence Act, 1872

The term e-evidence covers both electronic evidence and digital evidence. In today’s world, the use of the internet and of devices such as mobile phones, laptops, computers, and tablets is ubiquitous, and most people maintain profiles on social media platforms such as Facebook, Snapchat, WhatsApp, Twitter, and Instagram. Activities and events in a given area are continuously monitored by guards and police through CCTV cameras and other devices. Footage, images, or call records that are obtained from authentic sources, are relevant and admissible, and can be produced before the Court to establish guilt are termed e-evidence. Examples of electronic evidence include data stored in a computer system and information transmitted electronically through a communication network.

Section 65A and Section 65B of The Indian Evidence Act, 1872

Section 65A 

Section 65A of the Indian Evidence Act, 1872 is always read along with Section 65B: Section 65A provides that the contents of electronic records may be proved in accordance with the provisions set out in Section 65B of the Act.

Section 65B 

Section 65B of the Indian Evidence Act governs the admissibility of electronic records in Court proceedings. It provides that any record contained in an electronic or digital format shall be deemed to be a document, and that if the conditions specified in Section 65B are satisfied, such a document is admissible in Court proceedings without further proof of its validity.

The conditions of Section 65B are:

  • The information shall be produced during the regular course of activities by the person having lawful control over the use of the computer.
  • The information has been regularly fed into the computer in the ordinary course of the said activities.
  • Throughout the material part of the said period, the computer was operating properly, or any improper operation was not of such a nature as to affect the electronic or digital record or the accuracy of its contents.
  • The information contained in the electronic or digital records is derived from such information fed into the computer in the ordinary course of an individual’s activities.

How should courts govern the admissibility of AI-generated evidence?

With the advent of electronic devices and the internet, evidence in electronic form has become an integral part of litigation. Electronic mail, SMS, social media posts, and surveillance footage are used to establish facts and support legal arguments. Yet even as electronic evidence is increasingly sought after, the rise of AI-generated evidence has added a new hurdle.

Kinds of AI-generated Content

Predictive AI models can offer insights into future events, biometric systems aid in identification, and AI transcription services convert audio into written transcripts for use as court evidence. These are some examples of AI-generated evidence.

While evaluating such evidence, judges face considerable difficulty over its admissibility because of concerns about reliability, transparency, interpretability, and bias. Disinformation and incorrect data are on the rise due to generative AI, which makes it hard for judges to be satisfied of a piece of evidence’s authenticity. Recently, sexually explicit deepfake images of singer and songwriter Taylor Swift, produced with generative AI, made headlines. Before that, an AI-generated image of Pope Francis wearing a fancy white puffer jacket created ripples because it gave the impression of being a genuine photograph.

An AI-generated picture of Pope Francis wearing a white jacket

Crucial Questions for Judges and Lawyers

Even though photographic evidence may require a description, on most occasions it is self-explanatory. An AI-generated image therefore creates a hurdle. For example, an image appearing to show a high official committing an offence can weigh heavily against him even when, in reality, he never indulged in any such activity. In such an instance, a judge or lawyer looking for conclusive evidence may find it difficult to establish the truth. How, then, can a judge determine whether the image is AI-generated or real? In addition to the many risks affecting the credibility of evidence, the opaque methods used in AI processes hamper transparency, while bias in training data can lead to unfair outcomes. The absence of standard guidelines on how to prove AI-generated evidence further obscures the decision-making process. Governments in various jurisdictions are contemplating this issue and taking steps towards laws to curb the problems raised by generative AI.

Evidence consists of leads that help to pinpoint the culprit beyond a shadow of doubt. However, with machines taking over human activities with the aid of a ‘brain’ of their own, establishing culpability and placing liability on the perpetrators of crime could become difficult. An autonomous car that can be driven without human input poses a challenge when electronic and AI-related evidence is considered. For example, there is uncertainty about how a drowsiness detector’s data could be used in inquisitorial or adversarial justice systems to determine liability for an accident. Could this data be used as evidence to identify the person ultimately responsible for the accident? How would the concept of mens rea change if machines can now ‘think’ much like a human? Would machine data based on human-machine interaction count as evidence? We must assess the accuracy and limitations of the AI system’s data, determine responsibility in the event of accidents or disputes, and understand the reasoning behind the system’s decisions.

Admissibility of AI evidence requires scrutiny to avoid misinterpretation of the case.

Reliability of AI Evidence

Artificial Intelligence (AI) programs, such as the widely discussed ChatGPT, Copilot, and Google’s Bard, are rapidly changing the way we approach many aspects of contemporary life. As AI continues to grow, it is likely to play an increasingly important role in a growing number of legal proceedings.

AI Data Scrutiny

AI can be used to scrutinize data in ways that were previously impossible. For example, AI can be used to analyse large amounts of data and identify patterns or correlations that may be relevant to a factual dispute. Lawyers have already seen this sort of tool used by e-discovery vendors to assist in the initial review of large discovery productions in complex cases.
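
As a purely illustrative sketch of this kind of review tool, the Python snippet below ranks a handful of invented documents by textual similarity to a review query using scikit-learn’s TF-IDF vectoriser. The library choice, the documents, and the query are assumptions made for the example; no actual e-discovery product is being described.

```python
# Illustrative sketch only: rank documents by similarity to a review query,
# the kind of pattern-matching step an e-discovery review tool might automate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Payment schedule revised per the amended supply contract.",
    "Lunch plans for Friday, see you at noon.",
    "Please delete the old invoices before the audit begins.",
]
query = "destruction of invoices prior to the audit"

# Convert the corpus and the query into TF-IDF term-weight vectors.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Score each document against the query and print the ranking.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, text in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {text}")
```

Whether the output of such a ranking could itself be relied upon in court would still turn on the reliability questions discussed below.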

One area where proponents of AI may claim it is particularly useful is the analysis of audio and video recordings. They hope that AI can be used to examine the tone and inflection of a speaker’s voice, and similarly to analyse facial expressions and body language. Would that analysis be able to reliably determine who is telling the truth? Would the degree of certainty be enough to allow those conclusions to be admitted into evidence? Could the underlying data be trusted, or could it have been fabricated, just as AI faked the Pope’s puffer jacket? What if the voice samples themselves were synthetic?

AI and Admissibility of Evidence

When beginning any case, a good litigator would first and foremost think of how the facts translate into admissible evidence. Parties tell a story and bring the lawyer the related documents, but the rules of evidence may require even more. Frequently, additional witnesses are needed to authenticate texts or written documents, or to avoid hearsay objections. The considerations become more complex when expert testimony is involved.

The use of AI raises a number of significant questions about the reliability of evidence at trial. If AI cannot be proven reliable, its conclusions may be inadmissible. For instance, if AI is used to analyse social media posts, it may be programmed to look for specific keywords or phrases associated with certain types of behaviour. There are concerns that this could lead to false positives, in which innocent individuals are wrongly accused of wrongdoing.
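
A deliberately naive Python sketch of such a keyword screen is set out below; the watch-list and the posts are hypothetical and are chosen only to show how matching words without context produces a false positive.

```python
# Hypothetical, deliberately naive keyword screen: it flags any post that
# contains a watched word, with no understanding of context or intent.
SUSPICIOUS_KEYWORDS = {"cash", "offshore", "transfer"}

posts = [
    "Wired the cash to the offshore shell company last night.",           # genuinely suspect
    "My bank approved a transfer to its offshore branch for my new job.",  # innocent
]

def flag_post(text: str) -> bool:
    """Return True if the post contains any watched keyword."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & SUSPICIOUS_KEYWORDS)

for post in posts:
    print(flag_post(post), "-", post)

# Both posts are flagged. The second is a false positive: the screen matches
# keywords but cannot tell a routine banking request from wrongdoing.
```

Real screening systems are more sophisticated, but the basic risk described above, contextless matching generating accusations against innocent users, is already visible at this small scale.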

How Reliable is AI Evidence?

How reliable will AI be in determining whether a person is telling the truth? How precise will it be in analysing facial expressions and body language? To answer these kinds of questions, lawyers may need to be able to analyse the data considered by the AI, and perhaps even the programming of the AI itself. In practice, such analysis is conducted by an expert rather than the lawyer, and an expert is an added expense. In this way, AI, while a potentially useful tool, may further drive up the already prohibitive costs of litigation for the common man.

There is always a possibility of errors in the AI software itself. Like any software, AI is not infallible and can make mistakes. This means that evidence obtained through AI may not always be reliable or accurate, and it may face substantial barriers to admissibility.

AI – as Evidence

In the current scenario, litigation often involves multifaceted business and intellectual property disputes in which concerns about financial information, trade secrets, or other sensitive information keep piling up. In some ways, AI tools may make the analysis of this information easier. However, there is an additional layer of difficulty and expense in presenting this analysis as evidence. For instance, while electronic ‘signatures’ on different classes of documents may help prevent counterfeits, proving that the electronic signatures are themselves reliable may require numerous layers of expert testimony.
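
By way of a purely technical illustration, and without suggesting that this is what any e-signature statute or standard requires, the Python sketch below uses the `cryptography` package (an assumed choice) to sign a document with an Ed25519 key, then shows verification succeeding on the original and failing on an altered copy.

```python
# Illustrative sketch: sign a document and verify the signature, then show
# that verification fails once the document's contents are altered.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

original = b"Share purchase agreement, executed on 1 April 2024."
altered = b"Share purchase agreement, executed on 1 April 2025."

# The signer holds the private key; a court or opposing party would see only
# the public key, the document, and the signature produced over it.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(original)
public_key = private_key.public_key()

# Verification succeeds only if the document is byte-for-byte unchanged.
public_key.verify(signature, original)
print("Signature verifies against the original document.")

try:
    public_key.verify(signature, altered)
except InvalidSignature:
    print("Verification fails: the altered copy no longer matches the signature.")
```

Even a check this simple presupposes that the keys were generated, stored, and linked to the signer in a trustworthy way, which is precisely where the layers of expert testimony mentioned above come in.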

To get ahead of these issues, courts will need to develop new rules and procedures for the admissibility of evidence obtained through AI. This may involve setting guidelines for the use of AI in legal cases such as:

  • allowing litigants to review the underlying coding of the AI program itself,
  • establishing procedures for the disclosure or certification of AI-generated evidence, or
  • demanding that the evidence be verified by independent third parties before it is used in court.

Conclusion 

Hence, evidence is admissible in Court proceedings only if it is relevant to the facts, issues, or matters in dispute. Evidence that is technically admissible but irrelevant to the case is only a waste of the Court’s time. Evidence must therefore be relevant and must also satisfy all the specified provisions governing admissibility; only then can it be admitted in a Court of Law. In the present scenario, even electronic or digital records are admissible as evidence, provided they are reliable, relevant, and obtained from a trustworthy source of electronic communication.

Evidence is the most essential and crucial element of any proceeding, whether criminal or civil, and should be protected from any kind of tampering, or else it may be rendered inadmissible in Court.