About The Author

Anjali Gurunath Naik is a member of LJRF Centre for Continuing Legal Education. With keen interest in legal research and writing, she actively engages in exploring contemporary legal issues and their societal implications.
Introduction
Protection of individual privacy in the digital space must be taken seriously, and so must any infringement of it.

Data privacy has gained legal protection in India only in recent years. However, with the growth of AI in today's technological landscape, there is a high risk of data-related insecurity, which calls for stronger safeguards in our data protection laws. India now stands at the cusp of a technological revolution, with Artificial Intelligence (AI) increasingly assisting sectors such as healthcare, e-commerce, agriculture, e-governance, and finance, supported by government-led initiatives such as the ₹10,000 crore IndiaAI Mission that aims to make India a global AI powerhouse, NITI Aayog's AI strategies, and smart city projects.
Amidst this growth, a substantial question arises: how effective is the Digital Personal Data Protection (DPDP) Act, 2023 in protecting individual privacy rights in an era of advanced AI? India has seen rapid development in digital governance; however, the Act remains silent on AI-specific threats such as algorithmic profiling, bias, and automated decision-making. This article examines whether India's privacy framework is keeping pace with its AI objectives and where legal reforms might be urgently required. The rise of AI in India has led to the proliferation of systems that rely heavily on vast amounts of personal data. These AI applications are deployed in sectors ranging from healthcare and agriculture to e-commerce, governance, and smart cities. The rapid collection, processing, and storage of personal data in these domains raise critical concerns regarding data privacy. Given the growing reliance on AI, it is crucial to assess whether India's data privacy laws are sufficiently robust to protect individuals' personal and sensitive information.
This article explores the intersection of AI and data privacy in India, evaluating whether the nation’s existing regulatory frameworks can effectively safeguard what matters—citizens’ right to privacy—while fostering AI-driven innovation.
1. The Intersection of AI and Data Privacy in India
AI’s growing reliance on data is one of the defining characteristics of its impact. Machine learning models and other AI technologies require massive datasets to function effectively, often necessitating access to personal data such as user preferences, online behavior, location data, and biometric information[1]. This reliance raises concerns about the security of personal data, particularly as it pertains to AI applications like facial recognition, targeted advertising, and surveillance[2].
In India, AI systems are employed across diverse sectors such as healthcare, agriculture, and law enforcement, where the stakes for data privacy are high. For instance, AI-driven predictive health diagnostics require access to sensitive medical data, while AI-based surveillance technologies are being deployed in urban settings[3]. As the country continues to embrace AI, the question remains whether the current legislative and regulatory frameworks are adequate to address the privacy risks posed by such widespread data collection.
2. Existing Regulatory Framework: An Overview
2.1 The Information Technology (Reasonable Security Practices and Procedures) Rules, 2011
India’s data protection laws initially stemmed from the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. These rules, implemented under the Information Technology Act, 2000, set guidelines for securing sensitive personal data such as financial details, medical records, and biometric information[4]. While these rules offer general data protection measures, they do not address the complexities of AI, particularly regarding continuous data usage and machine learning models that evolve over time[5].
2.2 The Personal Data Protection Bill, 2019

The Personal Data Protection Bill, 2019 (PDP Bill), inspired by the European Union’s General Data Protection Regulation (GDPR), represents India’s first major attempt at comprehensive data privacy reform[6]. The PDP Bill mandates several key principles, such as obtaining explicit consent for data processing, allowing individuals to access and correct their data, and ensuring that data is processed in a transparent manner. However, the Bill’s provisions on consent, data minimization, and the right to be forgotten remain problematic in the context of AI. AI systems often require vast datasets for continuous learning, making the static nature of consent problematic[7].
2.3 The Digital Personal Data Protection Act, 2023

The Digital Personal Data Protection Act, 2023 (DPDP Act), passed as a follow-up to the PDP Bill, further enhances the data protection framework in India. The Act introduces more stringent measures for data breach notification, cross-border data transfer, and accountability. However, like the PDP Bill, it is largely concerned with traditional models of data processing and does not address the evolving nature of AI technologies[8].
The DPDP Act’s silence on algorithmic transparency, machine learning, and automated decision-making systems leaves a significant gap. AI models can evolve and adapt in unpredictable ways, which challenges the framework of static consent and purpose limitation[9]. Furthermore, the Act’s provisions for cross-border data transfer do not fully consider the global nature of AI systems, where data may flow across multiple jurisdictions[10].
3. Critical Gaps in Data Privacy Protection for AI Systems

3.1 Lack of AI-Specific Regulations
While India’s data privacy laws provide general guidelines for personal data protection, they fail to account for the specific needs of AI systems. AI models often require continuous access to large datasets to improve their algorithms, which can lead to unintended consequences when individuals’ personal information is used in ways they did not explicitly consent to[11]. As AI systems evolve, ensuring that the data used to train these models is appropriately safeguarded becomes a significant challenge.
3.2 Algorithmic Transparency and Accountability
AI technologies are often opaque, making it difficult for individuals to understand how decisions are being made based on their data. This lack of transparency raises significant concerns, particularly in areas such as automated hiring decisions, credit scoring, and law enforcement[12]. The absence of accountability for the developers and deployers of AI systems further exacerbates the issue[13]. Current data privacy frameworks, including the PDP Bill and DPDP Act, lack provisions that specifically address the transparency of algorithmic decision-making.
3.3 Data Sovereignty and Cross-Border Data Flows
India’s increasing reliance on global AI platforms raises concerns about the sovereignty of its citizens’ data. With many AI systems relying on cloud services and outsourcing data processing, personal data is often stored and processed outside Indian jurisdiction[14]. While the DPDP Act attempts to address cross-border data transfer, it does not fully account for the risks posed by the transnational nature of AI systems, which may operate across borders in ways that undermine India’s data protection laws[15].
4. The Role of Consent in AI Data Privacy
Consent is a fundamental principle of data privacy laws globally, including in India. However, in the context of AI, obtaining consent in a meaningful way is increasingly challenging. AI systems require access to vast and often sensitive data to function effectively, and the processing of this data may evolve over time as algorithms learn and adapt[16]. The static model of consent that underpins India’s data privacy laws is problematic for AI systems that continuously process and refine personal data without clear notification to users.
AI applications in sectors like social media, e-commerce, and healthcare often involve complex data processing that is not fully disclosed to users. This creates a gap between the consent individuals give and the actual use of their data. In AI systems, data may be processed for purposes far beyond what users initially agreed to, potentially violating their privacy[17].
5. Recommendations for Strengthening Data Privacy in AI
5.1 AI-Specific Privacy Regulations
To address the challenges posed by AI, India should introduce regulations that specifically cater to the nuances of AI technologies. These should include mandatory transparency measures for AI systems, such as requiring companies to disclose how AI algorithms use personal data and the purposes for which it is being used[18]. Privacy-by-design principles should be embedded into the development of AI systems from the outset.
5.2 Enhanced Transparency and Accountability
There is a pressing need for clearer accountability frameworks that hold developers and companies responsible for the data used by AI systems. Additionally, mechanisms must be developed for individuals to contest AI-driven decisions that affect them, ensuring that their data rights are protected[19].
5.3 Privacy-by-Design
AI systems should incorporate privacy-by-design principles, ensuring that data processing is minimized, sensitive information is encrypted, and users have greater control over their data[20]. These principles help mitigate risks by integrating privacy considerations into the development process rather than treating them as an afterthought.
5.4 Strengthened Enforcement and Public Awareness
To ensure compliance, India should establish a dedicated data protection authority that can oversee the deployment of AI systems and their impact on privacy. This authority must be empowered to audit AI technologies, impose penalties for breaches, and ensure that businesses comply with the law[21]. Public awareness campaigns should be launched to educate individuals about their rights under data protection laws and how they can protect their personal information.
6. Conclusion
India stands at a critical juncture. As we embrace the potential of Artificial Intelligence, we must not lose sight of our constitutional and moral obligation to protect individual privacy. The DPDP Act offers a strong foundation for data protection, but it is not enough to address the nuanced challenges of AI.
The need of the hour is a forward-looking, AI-specific legal framework that prioritizes transparency, fairness, and individual rights. Protecting what matters in India’s AI boom means not just advancing technology, but ensuring that progress respects and reinforces our democratic values. With a stronger foundation, we can become a better-protected and safer digital nation.
[1] Binns, R. (2018). The Privacy Paradox: Protecting Individuals’ Data in the Age of AI. Journal of Privacy and Technology, 10(3), 120-134.
[2] Batra, A. (2022). India’s Data Privacy Framework: Challenges in the Age of AI. International Journal of Law and Technology, 25(1), 33-49.
[3] Chakrabarti, A. (2022). Sovereignty and Data Protection: A Legal Analysis. Indian Journal of International Law, 62(4), 97-112.
[4] Cavoukian, A. (2011). Privacy by Design: The 7 Foundational Principles. Privacy Commissioner of Ontario.
[5] Dastin, J. (2021). AI and the Challenges of Consent. Journal of Digital Privacy, 6(1), 99-115.
[6] Government of India. (2011). The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. Ministry of Electronics and Information Technology.
[7] Government of India. (2019). The Personal Data Protection Bill, 2019. Ministry of Electronics and Information Technology.
[8] Hern, A. (2022). AI and Privacy: The Need for Transparency. Tech Privacy Review, 24(4), 45-60.
[9] Bharati, P., & Chandra, S. (2022). Data Privacy and AI: Bridging the Regulatory Gaps. Indian Journal of Technology Law, 16(2), 56-72.
[10] Jain, R., & Gupta, A. (2021). Data Privacy in the Age of AI: Regulatory Challenges. Indian Cyber Law Journal, 18(3), 84-98.
[11] Kumar, S., & Agarwal, P. (2021). Ethical Concerns in AI Surveillance. Indian Journal of Ethics and Technology, 9(2), 112-129.
[12] Kshetri, N. (2021). Data Privacy and AI: A Global Perspective. International Journal of Cybersecurity, 14(5), 68-85.
[13] Lynskey, O. (2017). The Right to Be Forgotten: Privacy, Data Protection, and the Law. Oxford University Press.
[14] Mason, D. (2019). Accountability in AI: A Call for Regulation. Journal of Law and Technology, 5(3), 92-106.
[15] Mehta, S., & Singh, P. (2023). Digital Personal Data Protection Act, 2023: A Critical Analysis. Indian Law Review, 10(2), 40-57.
[16] O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group.
[17] Prakash, R. (2020). Data Privacy and AI Regulation: India’s Approach. Journal of Indian Law and Policy, 21(1), 50-68.
[18] Raghavan, S., & Rajan, A. (2022). AI in Healthcare: Privacy Concerns and Regulatory Challenges. Indian Healthcare Law Journal, 12(2), 39-53.
[19] Mason, D. (2019). Accountability in AI: A Call for Regulation. Journal of Law and Technology, 5(3), 92-106.
[20] Sharma, M., & Patel, R. (2023). Cross-Border Data Transfers and AI: The Legal Landscape. Indian Cyber Law Journal, 20(1), 75-88.
[21] Singh, R., & Mishra, D. (2023). The Global AI Landscape: Implications for Data Privacy. Indian International Law Review, 8(3), 112-127.





