
AI AND DATA PROTECTION: CHALLENGES IN AUTOMATED DECISION-MAKING
Introduction

Artificial Intelligence (AI) is rapidly revolutionizing industries by automating decision-making processes in banking, healthcare, governance, and law. While AI-driven decision-making enhances efficiency and scalability, it also raises significant concerns regarding privacy, fairness, and accountability. India’s legal framework, particularly the Digital Personal Data Protection Act, 2023 (DPDP Act)[1], attempts to address these challenges, but its silence on AI-specific issues calls for a more comprehensive regulatory approach. This article examines the legal, ethical, and policy challenges of AI-powered automated decision-making (ADM) in India and proposes solutions for a balanced regulatory framework.

The Privacy and Security Risks of AI Decision-Making

AI systems require vast amounts of personal data to function, raising significant privacy concerns. In India, AI-driven ADM systems collect information from social media, financial transactions, and biometric databases such as Aadhaar[1]. While these technologies improve service delivery, they also risk unauthorized access, data misuse, and mass surveillance.

The DPDP Act, 2023, aims to protect personal data through consent-based collection and stringent penalties for non-compliance. However, it does not explicitly regulate AI-specific concerns such as algorithmic profiling, predictive analytics, and real-time surveillance. This gap leaves room for potential data breaches and misuse of sensitive information.

Algorithmic Bias and Discrimination

A significant challenge of AI-driven ADM is the risk of algorithmic bias[1], which can lead to unfair outcomes and discrimination. AI models learn from historical data, which often contains biases related to gender, caste, and socio-economic status.
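To make this bias risk concrete, the kind of fairness audit regulators could mandate can be sketched in a few lines of Python. The demographic parity check below compares approval rates across groups; the group names and outcome data are hypothetical, purely for illustration:

```python
# Minimal fairness-audit sketch: demographic parity difference.
# All group labels and outcome data below are hypothetical.

def approval_rate(decisions):
    """Fraction of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups.
    A gap near 0 suggests parity; a large gap flags potential bias
    that warrants closer review of the model and its training data."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical outcomes from an AI credit-scoring model.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}

gap, rates = demographic_parity_gap(outcomes)
print(f"approval rates: {rates}")
print(f"demographic parity gap: {gap:.3f}")  # 0.375 for this data
```

A real audit would use stronger statistical tests and multiple fairness metrics, but even this simple disparity measure shows the kind of quantitative check an independent review could apply to recruitment, lending, or facial recognition systems.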
If unchecked, AI-based recruitment tools, credit-scoring systems, and facial recognition technology can reinforce discriminatory patterns, disproportionately impacting marginalized communities. Unlike the EU’s GDPR[1], which enforces transparency in AI decision-making, India’s legal framework does not explicitly address algorithmic fairness. The absence of clear mandates for fairness audits, bias detection, and data diversity standards increases the likelihood of systemic discrimination in AI-powered decision-making processes.

Lack of Transparency and Explainability

One of the most pressing concerns in AI and ADM is the lack of transparency. Many AI models operate as “black boxes,” making decisions without clear explanations. This opacity is particularly problematic in high-stakes sectors like healthcare, law enforcement, and finance, where AI-driven decisions can have life-altering consequences.

The DPDP Act does not mandate AI explainability or grant individuals the right to challenge AI-driven decisions. Unlike Article 22 of the GDPR[1], which gives individuals the right to contest automated decisions, India’s legal framework lacks strong provisions for algorithmic accountability, leaving affected individuals with limited legal recourse. Article 22(1) of the GDPR provides: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Legal Framework and Regulatory Challenges in India

India’s current legal landscape for AI and data protection remains fragmented. The DPDP Act, 2023, establishes fundamental data protection guidelines but does not regulate AI-specific concerns. Other relevant laws include:

Information Technology Act, 2000 (IT Act)[1] – Governs cybersecurity and data protection but lacks AI-specific provisions.

Aadhaar Act, 2016[2] – Regulates biometric data collection but does not address AI-driven profiling.
National Data Governance Framework Policy, 2022[3] – Facilitates data sharing for AI research while ensuring security.

EU Artificial Intelligence Act (comparative perspective)[4] – Classifies AI systems by risk level and enforces transparency requirements, something India has yet to implement.

India’s lack of a dedicated AI regulation leaves gaps in accountability, making it necessary for policymakers to introduce AI-specific guidelines for fairness, transparency, and accountability.

Accountability and Ethical Responsibility

A critical issue in AI-driven ADM is determining liability. When AI makes a flawed or harmful decision, such as rejecting a job application, denying a loan, or misdiagnosing a patient, who is responsible? The developer, the deploying organization, or the government? Currently, India does not have clear legal provisions assigning liability for AI-related harm[1]. Some legal experts propose a “human-in-the-loop” model, where AI decisions are subject to human oversight, particularly in sensitive domains. Others advocate for AI liability frameworks, ensuring that AI developers and users bear legal responsibility for algorithmic errors and discriminatory outcomes.

Case Studies: AI and Legal Precedents in India and Beyond

Legal actions against AI systems are rising globally. In India, ANI vs OpenAI is a landmark case in which the Delhi High Court reviewed copyright claims against AI-generated content. Internationally, Microsoft, GitHub, and OpenAI have faced lawsuits over unauthorized data usage in AI training models[1]. While India has begun addressing AI-related disputes, it still lacks a robust legal framework to regulate AI-driven harm effectively. Strengthening regulatory policies is crucial to address AI’s evolving risks.

One such case was filed in the US courts against Microsoft, GitHub, and OpenAI for copyright infringement:
https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/

Similar cases have been filed in the US and Europe by artists, more than 8,500 authors, and media organizations over the alleged theft of their work:

https://www.techtarget.com/WhatIs/feature/AI-lawsuits-explained-Whos-getting-sued

Mitigating Risks: Steps Towards Responsible AI

To ensure AI is used responsibly in India, the following measures must be taken:

Enact AI-Specific Regulations – Introduce laws addressing AI accountability, fairness, and transparency.

Mandate Fairness Audits – Establish independent reviews to detect and mitigate algorithmic bias.

Enhance Explainability Requirements – Require AI systems to disclose decision-making logic, especially in critical sectors.

Align with Global Standards – Adopt best practices from the GDPR and the EU AI Act to ensure AI compliance.

Strengthen User Rights and Redressal Mechanisms – Provide legal channels for individuals to challenge AI decisions and seek redress.

Improve Data Protection Measures – Implement stricter encryption, anonymization, and security protocols for AI-generated data.

Increase Public Awareness – Educate individuals on their rights regarding AI-driven decisions and available legal protections.

FACT OF CONCERN

Around the world, court cases against AI systems are increasing day by day, especially in the US, Europe, and now even in India. The rise in such litigation also heightens concerns about the privacy of individuals. According to Google, 50% of bank scams and frauds are carried out using AI. When there is ADM, there is no limit for