AI AND DATA PROTECTION: CHALLENGES IN AUTOMATED
DECISION-MAKING
Introduction
Artificial Intelligence (AI) is rapidly revolutionizing industries by automating decision-making processes in banking, healthcare, governance, and law. While AI-driven decision-making enhances efficiency and scalability, it also raises significant concerns regarding privacy, fairness, and accountability. India’s legal framework, particularly the Digital Personal Data Protection Act, 2023 (DPDP Act)[1], attempts to address these challenges, but its silence on AI-specific issues calls for a more comprehensive regulatory approach. This article examines the legal, ethical, and policy challenges of AI-powered automated decision-making (ADM) in India and proposes solutions for a balanced regulatory framework.
The Privacy and Security Risks of AI Decision-Making
AI systems require vast amounts of personal data to function, raising significant privacy concerns. In India, AI-driven ADM systems collect information from social media, financial transactions, and biometric databases like Aadhaar[1]. While these technologies improve service delivery, they also risk unauthorized access, data misuse, and mass surveillance.
The DPDP Act, 2023, aims to protect personal data through consent-based collection and stringent penalties for non-compliance. However, it does not explicitly regulate AI-specific concerns such as algorithmic profiling, predictive analytics, and real-time surveillance. This gap leaves room for potential data breaches and misuse of sensitive information.
Algorithmic Bias and Discrimination
A significant challenge of AI-driven ADM is the risk of algorithmic bias[1], which can lead to unfair outcomes and discrimination. AI models learn from historical data, which often contains biases related to gender, caste, and socio-economic status. If unchecked, AI-based recruitment tools, credit-scoring systems, and facial recognition technology can reinforce discriminatory patterns, disproportionately impacting marginalized communities.
Unlike the EU’s GDPR[1], which enforces transparency in AI decision-making, India’s legal framework does not explicitly address algorithmic fairness. The absence of clear mandates for fairness audits, bias detection, and data diversity standards increases the likelihood of systemic discrimination in AI-powered decision-making processes.
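One concrete form a fairness audit can take is comparing group-wise selection rates. The sketch below is illustrative only: the data is invented, and the 0.8 cut-off borrows the "four-fifths rule" used in US employment-discrimination practice, which Indian law does not currently mandate.

```python
# A minimal sketch of one fairness-audit metric: the "disparate impact"
# ratio (selection rate of a protected group divided by that of the
# reference group). All data below is hypothetical.

def selection_rate(decisions):
    """Fraction of applicants who received a favourable decision."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of selection rates; values far below 1.0 suggest bias
    against group_a relative to group_b."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical loan approvals (1 = approved, 0 = rejected)
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # protected group: 20% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # reference group: 70% approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.70 ≈ 0.29
if ratio < 0.8:  # the "four-fifths rule" threshold used in some audits
    print("Potential bias flagged for review")
```

An independent auditor would run such checks across many protected attributes and decision types; the point here is only that the metric itself is simple enough to mandate by regulation.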
Lack of Transparency and Explainability
One of the most pressing concerns in AI and ADM is the lack of transparency. Many AI models operate as “black boxes,” making decisions without clear explanations. This opacity is particularly problematic in high-stakes sectors like healthcare, law enforcement, and finance, where AI-driven decisions can have life-altering consequences.
The DPDP Act does not mandate AI explainability or grant individuals the right to challenge AI-driven decisions. Unlike Article 22 of the GDPR[1], which gives individuals the right to contest automated decisions, India’s legal framework lacks strong provisions for algorithmic accountability, leaving affected individuals with limited legal recourse.
Article 22(1) of the GDPR provides: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
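Explainability need not require opening up a black box in every case: for simple models, the contribution of each input to the decision can be disclosed directly. The sketch below uses a hypothetical linear credit score with invented feature names, weights, and threshold, purely to show what a per-feature explanation could look like.

```python
# A minimal sketch of an "explainable" decision: with a linear model,
# each feature's contribution to the score can be reported to the
# affected individual. Weights, base score, and threshold are invented.

WEIGHTS = {"income_lakhs": 4.0, "years_employed": 2.0, "missed_payments": -15.0}
BASE_SCORE = 50.0
APPROVAL_THRESHOLD = 70.0

def score_with_explanation(applicant):
    """Return the total score plus a per-feature breakdown of contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BASE_SCORE + sum(contributions.values())
    return total, contributions

applicant = {"income_lakhs": 6, "years_employed": 3, "missed_payments": 1}
total, contributions = score_with_explanation(applicant)
print(f"Score: {total:.0f} (approval threshold {APPROVAL_THRESHOLD:.0f})")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {value:+.1f}")
```

A rejected applicant can see from the breakdown which factor drove the outcome, which is exactly the kind of recourse Article 22-style provisions aim to enable.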
Legal Framework and Regulatory Challenges in India
India’s current legal landscape for AI and data protection remains fragmented. The DPDP Act, 2023, establishes fundamental data protection guidelines but does not regulate AI-specific concerns. Other relevant laws include:
- Information Technology Act, 2000 (IT Act)[1] – Governs cybersecurity and data protection but lacks AI-specific provisions.
- Aadhaar Act, 2016[2] – Regulates biometric data collection but does not address AI-driven profiling.
- National Data Governance Framework Policy, 2022[3] – Facilitates data sharing for AI research while ensuring security.
- EU Artificial Intelligence Act (Comparative Perspective)[4] – Classifies AI systems by risk level and enforces transparency requirements, something India has yet to implement.
India’s lack of a dedicated AI regulation leaves gaps in accountability, making it necessary for policymakers to introduce AI-specific guidelines for fairness, transparency, and accountability.
Accountability and Ethical Responsibility
A critical issue in AI-driven ADM is determining liability. When AI makes a flawed or harmful decision—such as rejecting a job application, denying a loan, or misdiagnosing a patient—who is responsible? The developer, the deploying organization, or the government?
Currently, India does not have clear legal provisions assigning liability for AI-related harm[1]. Some legal experts propose a “human-in-the-loop” model, where AI decisions are subject to human oversight, particularly in sensitive domains. Others advocate for AI liability frameworks, ensuring that AI developers and users bear legal responsibility for algorithmic errors and discriminatory outcomes.
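The "human-in-the-loop" model described above can be made concrete as a routing rule: automated decisions take effect only when the system is confident and the outcome is favourable, while borderline or adverse cases are escalated to a human reviewer. The thresholds and queue below are illustrative assumptions, not any mandated standard.

```python
# A minimal sketch of a human-in-the-loop gate: low-confidence or
# adverse automated decisions are routed to a human reviewer instead
# of taking effect automatically. Thresholds are hypothetical.

review_queue = []

def decide(application_id, model_score, confidence):
    """Return a decision, escalating borderline or adverse cases to a human."""
    if confidence < 0.8 or model_score < 60:
        review_queue.append(application_id)
        return "escalated_to_human"
    return "auto_approved"

print(decide("APP-001", model_score=85, confidence=0.95))  # auto_approved
print(decide("APP-002", model_score=55, confidence=0.90))  # escalated_to_human
print(f"Pending human review: {review_queue}")
```

One design consequence is that liability becomes easier to assign: an adverse outcome that bypassed the gate points to the deployer's configuration, while an error inside the gate points to the human reviewer's judgment.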
Case Studies: AI and Legal Precedents in India and Beyond
Legal actions against AI systems are rising globally. In India, ANI vs OpenAI is a landmark case where the Delhi High Court reviewed copyright claims against AI-generated content. Internationally, Microsoft, GitHub, and OpenAI have faced lawsuits over unauthorized data usage in AI training models[1].
While India has begun addressing AI-related disputes, it still lacks a robust legal framework to regulate AI-driven harm effectively. Strengthening regulatory policies is crucial to address AI’s evolving risks.
- A case was filed in US courts against Microsoft, GitHub, and OpenAI for copyright violation.
https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/
- Cases have been filed in the US and Europe by artists, more than 8,500 authors, and media organizations alleging misappropriation of their work.
https://www.techtarget.com/WhatIs/feature/AI-lawsuits-explained-Whos-getting-sued
Mitigating Risks: Steps Towards Responsible AI
To ensure AI is used responsibly in India, the following measures must be taken:
- Enact AI-Specific Regulations – Introduce laws addressing AI accountability, fairness, and transparency.
- Mandate Fairness Audits – Establish independent reviews to detect and mitigate algorithmic bias.
- Enhance Explainability Requirements – Require AI systems to disclose decision-making logic, especially in critical sectors.
- Align with Global Standards – Adopt best practices from the GDPR and the EU AI Act to ensure AI compliance.
- Strengthen User Rights and Redressal Mechanisms – Provide legal channels for individuals to challenge AI decisions and seek redress.
- Improve Data Protection Measures – Implement stricter encryption, anonymization, and security protocols for AI-generated data.
- Increase Public Awareness – Educate individuals on their rights regarding AI-driven decisions and available legal protections.
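One of the data-protection measures above, anonymization, is often implemented in practice as pseudonymisation: replacing direct identifiers with salted hashes before records reach an AI pipeline. The sketch below is a simplified illustration; the field names are invented, and a real deployment would keep the salt in a secrets store and consider re-identification risk, which a hash alone does not eliminate.

```python
# A minimal sketch of pseudonymisation: direct identifiers are replaced
# with a salted hash before the record is passed to an AI pipeline.
# Salt handling here is illustrative only.

import hashlib

SALT = b"replace-with-a-securely-stored-salt"

def pseudonymise(identifier: str) -> str:
    """Map an identifier to a stable, non-reversible token."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

record = {"name": "A. Kumar", "id_last4": "1234", "loan_amount": 500000}
safe_record = {
    "subject_id": pseudonymise(record["name"] + record["id_last4"]),
    "loan_amount": record["loan_amount"],  # non-identifying fields kept as-is
}
print(safe_record)
```

The same input always maps to the same token, so analyses that need to link records about one person still work, while the raw name never enters the model's training data.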
Facts of Concern
- Court cases against AI systems are increasing worldwide, especially in the US and Europe, and now in India as well. The growth in litigation heightens concerns about individual privacy.
- According to Google, around 50% of bank scams and frauds are carried out using AI. Where decision-making is fully automated, there may be no human check to stop them, and frauds and scams can therefore multiply.
AI Implication in Credits
AI has the potential to touch nearly every aspect of the business of lending. Lending is an information-based business, and many of the tasks performed by humans can be replaced or assisted by AI. Almost the whole life cycle of credit could be affected:
- Marketing: developing ads and campaigns and targeting customers.
- Product design: monitoring competitors, evaluating product options, and developing alternatives.
- Product selection: comparing products and matching products to customer requirements; “robo-advice”.
- Credit scoring and credit assessments: data analysis to create more accurate and personalised models of creditworthiness.
- Credit decisions: automated credit decisions.
- Loan processing and settlement: extracting information from documents, generating and sending documents, and automated settlements.
- Customer identification and AML/CTF: KYC and transaction monitoring.
- Customer service: responding to queries and pricing variations.
- Collections and enforcement: monitoring loan performance and automated collections contact.
- Dispute resolution: processing complaints and conducting conversations with customers.
- Fraud detection: spotting fakes and scams and responding rapidly to prevent losses.
- Compliance and legal: breach detection, issue identification, and legal sign-off.
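To make one item in the life cycle above concrete, fraud detection often starts from simple statistical rules before any machine learning is involved. The sketch below flags transactions whose amount deviates sharply from a customer's history using a z-score; the threshold and data are invented, and production systems use far richer models.

```python
# A minimal sketch of rule-based fraud detection: flag new transaction
# amounts that deviate from a customer's history by more than a set
# number of standard deviations. Threshold and data are illustrative.

import statistics

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Return the new transaction amounts that look anomalous."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [a for a in new_amounts if abs(a - mean) / stdev > z_threshold]

history = [1200, 950, 1100, 1300, 1050, 980, 1150]   # past transactions
incoming = [1020, 9800, 1210]                        # new transactions
print(flag_anomalies(history, incoming))  # [9800]
```

Even a rule this simple illustrates the regulatory tension discussed throughout: a flagged transaction blocked automatically is itself an automated decision affecting the customer, so redressal channels matter here too.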
Impact of the Digital Personal Data Protection Act on Artificial Intelligence
A study by Boston Consulting Group in collaboration with IIT-A, titled “AI in India – A Strategic Necessity”, found that adopting Artificial Intelligence could add up to 1.4% to India’s annual real GDP growth. The report also documents accelerating AI research in the country: private investment in AI-related research and development in India stood at approximately USD 642 million, reflecting the acceleration of AI investment nationally.
The newly enacted DPDP Act does not explicitly mention Artificial Intelligence, but the core principle of the Act is to recognise the rights of individuals and protect their data by permitting the processing of personal data for lawful purposes only. The functioning of Artificial Intelligence and Machine Learning models depends on the collection of vast amounts of data: training AI systems rests entirely on data collection, and the availability of large datasets is often what determines the success or failure of a machine learning algorithm. Even IBM’s definition of the term ‘machine learning’ states that “ML is a branch of AI and computer science that focuses on the use of data and algorithms to imitate how humans learn and improve its accuracy.”
Important Sections of the Digital Personal Data Protection Act Relevant to Artificial Intelligence
Section 4 of the DPDP Act mandates that the personal data of a Data Principal may be processed only with valid consent or for certain “legitimate uses”. The term “legitimate use” is distinct from the ground of “legitimate interest” under Article 6(1)(f) of the GDPR.
Section 7 of the DPDP Act lists the instances that qualify as legitimate uses under nine headings, including:
- Specified purpose with voluntary disclosure
- Processing by the State and its instrumentalities for issuing licences, benefits, subsidies, etc.
- Performing State functions or serving the national interest
- Performing legal obligations in India
- Compliance with a judgment or order in India
- Responding to a life-threatening medical emergency
- Providing medical treatment during an epidemic or any other threat to public health
- Assisting with public safety and disaster response
- Employment-related purposes
So, to train AI models, the owners of a model require consent, or the processing must be justified under one of the legitimate uses. Apart from these two conditions, an exception is provided under Section 3 of the DPDP Act.
Section 3(c)(ii) of the DPDP Act exempts from the Act’s provisions personal data made, or caused to be made, publicly available by the Data Principal, or by any other person under a legal obligation to make such personal data publicly available. Corporations may exploit this exception to process vast datasets, but the Act is silent on whether data that was once public and has since been made private again qualifies as personal data protected under its provisions.
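The consent-or-legitimate-use gate of Section 4, read with the Section 7 headings, can be pictured as a pre-processing check in a data pipeline. The sketch below is an informal illustration, not a legal interpretation: the enum values paraphrase the Section 7 headings, and the record format is invented.

```python
# A minimal sketch of a DPDP-style pre-processing gate: a record may be
# processed only with valid consent or under an enumerated legitimate
# use. Enum values paraphrase the Section 7 headings; all else is invented.

LEGITIMATE_USES = {
    "voluntary_disclosure", "state_benefits", "state_functions",
    "legal_obligation", "court_order", "medical_emergency",
    "public_health", "disaster_response", "employment",
}

def may_process(record) -> bool:
    """Allow processing only with consent or a recognised legitimate use."""
    return (record.get("consent") is True
            or record.get("legitimate_use") in LEGITIMATE_USES)

print(may_process({"consent": True}))                        # True
print(may_process({"legitimate_use": "medical_emergency"}))  # True
print(may_process({"consent": False}))                       # False
```

Notably, a gate like this has no branch for "publicly available data", which is precisely the Section 3(c)(ii) exception discussed above: whether and how to encode that carve-out is left open by the Act.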
Conclusion
AI-powered automated decision-making presents both immense opportunities and significant risks in India. While the DPDP Act, 2023, lays the groundwork for data protection, it does not comprehensively address AI’s unique challenges, such as algorithmic bias, transparency, and accountability. Without dedicated AI regulations, concerns over unfair AI decisions, lack of legal recourse, and data privacy risks will persist.
For India to harness AI’s potential while safeguarding individual rights, policymakers must introduce robust AI-specific laws, strengthen transparency measures, and enforce fairness in AI-driven decision-making. A well-regulated AI ecosystem will not only promote innovation but also ensure ethical AI deployment that aligns with India’s broader legal and human rights frameworks.
References
- GDPR Guidelines on Automated Decision-Making
- DPDP Act, 2023 – Government of India Publications
- ANI vs OpenAI – Delhi High Court Ruling
- National Data Governance Framework Policy, 2022
- EU Artificial Intelligence Act
- https://tsaaro.com/blogs/the-impact-of-the-dpdp-act-on-artificial-intelligence-and-machine-learning/
- https://www.dwyerharris.com/blog/artificial-intelligence-in-credit-legal-and-compliance-issues