IISPPR

AI and Data Privacy Law: Are We Truly Protected by Our Data Shield?

 

1. Deepika, CLC, University of Delhi

2. Ajit Kumar, Faculty of Law, BHU

3. Pranjal Sahay, Bharati Vidyapeeth University

At some point, AI may render human effort redundant: as reliance on it grows, people reduce both the physical and the mental use of their own faculties. Declining physical activity will affect human health. Likewise, because AI supplies ready-made solutions to problems, humans may come to depend on it entirely and stop exercising their own intelligence, which can dull their cognitive faculties. People may also reduce human interaction, turning to AI to supply their emotional needs.

 

The harm is not only physical and emotional but also financial. AI is far more efficient than humans. Software such as amto.ai can draft the documents of a particular case in a fraction of the time a human drafter needs, and more such tools exist or are on the way. Companies will rely on AI for its speed, which will eventually reduce job opportunities for humans.

 

AI also has the power to create artificial environments. It can build a virtual ecosystem, but this may come at the expense of the actual natural environment. At some point, AI-made artificial substitutes for natural resources may be introduced for survival, but this would inevitably affect the natural ecosystem and could threaten living beings.

 

Another aspect that must be brought forward is the infringement of privacy by AI. AI is present everywhere in the cyber world today. On virtually any website or application, AI collects individuals' personal information, which can compromise their privacy. This sensitive information can be used against an individual and may create threats in personal as well as professional life. Students generate resumes through AI tools after agreeing to terms and conditions they usually overlook; likewise, the terms and conditions shown during online banking transactions are rarely read. All of this can threaten individual privacy. Such personal information may also fall into the wrong hands, enabling cyber harms such as online threats, obscenity, and fraud.

To address these problems, certain legislations have been enacted to bring AI under control.

 

Data Privacy Law at the Global Level:- Owing to advances in technology such as AI, the internet, mobile devices, and social media platforms, misuse of personal data is increasing day by day. So, the question is: how can we solve it? Countries at the global level have taken important steps: the European Union (EU) passed the General Data Protection Regulation in 2018, the USA has adopted a sectoral approach to privacy regulation, and other countries, such as Canada and Nigeria, have also acted. These steps are discussed below:-

Steps taken by the European Union (EU):- The EU stands at the forefront of global privacy protection with the enactment of the General Data Protection Regulation (GDPR) in 2018 (Cortez, 2020). The GDPR introduces a set of fundamental principles to govern the processing of personal data (Tamburri, 2020).

It was enacted to regulate personal data and unify data protection law across Europe. Its reach is not confined by territory: the GDPR applies to any organization processing the personal data of individuals in the EU, wherever that organization is based. Crucially, the regulation establishes a robust framework for cross-border data transfers, promoting a unified approach to international data flows (Jiang, 2022; Okunade et al., 2023).

The Regulation sets out core principles, such as:-

(a) Fairness

(b) Purpose limitation

(c) Data minimization

(d) Accuracy

(e) Integrity and confidentiality

The Regulation also grants individuals several rights, such as:-

Right to Access

Right to be forgotten

Right to rectification

Right to object

Right to data portability

Right to restriction of processing.

The GDPR also provides for the appointment of a Data Protection Officer (DPO). Certain organizations are required to appoint one, and the officer's main duty is to oversee the organization's data processing activities and guard against misuse.

The GDPR also prescribes penalties for non-compliance: organizations can face fines of up to €20 million or 4% of their global annual turnover, whichever is higher.
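Since the cap is the greater of the two figures, it scales with company size. The rule can be pictured with a minimal Python sketch (illustrative only, using the figures stated above; not legal advice):

def gdpr_max_fine(annual_turnover_eur: float) -> float:
    # Cap is the greater of EUR 20 million or 4% of worldwide annual turnover.
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(gdpr_max_fine(2_000_000_000))  # 80000000.0 -> 4% exceeds the flat amount
print(gdpr_max_fine(10_000_000))     # 20000000 -> the EUR 20 million floor applies

For a firm with a €2 billion turnover, 4% (€80 million) exceeds €20 million, so the larger figure sets the ceiling.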

Steps Taken by the USA:- Unlike the EU, the United States follows a sectoral approach to privacy regulation, with laws addressing specific industries and types of data (Hartzog and Richards, 2020); there is no single, comprehensive data protection law like the GDPR. Among the many US privacy laws are the following:

The California Consumer Privacy Act (CCPA):- it provides residents of California with rights regarding the collection and use of their personal data. Consumers have the right to know what data is being collected and for what purpose it will be used.

The Health Insurance Portability and Accountability Act (HIPAA):- it governs healthcare data, applying to healthcare providers and health plans that handle protected health information.

The Children’s Online Privacy Protection Act (COPPA):- it applies to operators of websites and online services that knowingly collect data from children under the age of 13. Before collecting data from such children, the operator must obtain verifiable parental consent (a simple age-gate check is sketched after this list).

The Gramm-Leach-Bliley Act (GLBA):- it applies to financial institutions, requiring them to establish privacy policies governing individuals' financial information.
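COPPA's under-13 rule described above amounts to a simple age gate. A minimal, hypothetical sketch in Python (the function and parameter names are illustrative, not drawn from the statute):

def may_collect_data(age: int, has_parental_consent: bool) -> bool:
    # COPPA's special regime applies only to children under 13.
    if age >= 13:
        return True
    # Under 13: verifiable parental consent must be obtained first.
    return has_parental_consent

print(may_collect_data(15, False))  # True: the under-13 rule does not apply
print(may_collect_data(11, False))  # False: collection must wait for consent
print(may_collect_data(11, True))   # True: parental consent is on file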

Steps Taken by Canada:- In Canada, data privacy is governed by the Personal Information Protection and Electronic Documents Act (PIPEDA). It is a federal law that sets rules for how private organizations may collect, use, and disclose personal information in the course of commercial activity.

PIPEDA also embodies several significant principles, such as:-

Personal information protection:- the Act applies to individuals' personal data, including names, addresses, and other identifying information.

Consent:- organizations must obtain individuals' consent for the collection, use, and disclosure of their data.

Data security:- organizations must safeguard the personal data they hold; protecting individuals' data is the organization's duty.

Rights of the individual:- every individual has the right to access their personal information.

Steps Taken by Nigeria:- The Nigeria Data Protection Regulation of 2019 has been replaced by the Nigeria Data Protection Act, 2023, a very significant step in the area of data privacy and data protection. The Act governs how personal data is collected, processed, and stored, with specific provisions relevant to AI technologies, particularly those used in the financial services and telecommunications sectors (Owolabi, 2023). In comparison, countries like the USA have adopted more mature and robust frameworks to regulate AI technologies (Papyshev & Yarime, 2023).

Principles incorporated in the Act include:-

Consent management:- every organization must obtain consent from individuals before using their personal data.

Right to access:- every individual has the right to access their data.

Right to restrict use of data:- every individual has the right to restrict an organization's use of their personal data.

The Act further provides for a Data Protection Officer (DPO) to oversee wrongful activities involving personal data, and for the establishment of a national supervisory authority, the Nigeria Data Protection Commission, which has the power to impose sanctions on organizations that fail to comply.

The Act also makes provision for fines and penalties. The 2023 Act includes harsher penalties for non-compliance, with fines of up to ₦500 million (about $1.1 million USD) for major offenses.

Steps Taken in the Asia-Pacific Region:- Countries in the Asia-Pacific region have adopted differing approaches: some have embraced consent-based models, while others have not. Many countries in the region have passed data privacy and data protection legislation, including:-

Singapore: Personal Data Protection Act (PDPA)

Japan: Act on the Protection of Personal Information

China: Personal Information Protection Law (PIPL)

Australia: Privacy Act 1988 (updated in 2023)

Regional variations in privacy laws within the Asia-Pacific are significant. While some countries prioritize individual consent and notice, others focus on accountability and data localization. Harmonization efforts are underway, such as the Asia-Pacific Economic Cooperation (APEC) Privacy Framework, aiming to bridge gaps and establish common principles across the region.

 

Digital Privacy in India

The legal framework governing digital privacy in India has evolved considerably in recent years, with landmark judicial and legislative developments. Historically, India’s approach to privacy was relatively underdeveloped.

The foundational legal instrument for privacy was Article 21 of the Indian Constitution, which guarantees the “right to life and personal liberty.” In 2017, the Supreme Court of India in K.S. Puttaswamy v. Union of India recognized the “right to privacy” as a fundamental right under Article 21, marking a significant shift in India’s privacy jurisprudence.

Following this, the Personal Data Protection Bill (PDPB), 2019 was introduced to regulate the processing of personal data and ensure individual privacy rights. Although the Bill underwent several revisions and debates, its core aims remained constant: to set clear guidelines for data processing, establish data protection authorities, and give citizens greater control over their personal information. It represented a critical step toward harmonizing data protection with India's digital growth.

In August 2023, the Indian Parliament passed the Digital Personal Data Protection Act, 2023 (DPDP Act), which aims to:-

Protect the privacy rights of individuals.

Promote responsible data management practices.

Balance the rights of individuals with the need to process data for lawful purposes.

Prohibit tracking, behavioral monitoring, and targeted advertising directed at children.

It regulates the processing of digital personal data in India by establishing clear rights and obligations for data fiduciaries, which are as follows:-

Consent Requirements:- Data processing requires explicit and informed consent from the individual.

Data Minimization:- Only data essential for the intended purpose may be collected.

Security Measures:- Robust measures must be implemented to protect personal data from breaches.

Accountability and regulatory oversight:- Organizations are accountable for their data processing activities and must comply with the Act.

Storage Limitation:- Personal data may be retained only as long as necessary for its intended purpose.
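Several of these obligations (explicit consent, purpose limitation, and storage limitation) can be pictured together in a minimal, hypothetical Python sketch; the class and field names are illustrative and are not drawn from the Act:

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    principal_id: str      # the data principal (the individual)
    purpose: str           # the specific purpose consented to
    granted_at: datetime   # when explicit consent was given
    retention_days: int    # storage-limitation window

    def permits(self, requested_purpose: str, now: datetime) -> bool:
        # Purpose limitation: data may be used only for the consented purpose.
        # Storage limitation: the record lapses once the retention window ends.
        expires_at = self.granted_at + timedelta(days=self.retention_days)
        return requested_purpose == self.purpose and now <= expires_at

A fiduciary following this pattern would check permits() before every processing operation and delete the data once the retention window lapses.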

The DPDP Act created the Data Protection Board of India (DPB), the first regulatory body in India focused on protecting personal data privacy. Like similar regulatory bodies, the DPB's goal is to oversee compliance and impose penalties on non-compliant organizations.

Responsibilities of Organizations Processing Personal Data

The DPDP Act assigns restrictions and obligations to organizations that process personal data, including the following:-

Obtain consent: Organizations must obtain consent from individuals before processing their personal data, unless an exemption applies.

Purpose limitation: Organizations must use personal data only for the purposes for which it was collected, unless they have obtained the individual's consent for further processing.

Data protection: Organizations must take appropriate technical and organizational measures to protect personal data from unauthorized access, use, disclosure, alteration, or destruction.

Respond to requests: Organizations must respond to individuals' requests for access, correction, deletion, and objection within a reasonable time.

Breach reporting: Organizations must report data breaches to the DPB within 72 hours of becoming aware of the breach.
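The 72-hour window is straightforward deadline arithmetic. A minimal sketch in Python (assuming the 72-hour figure stated above):

from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def breach_report_deadline(became_aware_at: datetime) -> datetime:
    # Latest time by which the breach must be reported to the DPB.
    return became_aware_at + REPORTING_WINDOW

became_aware = datetime(2024, 3, 1, 9, 30, tzinfo=timezone.utc)
print(breach_report_deadline(became_aware))  # 2024-03-04 09:30:00+00:00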

Penalties for Noncompliance:-

Violations of the requirements – in particular, failure to implement the information security measures necessary to mitigate the risk of a personal data breach – can result in fines of up to INR 250 crore (about $30 million). The penalty is less severe than in the 2022 draft, which proposed fines of up to approximately INR 500 crore.
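For readers unfamiliar with Indian numbering, one crore is ten million, so INR 250 crore is 2.5 billion rupees; at an assumed exchange rate of roughly ₹83 per US dollar (an illustrative figure, not from the Act), that is about $30 million:

CRORE = 10_000_000            # 1 crore = 10 million
fine_inr = 250 * CRORE        # 2,500,000,000 rupees
INR_PER_USD = 83              # assumed, illustrative exchange rate
print(round(fine_inr / INR_PER_USD / 1e6, 1))  # ~30.1 (million USD)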

The DPDP Act also provides that a body corporate must publish a comprehensive privacy policy, which must include:-

A clear statement of its practices and policies.

Type of information collected.

Purpose of collection and storage of the data.

Security Measures.

Disclosure policy for the information.

Judgments on privacy law:-

1:- In a recent legal decision of January 2025, the NCLAT (National Company Law Appellate Tribunal) temporarily suspended a five-year ban imposed by the CCI (Competition Commission of India) on data sharing between WhatsApp and its parent company Meta. The CCI had restricted this data sharing citing antitrust concerns; Meta argued that the ban could disrupt WhatsApp's business model in India, leading the NCLAT to suspend the ban while the case is under review.

2:- Pegasus spyware was reportedly able to hack digital devices, access stored data in real time, control the camera and microphone, and operate the device remotely. Writ petitions were filed alleging that the spyware had been used to target private individuals in India. In August 2022, a team of experts appointed by the Supreme Court found no clear proof that the spyware was used on the phones it examined; the report remains sealed and has not been made public.

3:- Manohar Lal Sharma v. Union of India (2021) arose from the Pegasus spyware controversy in India. Advocate Manohar Lal Sharma filed a petition in the Supreme Court seeking an investigation into allegations that the Indian government had used Pegasus spyware to surveil journalists, activists, and politicians. The Indian government neither confirmed nor denied the use of Pegasus, citing national security concerns.

These episodes suggest that the government is not fully committed to privacy protections for all citizens and does not always abide by the laws laid down in its own country. In a similar vein, the DPDP Act itself has several notable gaps:

The law has no retrospective effect and will be enforced only prospectively.

The DPDP Act covers only digital data, not physical data that is collected and stored.

The DPDP Act exempts the government from many of its responsibilities under the Act.

Moreover, the Central Government has the power to exempt certain classes of data fiduciaries from the ambit of the Act.

We therefore need more stringent data privacy laws, because in the digital era it is very difficult to protect personal data.

 

“AI and Privacy Laws in India: Are We Truly Protected by Our Data Shields?”

 

Global Data Shields Unveiled: Key Incidents and Landmark Case Laws across the EU, USA, Canada, Nigeria, and India

 

European Union

The European Union (EU) has some of the most stringent data protection standards globally, encapsulated in the General Data Protection Regulation (GDPR). Since the GDPR’s enactment in May 2018, national regulators and courts have handled numerous cases, testing the boundaries of data privacy in a digital era characterized by vast data flows and sophisticated AI-driven analysis. Below are key incidents and case laws in which EU data privacy laws have been squarely applied.

 

Google Spain SL v. Agencia Española de Protección de Datos (2014)

Before the GDPR took effect, the foundation of modern EU data protection was the Data Protection Directive 95/46/EC. Under this framework, the Court of Justice of the European Union (CJEU) issued a landmark judgment in Google Spain SL v. AEPD. The case was triggered by a Spanish citizen whose name was linked, in Google’s search results, to a newspaper announcement of a real-estate auction related to a social security debt. He requested that Google remove or “delist” links to this information, arguing it was outdated and prejudicial.

The CJEU held that Google, as a search engine, was a data controller under EU law and thus obliged to comply with valid erasure requests unless compelling reasons of public interest justified continued indexing. This decision was pivotal in recognizing what is now commonly referred to as the “right to be forgotten,” laying a legal foundation for Article 17 of the GDPR, which expanded this right to erasure. Although this case predated the GDPR, it influenced the regulation’s drafting, heralding the EU’s firm stance on balancing data usage with individual privacy rights.

 

Schrems I (2015) and Schrems II (2020)

Austrian privacy activist Maximillian Schrems brought two seminal challenges concerning data transfers from the EU to the United States, focusing on whether U.S. surveillance laws undermined EU data protection guarantees.

Schrems I (2015): The case centered on the Safe Harbor framework, which allowed companies to transfer data from the EU to the U.S. if they self-certified compliance with certain privacy principles. Schrems argued that revelations about the U.S. National Security Agency’s mass surveillance programs (e.g., PRISM) meant that Safe Harbor offered inadequate protection. The CJEU agreed, invalidating Safe Harbor on grounds that it failed to ensure EU-equivalent data safeguards, thus forcing businesses to rely on alternative mechanisms such as Standard Contractual Clauses (SCCs) and Binding Corporate Rules (BCRs).

Schrems II (2020): Dissatisfied with the replacement Privacy Shield framework, Schrems launched another challenge. The CJEU struck down Privacy Shield, citing similar concerns over surveillance and insufficient legal remedies for EU citizens in U.S. courts. While the judgment upheld Standard Contractual Clauses in principle, it emphasized that organizations must conduct case-by-case assessments to ensure that recipient countries provide GDPR-equivalent protections. These judgments have had far-reaching consequences for AI-driven global data flows, as multinational companies must evaluate whether their cross-border data transfers meet the EU’s rigorous requirements.

 

CNIL Enforcement Against Google LLC (2019)

One of the first major penalties under the GDPR was levied against Google by the French data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL). In January 2019, CNIL imposed a €50 million fine on Google for failing to provide transparent information about its data processing activities and for not validly obtaining consent when personalizing ads. The regulator held that Google’s consent mechanism was neither “specific” nor “unambiguous,” and that users were not adequately informed about the scope of data collection.

 

While not a single-court “case” but rather an administrative enforcement action, this incident illustrated the stringent approach the EU takes toward companies that handle personal data at scale. It also underscored the accountability principle of the GDPR, under which data controllers must demonstrate compliance, particularly in contexts involving AI-driven advertising and user profiling.

 

Clearview AI Investigations in Europe

Clearview AI, a U.S.-based startup specializing in facial recognition software, reportedly scraped billions of images from social media and other public sites. Multiple European data protection authorities initiated investigations. While the most definitive regulatory actions took place in Canada and other jurisdictions, the EU’s reaction—driven by concerns around large-scale biometric data processing without individuals’ consent—exemplifies how regulators use GDPR’s broad territorial scope to challenge AI-based ventures that collect EU citizens’ data. Some EU regulators have insisted that Clearview AI delete EU-origin data, underscoring the union’s protective posture regarding biometric identifiers and AI-driven surveillance.

 

United States

The United States has no single, overarching federal data privacy law comparable to the GDPR, but rather a patchwork of legislation at both federal and state levels. Case law emerges from federal agencies (particularly the Federal Trade Commission), state regulators, and private lawsuits challenging corporate or government data practices. Key incidents and cases show how evolving AI technologies and large-scale data processing practices collide with this fragmented legal framework.

 

FTC Enforcement Action Against Facebook (Cambridge Analytica, 2019)

In the notorious Cambridge Analytica scandal, personal data of approximately 87 million Facebook users was harvested without clear consent for political advertising and voter profiling. The Federal Trade Commission (FTC) launched an investigation into Facebook’s data practices, ultimately resulting in a $5 billion settlement in 2019—one of the largest fines in FTC history. Although some criticized the settlement for not demanding deeper reforms, it set a precedent that U.S. regulators are willing to penalize tech giants for privacy violations.

This incident is illustrative of how AI-driven profiling and data analytics can exploit vast user datasets for targeted influence operations. While the FTC’s authority stems primarily from Section 5 of the FTC Act (which prohibits unfair or deceptive acts or practices), the settlement’s conditions laid the groundwork for more robust oversight of data-hungry AI platforms.

 

hiQ Labs v. LinkedIn

This case stemmed from a dispute over web scraping and the use of publicly available personal data for AI-driven analytics. hiQ Labs scraped LinkedIn profiles to develop predictive algorithms for employee attrition, prompting LinkedIn to threaten legal action under the Computer Fraud and Abuse Act (CFAA). hiQ sued for a declaratory judgment to continue scraping public profiles, arguing that the CFAA did not apply to public websites.

The Ninth Circuit Court of Appeals largely sided with hiQ, reasoning that publicly available information did not, in this context, constitute “unauthorized” access under the CFAA. LinkedIn’s appeal to the Supreme Court resulted in a remand to the Ninth Circuit, leaving the legal question somewhat unsettled. Nevertheless, the case underscores the complexities of applying legacy statutes to modern AI-based data collection. It also highlights that data aggregators training AI models on publicly available information may not always be running afoul of the law—depending on jurisdiction and the interpretation of “unauthorized” access.

 

Biometric Information Privacy Act (BIPA) Litigation in Illinois

Illinois’s BIPA is one of the strictest biometric data privacy laws in the U.S., requiring organizations to obtain explicit, informed consent before collecting or storing biometric identifiers. A wave of class-action lawsuits has been filed under BIPA against tech companies that use facial recognition or voiceprint analysis. For example, Facebook settled a BIPA class action for $650 million in 2020 over its “Tag Suggestions” feature, which scanned user photos to suggest tags based on facial recognition.

 

Several other companies—ranging from retailers using face-scanning security systems to AI-driven platforms storing voiceprints—have faced BIPA lawsuits. These cases collectively demonstrate that while federal data privacy legislation remains piecemeal, individual states can enforce potent rules that significantly impact AI developers and data-handling practices.

 

State-Level Consumer Privacy Acts and AI Relevance

California’s Consumer Privacy Act (CCPA) and its successor, the California Privacy Rights Act (CPRA), grant consumers rights to know, access, and delete personal information collected by businesses. While these acts are not focused solely on AI, they apply to any organization that processes large volumes of consumer data for analytics, profiling, or automated decision-making. Lawsuits have begun testing these provisions in AI contexts—for example, claims that companies fail to adequately disclose or secure the data used to train recommendation algorithms. Enforcement by the California Attorney General’s office shows growing willingness to hold companies accountable under these new state laws.

 

Canada

Canada’s primary federal data protection statute, the Personal Information Protection and Electronic Documents Act (PIPEDA), is enforced by the Office of the Privacy Commissioner (OPC). Provincial laws and sector-specific regulations also interplay, creating a multifaceted framework. Canadian courts and regulators have addressed the use of personal data for AI systems, highlighting important precedents on consent, transparency, and fair use.

 

Clearview AI Investigation and Findings (2020–2021)

In what has become a notable multi-jurisdictional case, Canadian authorities found that Clearview AI’s scraping of billions of images from the internet—without obtaining consent from individuals—violated federal and provincial privacy laws. The company sold access to a facial recognition database to law enforcement agencies, raising concerns about mass surveillance and the chilling effects on free expression. Following the OPC’s investigation, Clearview AI was ordered to cease offering its services in Canada and delete images belonging to Canadian residents.

This incident illuminates how data privacy authorities can enforce existing principles (under PIPEDA or parallel provincial statutes) against AI companies that argue their data sources are “public.” The ruling solidified that privacy rights persist even when personal data is theoretically accessible on social media platforms, forcing AI-focused enterprises to reassess compliance strategies.

 

PIPEDA Compliance in AI-Driven Retail and Banking

Canadian courts have also indirectly dealt with AI’s implications through broader interpretations of PIPEDA. Retailers employing AI-driven analytics, such as smart cameras for behavioral tracking, have been investigated by provincial privacy commissioners, particularly when these systems collect identifiable information without explicit consumer knowledge. Meanwhile, banks and financial services companies using AI-based credit scoring have been instructed to disclose how they gather, store, and use personal data, ensuring compliance with basic fair information principles like consent, purpose specification, and data minimization.

Although individual lawsuits are often resolved through settlements or mediated outcomes, the overall enforcement trend signals that Canada’s privacy regulators interpret PIPEDA and provincial laws flexibly enough to cover a range of AI applications. These authorities emphasize informed consent and require that AI developers implement privacy by design—demonstrating that even older legislation can adapt to modern data challenges.

 

Anticipated Reforms: Consumer Privacy Protection Act (CPPA) and Artificial Intelligence and Data Act (AIDA)

While not fully enacted at the time of writing, the proposed CPPA and AIDA reflect Canada’s endeavor to modernize its legal framework for digital governance. Should these bills pass, they will likely empower regulators with stronger enforcement tools and clarify obligations for organizations deploying AI systems. Though not an existing “case law,” the legislative trajectory in Canada indicates a proactive stance on ensuring that AI-driven innovations respect privacy rights. Future jurisprudence, building on PIPEDA precedents, will likely set new standards for AI governance in Canada.

 

Nigeria

Nigeria is emerging as a technological powerhouse in Africa, with innovative AI-driven solutions gaining traction across finance, telecom, and health care. Data privacy in Nigeria is principally governed by the Nigeria Data Protection Regulation (NDPR), which came into effect in 2019 under the oversight of the National Information Technology Development Agency (NITDA). Although still relatively new, the NDPR has been applied in several incidents, signaling the growth of a local data protection regime.

 

Enforcement Actions Under NDPR

Since its introduction, the NDPR has prompted some organizations to file mandatory annual data audits and implement privacy policies. NITDA has actively issued compliance notices and, in certain cases, imposed fines for non-compliance. A notable early enforcement concerned unauthorized disclosures of customer data by businesses in the marketing and telecom sectors, illustrating that NDPR provisions on consent and data minimization apply broadly.

 

AI-Driven Lending Apps and Privacy Concerns

One context where NDPR enforcement intersects with AI usage is digital lending. Several FinTech platforms employ AI-powered algorithms to determine creditworthiness, collecting extensive personal data from a user’s phone—such as call logs, contact lists, and geolocation—sometimes without explicit or fully informed consent. Complaints lodged with NITDA allege that these practices violate NDPR principles, particularly around transparency and purpose limitation.

While specific court judgments on AI-based lending apps are scant, regulatory interventions highlight that NDPR enforcers view excessive data collection with suspicion. They also stress the requirement to secure freely given consent, especially for sensitive or biometric data if AI processes biometrics for identity verification.

Data Breach Notifications and AI-Related Violations

The NDPR mandates that organizations notify authorities and affected individuals in the event of data breaches. Instances have surfaced in which companies using AI-driven customer relationship systems experienced breaches exposing personal information. While the NDPR lacks the massive fine structures of the EU’s GDPR, NITDA has been proactive in investigating these violations and compelling remedial measures. These incidents, though not always litigated in traditional courts, affirm that Nigeria’s evolving privacy framework can apply to AI-based tools when they mishandle or fail to secure personal data.

 

India

India’s journey toward a comprehensive data protection regime has been heavily influenced by the Supreme Court’s recognition of privacy as a fundamental right in Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India (2017). Although the dedicated data protection law discussed above, the DPDP Act, 2023, is still being operationalized, the interplay between judicial precedents and sectoral regulations offers insight into how data privacy norms are enforced. Below are the key incidents and cases illustrating how data privacy law has been (or could be) applied, particularly in the context of AI.

 

Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India (2017)

While not a data privacy “case” in the strict sense, the Supreme Court’s unanimous ruling in Puttaswamy fundamentally altered India’s constitutional landscape. The Court declared that privacy is intrinsic to the right to life and personal liberty under Article 21 of the Constitution. This precedent paved the way for stronger legislative and judicial scrutiny of data collection practices. It specifically mentioned the need for a robust data protection framework in an era of digital transformations, including AI. Consequently, any future AI-related privacy dispute in India will likely draw from Puttaswamy’s articulation of privacy as a fundamental right.

 

Aadhaar Verdicts and Biometric Data Concerns:-

India’s ambitious biometric identification program, Aadhaar, has been subject to multiple legal challenges for alleged privacy violations. In the 2018 Aadhaar judgment (Puttaswamy v. Union of India, 2018), the Supreme Court upheld the program’s constitutionality but imposed restrictions on private entities’ access to Aadhaar data. Though Aadhaar is not strictly an “AI-based” system, it exemplifies how large-scale data collection—biometrics, demographic details—can ignite privacy debates. Subsequent litigation has addressed unauthorized use of Aadhaar-based authentication in FinTech and telecom, echoing concerns that advanced algorithms might exploit biometric information for profiling or surveillance without sufficient safeguards.

Emerging Disputes Over Facial Recognition Technology:-

Several state police departments in India have rolled out AI-driven facial recognition systems (FRS) to identify criminals or missing persons. Civil liberty groups have challenged such deployments in High Courts, arguing they lack a statutory basis and violate the fundamental right to privacy. Although no definitive Supreme Court ruling has yet emerged on the matter, interim orders sometimes require authorities to clarify the legal basis, data retention policies, and accuracy of these AI-driven tools.

Because India’s existing IT Act and Rules do not explicitly govern AI-based facial recognition, litigants have invoked Puttaswamy to demand that any intrusion be proportionate, necessary, and subject to robust oversight. If a final ruling asserts these principles, it could become a seminal case in India’s data privacy jurisprudence—paving the way for specialized rules on AI-driven surveillance.

 

Legal Contests Involving Digital Lending Apps:-

Similar to Nigeria, India has seen a proliferation of digital lending platforms that deploy AI to screen borrowers. The Reserve Bank of India (RBI) has received complaints about predatory practices, including unauthorized access to phone contacts and personal files. While most actions to date have been regulatory guidelines from the RBI rather than judicial rulings, some lawsuits have been filed in consumer courts alleging privacy violations. Litigants typically claim that the broad permissions requested by these apps amount to coerced or uninformed consent, contravening established privacy norms.

 

Though outcomes vary, the controversies illustrate how AI-driven personal data collection collides with nascent Indian privacy standards. In the absence of a comprehensive data protection law, courts and administrative bodies rely on fundamental rights doctrine (from Puttaswamy) and consumer protection rules to rein in invasive data practices.

 

Anticipated Effects of the Digital Personal Data Protection Act:-

Although the Digital Personal Data Protection (DPDP) Act was passed in August 2023, as discussed earlier, it is not yet fully operational. The Act codifies consent mechanisms, defines data fiduciaries, and establishes a Data Protection Board. Legal experts anticipate an uptick in challenges to AI-based data processing once the Act is enforced in full. Although no direct case law exists under it yet, its implementation will likely lead to litigation clarifying how automated decision-making, data profiling, and large-scale analytics fit within the emerging Indian privacy framework. Courts will probably build on Puttaswamy to delineate the permissible contours of AI-driven data use.

 

Closing Observations on Cross-Jurisdictional Trends:-

From Europe’s sweeping GDPR enforcement actions (such as Google Spain and Schrems), to the United States’ patchwork but increasingly active arena of lawsuits (e.g., Cambridge Analytica and hiQ Labs), to Canada’s firm stance on AI-based biometric data (e.g., Clearview AI), to Nigeria’s nascent but growing NDPR enforcement, and finally to India’s pivotal constitutional judgments (Puttaswamy)—courts and regulators worldwide are grappling with the transformative impact of AI on privacy rights.

Although the precise contours of data privacy vary by jurisdiction, a unifying theme is the recognition that AI’s insatiable appetite for personal information necessitates robust legal oversight. Each of the cases and incidents summarized above demonstrates how existing or recently passed regulations are being tested in real-world scenarios. As AI technologies continue to evolve, future case law will likely revolve around questions of meaningful consent, proportional use of personal data, transparency in automated decision-making, and the right to challenge AI-driven outcomes. The cumulative effect of these judicial and regulatory precedents is shaping a rapidly developing global framework for data privacy in the age of intelligent machines.

 

Conclusion and suggestions

This paper has shown how AI is both a boon and a bane. It contributes to building the future, but excessive, unchecked use can obstruct the path of innovation. It should therefore be properly regulated. With the support of government and the legislature, AI can be used in a way that keeps innovation going while a brighter future is welcomed.

Data protection laws should be strictly enforced and backed by meaningful penalties. Individuals, for their part, should not rely completely on AI and must continue to meet basic needs through their own abilities.

 

References:-

 

1. https://doi.org/10.56781/ijsrr.2024.5.1.0044

2. https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-the-gdpr/

3. www.fepbl.com/index.php/ijarss

4. https://www.researchgate.net/publication/378779704_PRIVACY_LAW_CHALLENGES_IN_THE_DIGITAL_AGE_A_GLOBAL_REVIEW_OF_LEGISLATION_AND_ENFORCEMENT

5. Cortez, E.K. (ed.) (2020). Data Protection Around the World: Privacy Laws in Action (Vol. 33). Springer Nature.

6. Tamburri, D.A. (2020). Design principles for the General Data Protection Regulation (GDPR): A formal concept analysis and its evaluation. Information Systems, 91, 101469.

7. Jiang, X. (2022). Governing cross-border data flows: China's proposal and practice. China Quarterly of International Strategic Studies, 8(01), 21-37.

8. Hartzog, W., & Richards, N. (2020). Privacy's constitutional moment and the limits of data protection. Boston College Law Review, 61, 1687.

 

 

Commission Nationale de l’Informatique et des Libertés (CNIL) (2019)

CNIL’s restricted formation imposes a financial penalty of 50 million euros against GOOGLE LLC.

[Online]. Available at: https://www.cnil.fr/en/cnils-restricted-formation-imposes-financial-penalty-50-million-euros-against-google-llc [Accessed 30 January 2025].

 

Court of Justice of the European Union (CJEU) (2014)

Judgment in Case C-131/12, Google Spain SL v. AEPD.

[Online]. Available at: https://curia.europa.eu/juris/document/document.jsf?docid=152065&doclang=EN [Accessed 30 January 2025].

 

Court of Justice of the European Union (CJEU) (2015)

Judgment in Case C‑362/14, Maximillian Schrems v Data Protection Commissioner (Schrems I).

[Online]. Available at: https://curia.europa.eu/juris/document/document.jsf?docid=169195&doclang=EN [Accessed 30 January 2025].

 

Court of Justice of the European Union (CJEU) (2020)

Judgment in Case C‑311/18, Data Protection Commissioner v Facebook Ireland Ltd (Schrems II).

[Online]. Available at: https://curia.europa.eu/juris/document/document.jsf?docid=228677&doclang=EN [Accessed 30 January 2025].

 

Federal Trade Commission (FTC) (2019)

FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook.

[Online]. Available at: https://www.ftc.gov/news-events/news/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions-facebook [Accessed 30 January 2025].

 

Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) (2020)

Press Release: Fine imposed on H&M for data protection violations.

[Online]. Available at: https://datenschutz-hamburg.de/assets/pdf/Press_release_HM.pdf [Accessed 30 January 2025].

 

National Information Technology Development Agency (NITDA) (2019)

Nigeria Data Protection Regulation (NDPR).

[Online]. Available at: https://nitda.gov.ng/wp-content/uploads/2023/01/NigeriaDataProtectionRegulation2019.pdf [Accessed 30 January 2025].

 

Office of the Privacy Commissioner of Canada (OPC) (2021)

Commissioner finds Clearview AI violated federal and provincial privacy laws.

[Online]. Available at: https://www.priv.gc.ca/en/opc-news/news-and-announcements/2021/nr-c_210202/ [Accessed 30 January 2025].

 

Supreme Court of India (2017)

Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India (W.P. (C) No. 494 of 2012).

[Online]. Available at: https://main.sci.gov.in/supremecourt/2012/35071/35071_2012_Judgement_24-Aug-2017.pdf [Accessed 30 January 2025].

 

The Ninth Circuit Court of Appeals (2022)

hiQ Labs, Inc. v. LinkedIn Corp., No. 17-16783.

[Online]. Available at: https://cdn.ca9.uscourts.gov/datastore/opinions/2022/04/18/17-16783.pdf [Accessed 30 January 2025].

 

https://www.tableau.com/data-insights/ai/history

 

https://autogpt.net/20-ai-devices-you-should-know-about/
