Ajit Kumar, Pranjal Sahay, Deepika Mehra, Sakshi Agarwal
Abstract
The use of AI is increasing day by day, and its advent has brought significant changes on a global scale. Due to AI, manual work has been reduced and smart automation has evolved; the accuracy of work has increased while human labour has decreased, and time is saved. However, with the rise of AI, the misuse of personal data is also increasing. Individuals, companies, and organizations can collect, use, and disclose personal data with the help of AI.
In this technological era, misusing personal data poses little difficulty for those skilled in technology. Data misuse is rising, and the crucial question is: how can we control it? Controlling data misuse is a challenging task in today’s digital age. This paper discusses the steps taken at both global and national levels to regulate the misuse of personal data. The European Union implemented the General Data Protection Regulation (GDPR) on May 25, 2018, to safeguard individuals’ privacy.
On the other hand, the United States has adopted a sectoral approach to data protection. Various regulations have been enacted, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA), to regulate data privacy in different sectors. Canada has also introduced legislation to ensure data protection—the Personal Information Protection and Electronic Documents Act (PIPEDA).
Recently, Nigeria replaced its old data protection regulation with the Nigeria Data Protection Act, 2023. Similarly, India passed its new data protection legislation, the Digital Personal Data Protection Act (DPDP Act) of 2023, which received Presidential assent on August 11, 2023; its provisions come into force on dates notified by the Central Government.
Introduction
The emergence of artificial intelligence has proved to be a boon for society, benefiting not only individuals but also industries. AI has become indispensable in today’s world. It is a man-made intellect created to drive innovation and creativity, and it has demonstrated its significance in ways that were once considered impossible. It not only predicts problems but also provides solutions that can be implemented when needed.
From an industrial perspective, AI is contributing its intelligence across various sectors, including education, healthcare, logistics and transportation, retail and e-commerce, banking and financial institutions, and many more. Today, AI is no longer just a luxury but a necessity in our daily lives. However, what is a boon today may become a bane tomorrow, as every coin has two sides.
While AI offers immense benefits, it also poses threats, such as the infringement of individuals’ privacy and the rise of cybercrime, which can harm society and hinder progress. To address these challenges, global initiatives have been taken by legislators to protect individual rights and maintain law and order.
Legislation has been introduced in the form of data protection laws, such as India’s Digital Personal Data Protection Act (DPDP Act), the European Union’s General Data Protection Regulation (GDPR), and the California Consumer Privacy Act (CCPA) in the United States. These regulations aim to govern how organizations collect, process, and store personal data, highlighting the importance of human intelligence in overseeing artificial intelligence.
History of Artificial Intelligence
Artificial Intelligence has become an essential part of our lives. To better understand its functioning, let’s explore its origins. The idea of artificial beings can be traced back to ancient times, from the mechanical creations of Greek mythology to the Golem of Jewish folklore. Aristotle’s work on formal logic also played a crucial role in shaping early conceptions of mechanical reasoning.
With the advent of the digital revolution, scientists envisioned creating a machine that could mimic human intellect, and this led to the birth of AI. The term “Artificial Intelligence” was first coined at the Dartmouth Conference in 1956.
The 1950s and 1960s witnessed early successes in game playing and theorem proving; however, the “AI Winter” of the 1970s followed due to unfulfilled expectations for progress.
In the 1980s, expert systems were developed to solve problems using rule-based reasoning.
By the 1990s, computing power and data availability had significantly increased. Additionally, machine learning expanded, enabling systems to learn from data without explicit programming.
By 2010, neural networks began achieving significant advances in complex data analysis, including image and language processing.
Major turning points followed: IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, and DeepMind’s AlphaGo defeated Go champion Lee Sedol in 2016.
As AI continues to develop, it has the potential to revolutionize society; however, ethical concerns such as algorithmic bias and employment displacement must be carefully considered.
Opportunities with AI
AI presents an incredible opportunity, knocking at our doors. This opportunity can be understood from three different perspectives: individual perspective, industrial perspective, and contingent perspective.
Individual Perspective
AI can be integrated into individuals’ lives to alleviate loneliness. It offers a degree of emotional engagement, providing companionship, especially to people who live alone, whether due to employment reasons or personal circumstances.
Certain AI tools like Alexa, Siri, and Rabbit R1 not only answer queries but also engage in polite and meaningful conversations, making users feel less isolated. These technologies act as digital companions or acquaintances.
Industrial Perspective
AI has benefitted various sectors of society, not just at a national level but on a global scale.
Education Sector
AI has enabled students to expand their knowledge beyond traditional fields. With the help of AI-powered prompt engineering, students can enhance their research and innovation.
It has simplified learning by introducing technologies that prepare students for interviews, skill acquisition, and exploration.
Teachers can track students’ progress using AI software like Brisk Teaching, Gradescope, SchoolAI, MagicSchool, and more.
Healthcare Industry
AI has proved to be a boon in the healthcare industry. With advanced monitoring technologies, diseases can now be detected in their earlier stages, improving patient outcomes.
AI has also reduced errors in dosage administration, introduced virtual nursing assistants, minimized fraud, and streamlined administrative procedures.
Transportation Industry
In transportation, AI has advanced electric and autonomous vehicles, reducing fuel consumption and environmental impact. These innovations not only cut spending on petrol and diesel but also contribute to a cleaner environment by reducing the use of fossil fuels.
Additionally, AI has facilitated efficient trade between countries by optimizing logistics and supply chain management.
Banking and Financial Institutions
AI has enhanced security and fraud detection in the banking sector. It provides customer protection against fraudulent transactions and secures e-commerce payments.
Additionally, AI ensures the safety of private data and streamlines financial operations, benefiting both retail and online businesses.
Legal Institution
AI has allowed the legal industry to expand its outreach beyond statutes and precedents. It has introduced legal websites and tools that were much needed in the industry. Platforms like Amto.ai, Harvey.ai, and others have revolutionized legal research and drafting.
With the help of AI, courts can now conduct virtual hearings and e-filing, making legal proceedings more efficient. Additionally, the introduction of SUVAS (Supreme Court Vidhik Anuvaad Software) and SUPACE (Supreme Court Portal for Assistance in Court’s Efficiency) has enhanced the understanding and management of legal matters within courts.
Contingent Perspective
AI is not only relevant today but will also play a crucial role in the future. It prepares society for tomorrow by predicting potential challenges and providing solutions in advance.
For instance, GraphCast, an AI weather model developed by Google DeepMind, can produce a ten-day forecast within minutes, helping farmers plan accordingly. Such advancements demonstrate AI’s ability to foresee events and assist in decision-making across various industries.
Criticism and Problems of AI
While AI is widely acclaimed for its efficiency and intelligence, it is also criticized due to the dual nature of technology—its pros and cons. While its advantages are celebrated, its disadvantages should not be ignored.
AI possesses intelligence beyond human imagination and continues to evolve. However, its rapid growth raises serious concerns that could severely impact humanity. AI has created a virtual space for entertainment and convenience, but it has also introduced problems that require careful regulation and caution.
One major issue is the rise in cybercrime. AI is widely used on social media platforms to create reels and digital content for entertainment, but it is also being misused. Cloning, voice modulation, and image forgery have led to serious ethical concerns. AI-generated fake images and videos can manipulate reality, leading to social and emotional distress. The spread of fake news increases the risk of defamation and misinformation, which can harm individuals and organizations.
AI also poses a threat to human physical and neurological well-being. As AI takes over tasks once performed by humans, people are becoming increasingly dependent on it. This decreased physical activity can have negative health effects, while the ease of accessing AI-generated solutions may lead to a decline in critical thinking and cognitive function. Over time, excessive reliance on AI could reduce human interaction, as people may turn to AI for emotional support instead of seeking connections with other humans.
The financial impact of AI is another major concern. AI is far more efficient and cost-effective than human labour. For example, Amto.ai can draft legal documents within minutes, whereas human lawyers take significantly longer. As more AI-powered tools emerge, businesses may increasingly rely on AI for its speed and accuracy, leading to a decline in job opportunities for humans.
AI also has the potential to alter the natural environment. While it can create virtual ecosystems, there is a risk that AI-generated solutions may disrupt the actual natural balance. If AI eventually leads to the introduction of artificial natural resources, it could severely impact the earth’s ecosystem, posing a threat to all living beings.
Another aspect that should be brought forward is the infringement of privacy by AI.
AI is present everywhere in the cyber world today. On almost any website or application, AI collects individuals’ personal information, which can affect their privacy. This sensitive information can be used against an individual and may create threats in both personal and professional life. Students generate resumes through AI tools, agreeing to terms and conditions that are usually overlooked; likewise, the terms and conditions shown during online banking transactions are often ignored. This clearly threatens individuals’ privacy, as personal information may fall into the wrong hands, leading to cyber problems such as online threats, obscenity, and fraud.
In order to resolve these problems, certain legislations have been introduced to take control of AI.
Data Privacy Law at the Global Level
Due to advancements in technology, such as the advent of AI, the internet, mobile devices, and social media platforms, the misuse of personal data is increasing day by day. So, the question is: How can we solve it? To address this issue, countries at the global level have taken some important steps. For example, The European Union (EU) passed the General Data Protection Regulation (GDPR) in 2018, and the USA has adopted a sectoral approach to privacy regulation. Other countries, such as Canada and Nigeria, have also taken measures.
All these steps will be discussed in the following manner:
Steps Taken by the European Union (EU)
The European Union (EU) stands at the forefront of global privacy protection with the General Data Protection Regulation (GDPR), which took effect in May 2018 (Cortez, 2020). The GDPR introduces a set of fundamental principles to govern the processing of personal data (Tamburri, 2020).
It was enacted to regulate personal data and unify data protection laws across Europe. Its reach is not confined to EU territory: the GDPR also applies to organizations outside the EU that offer goods or services to, or monitor the behaviour of, individuals in the EU. Crucially, the regulation establishes a robust framework for cross-border data transfers, promoting a unified approach to international data flows (Jiang, 2022; Okunade et al., 2023).
This act contains some key principles, such as:
a) Lawfulness, fairness, and transparency
b) Purpose limitation
c) Data minimization
d) Accuracy
e) Storage limitation
f) Integrity and confidentiality
g) Accountability
Furthermore, this act provides certain rights to individuals, such as:
- Right to Access
- Right to be Forgotten
- Right to Rectification
- Right to Object
- Right to Data Portability
- Right to Restriction of Processing
This act also provides for the appointment of a Data Protection Officer (DPO), which certain organizations are required to appoint. The DPO’s main duty is to oversee data protection activities and prevent misuse.
Additionally, the act includes provisions for penalties for non-compliance. Organizations can face penalties of up to €20 million or 4% of their global annual turnover, whichever is higher.
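The “whichever is higher” rule is simple arithmetic. A minimal sketch (the function name and example turnover figures are ours for illustration; the thresholds are the upper-tier limits of GDPR Article 83(5)):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper-tier GDPR fine ceiling: the higher of EUR 20 million
    or 4% of worldwide annual turnover (Art. 83(5))."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A firm with EUR 2 billion turnover: 4% = EUR 80 million, above the floor.
print(gdpr_max_fine(2_000_000_000))   # 80000000.0
# A smaller firm: 4% would be only EUR 4 million, so the EUR 20 million floor applies.
print(gdpr_max_fine(100_000_000))     # 20000000.0
```

The percentage-based tier is what makes the ceiling meaningful for the largest data controllers, for whom a fixed sum alone would be negligible.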
Steps Taken by the USA
Unlike the EU, the United States follows a sectoral approach to privacy regulation, with laws addressing specific industries and types of data (Hartzog and Richards, 2020). There is no single, comprehensive data protection law like the GDPR; instead, data privacy is regulated sector by sector within the USA’s jurisdiction.
The USA has enacted several privacy laws, such as:
- California Consumer Privacy Act (CCPA) – This law provides rights to residents of California regarding the collection and use of their personal data. Consumers have the right to know what data is being collected and for what purpose it will be used.
- Health Insurance Portability and Accountability Act (HIPAA) – This law deals with healthcare data. It applies to healthcare providers, health plans, and organizations that handle protected health information (PHI).
- Children’s Online Privacy Protection Act (COPPA) – COPPA applies to operators of websites and online services who knowingly collect data from children under the age of 13. Before collecting data from such children, permission must be obtained from their parents.
- Gramm-Leach-Bliley Act (GLBA) – The GLBA applies to financial institutions, requiring them to establish a privacy policy regarding the financial information of individuals.
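COPPA’s under-13 rule amounts to an age gate before collection. A minimal sketch of that gating logic (the function and constant names are ours; real compliance also requires verifiable parental consent mechanisms and notice requirements well beyond this check):

```python
from datetime import date

# Illustrative only: shows the under-13 gating rule, not full COPPA compliance.
COPPA_AGE_THRESHOLD = 13

def needs_parental_consent(birthdate: date, today: date) -> bool:
    """Return True if the user is under 13 on `today`, meaning data
    collection first requires verifiable parental consent."""
    # Compute age in whole years, accounting for whether the
    # birthday has occurred yet this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age < COPPA_AGE_THRESHOLD

print(needs_parental_consent(date(2015, 6, 1), date(2025, 3, 1)))  # True  (age 9)
print(needs_parental_consent(date(2010, 1, 1), date(2025, 3, 1)))  # False (age 15)
```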
Steps Taken by Canada
In Canada, data privacy is governed by the Personal Information Protection and Electronic Documents Act (PIPEDA). It is a federal law that sets rules for how private organizations can collect, use, and disclose personal information in the course of commercial activity.
Furthermore, it includes some significant principles, such as:
- Personal Information Protection – This act applies to the personal data of individuals, including names, addresses, and other identifying details.
- Consent – Organizations must obtain individuals’ consent before collecting, using, or disclosing their data.
- Data Security – Organizations must maintain and protect individuals’ personal data. It is their duty to ensure data security.
- Right of the Individual – Every individual has the right to access their personal information.
Steps Taken by Nigeria
The Nigeria Data Protection Regulation of 2019 has been replaced by the Nigeria Data Protection Act, 2023. This is a significant step in the area of data privacy and protection. The Act governs how personal data is collected, processed, and stored, with specific provisions relevant to AI technologies, particularly in the financial services and telecommunications sectors (Owolabi, 2023).
In comparison, countries like the USA have adopted more mature and robust frameworks to regulate AI technologies (Papyshev & Yarime, 2023).
Principles Incorporated in This Act
- Consent Management – Every organization must obtain consent from individuals before using their personal data.
- Right to Access – Every individual has the right to access their data.
- Right to Restrict the Use of Data – Every individual has the right to restrict any organization from using their personal data.
The Act further provides for the appointment of a Data Protection Officer (DPO), who will oversee wrongful activities related to personal data. It also establishes the Nigeria Data Protection Commission (NDPC), which has the power to impose sanctions on organizations that fail to comply with the regulations.
Additionally, the Act includes provisions for fines and penalties. The 2023 Act enforces harsher penalties for non-compliance, with fines of up to ₦500 million (approximately $1.1 million USD) for major offenses.
Steps Taken by the Asia-Pacific Region
In the Asia-Pacific region, different approaches have been adopted by different countries. Some nations have embraced consent management, while others have not. Several countries in the region have passed legislation relating to data privacy and protection, including:
- Singapore – Personal Data Protection Act (PDPA)
- Japan – Act on the Protection of Personal Information (APPI)
- China – Personal Information Protection Law (PIPL)
- Australia – Privacy Act 1988 (updated in 2023)
There are significant regional variations in privacy laws within the Asia-Pacific region. While some countries prioritize individual consent and notice, others focus on accountability and data localization. Efforts toward harmonization are underway, such as the Asia-Pacific Economic Cooperation (APEC) Privacy Framework, which aims to bridge gaps and establish common principles across the region.

Digital Privacy in India
The legal framework governing digital privacy in India has evolved considerably in recent years, with landmark judicial and legislative developments. Historically, India’s approach to privacy was relatively underdeveloped.
The foundational legal instrument for privacy was Article 21 of the Indian Constitution, which guarantees the “right to life and personal liberty.” In 2017, the Supreme Court of India, in K.S. Puttaswamy v. Union of India, recognized the “right to privacy” as a fundamental right under Article 21, marking a significant shift in India’s privacy jurisprudence.
Following this, the Personal Data Protection Bill (PDPB), 2019, was introduced to regulate the processing of personal data and ensure individual privacy rights. Although the Bill underwent several revisions and debates before being withdrawn in 2022, its core aims were to:
- Set clear guidelines for data processing.
- Establish data protection authorities.
- Provide citizens with greater control over their personal information.
It represented a critical step toward harmonizing data protection with India’s digital growth.
The Digital Personal Data Protection Act, 2023
In August 2023, the Indian Parliament passed the Digital Personal Data Protection (DPDP) Act, 2023, which aims to:
- Protect the privacy rights of individuals.
- Promote responsible data management practices.
- Balance the rights of individuals with the need to process data for lawful purposes.
- Prohibit tracking, monitoring, and targeted advertising directed at children.
The Act regulates the processing of digital personal data in India by establishing clear rights and obligations for data fiduciaries, which include the following:
1. Consent Requirements
- Data processing requires explicit and informed consent from individuals.
2. Data Minimization
- Only data essential for the intended purpose should be collected.
3. Security Measures
- Robust measures must be implemented to protect personal data from breaches.
4. Accountability
- Organizations are accountable for their data processing activities and must comply with the Act.
5. Storage Limitation
- Personal data should be retained only as long as necessary for its intended purpose.
The DPDP Act also created the Data Protection Board of India (DPB), the first regulatory body in India focused on protecting personal data privacy. Like similar regulatory bodies worldwide, the goal of the DPB is to oversee compliance and impose penalties on non-compliant organizations.
Responsibilities of Organizations (Data Fiduciaries)
The DPDP Act assigns restrictions and obligations to organizations that process personal data, including:
- Obtain consent before processing – Organizations must obtain consent from individuals before processing their personal data unless an exemption applies.
- Use personal data only for the purposes for which it was collected – Further processing requires explicit consent from the individual.
- Protect personal data – Organizations must take appropriate technical and organizational measures to protect personal data from unauthorized access, use, disclosure, alteration, or destruction.
- Respond to individuals’ requests – Requests for access, correction, deletion, and objection must be answered within a reasonable time.
- Report data breaches to the DPB – Organizations must report data breaches to the DPB in the prescribed form, manner, and timelines.
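Several of these obligations reduce to checks a system can enforce at the point of processing. A minimal sketch, with all names, purposes, and retention periods invented for illustration (the Act prescribes no such schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch only: field names, purposes, and retention periods
# are assumptions, not drawn from the DPDP Act's text.

@dataclass
class ConsentRecord:
    purpose: str            # purpose stated when consent was obtained
    granted_at: datetime
    withdrawn: bool = False

@dataclass
class PersonalDataItem:
    value: str
    consent: ConsentRecord
    retention: timedelta    # how long the fiduciary needs the data

def may_process(item: PersonalDataItem, purpose: str, now: datetime) -> bool:
    """Check three obligations from the list above: valid consent,
    purpose limitation, and storage limitation."""
    if item.consent.withdrawn:
        return False                                   # consent requirement
    if purpose != item.consent.purpose:
        return False                                   # purpose limitation
    if now - item.consent.granted_at > item.retention:
        return False                                   # storage limitation
    return True

consent = ConsentRecord("loan-eligibility-check", datetime(2024, 1, 1))
item = PersonalDataItem("PAN: ABCDE1234F", consent, timedelta(days=365))

print(may_process(item, "loan-eligibility-check", datetime(2024, 6, 1)))  # True
print(may_process(item, "targeted-advertising", datetime(2024, 6, 1)))    # False
print(may_process(item, "loan-eligibility-check", datetime(2026, 1, 1)))  # False
```

Encoding the checks this way mirrors the “privacy by design” approach discussed later: obligations are enforced mechanically at every access rather than audited after the fact.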
Penalties for Noncompliance
Violations of the DPDP Act—particularly failure to implement necessary information security measures to mitigate the risk of a personal data breach—could result in fines of up to ₹250 crore ($30 million USD).
This penalty is less severe than the 2022 legislation, which proposed a fine of up to ₹500 crore ($60 million USD).
Additionally, the DPDP Act requires that a body corporate must provide a comprehensive privacy policy, which must include:
A Clear Statement on Its Practices and Policies
- Type of Information Collected
- Purpose for Collecting the Data and Storage
- Security Measures
- Disclosure Policy for the Information
Judgments on Privacy Law
If we go by the recent legal decision in January 2025, the NCLAT (National Company Law Appellate Tribunal) temporarily suspended a five-year ban imposed by the CCI (Competition Commission of India) on data sharing between WhatsApp and its parent company, Meta. The CCI had previously restricted this data sharing, citing antitrust concerns. Meta, on its behalf, argued that the ban could disrupt WhatsApp’s business model in India, leading the NCLAT to suspend the ban while the case is under review.
- Manohar Lal Sharma v. Union of India (2021) relates to the Pegasus spyware controversy in India. Advocate Manohar Lal Sharma filed a petition in the Supreme Court seeking an investigation into allegations that the Indian government used Pegasus spyware against journalists, activists, and politicians. The Indian government neither confirmed nor denied the use of Pegasus, citing national security concerns.
- Pegasus spyware was reportedly able to hack digital devices, access stored data in real time, control the camera and microphone, and operate the device remotely. Writ petitions were filed because the spyware was allegedly used to target private individuals in India. In August 2022, a court-appointed team of experts found no clear proof that the spyware had been used on the phones it examined; the report remains sealed and has not been made public.
These kinds of judgments suggest that the government is not fully committed to privacy protection for all citizens and does not always abide by the laws laid down in its own country. Similarly, the DPDP Act has several shortcomings:
- The law does not have a retrospective effect and will be enforced only in the future.
- The DPDP Act covers only digital data, excluding physical data that is available and stored.
- The DPDP Act exempts the government from its responsibilities under the Act.
- The Central Government has the power to exempt certain classes of data fiduciaries from the ambit of this Act.
Therefore, we need more stringent data privacy laws because, in the digital era, it is extremely difficult to protect personal data.
“AI and Privacy Laws in India: Are We Truly Protected by Our Data Shields?”
Global Data Shields Unveiled: Key Incidents and Landmark Case Laws across the EU, USA, Canada, Nigeria, and India
European Union
The European Union (EU) has some of the most stringent data protection standards globally, encapsulated in the General Data Protection Regulation (GDPR). Since the GDPR’s enactment in May 2018, national regulators and courts have handled numerous cases, testing the boundaries of data privacy in a digital era characterized by vast data flows and sophisticated AI-driven analysis. Below are key incidents and case laws in which EU data privacy laws have been squarely applied.
Google Spain SL v. AEPD (2014)
Before the GDPR took effect, the foundation of modern EU data protection was the Data Protection Directive 95/46/EC. Under this framework, the Court of Justice of the European Union (CJEU) issued a landmark judgment in Google Spain SL v. AEPD. The case was triggered by a Spanish citizen whose name was linked, in Google’s search results, to a newspaper announcement of a real estate auction related to a social security debt. He requested that Google remove or “delist” links to this information, arguing it was outdated and prejudicial.
The CJEU held that Google, as a search engine, was a data controller under EU law and was thus obliged to comply with valid erasure requests unless compelling reasons of public interest justified continued indexing. This decision was pivotal in recognizing what is now commonly referred to as the “right to be forgotten,” laying a legal foundation for Article 17 of the GDPR, which expanded this right to erasure. Although this case predated the GDPR, it influenced the regulation’s drafting, heralding the EU’s firm stance on balancing data usage with individual privacy rights.
CNIL Enforcement Against Google LLC (2019)
One of the first major penalties under the GDPR was levied against Google by the French data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL). In January 2019, the CNIL imposed a €50 million fine on Google for failing to provide transparent information about its data processing activities and for not validly obtaining consent when personalizing ads.
The regulator held that Google’s consent mechanism was neither “specific” nor “unambiguous,” and that users were not adequately informed about the scope of data collection.
While this was not a single-court case but rather an administrative enforcement action, this incident illustrated the stringent approach the EU takes toward companies handling personal data at scale. It also underscored the accountability principle of the GDPR, which requires data controllers to demonstrate compliance, particularly in contexts involving AI-driven advertising and user profiling.
United States
In the United States, one of the most heavily litigated privacy statutes is the Illinois Biometric Information Privacy Act (BIPA). Several companies—ranging from retailers using face-scanning security systems to AI-driven platforms storing voiceprints—have faced BIPA lawsuits. These cases collectively demonstrate that while federal data privacy legislation remains fragmented, individual states can enforce potent regulations that significantly impact AI developers and data-handling practices.
State-Level Consumer Privacy Acts and AI Relevance
California’s Consumer Privacy Act (CCPA) and its successor, the California Privacy Rights Act (CPRA) grant consumers the right to know, access, and delete personal information collected by businesses. While these acts are not exclusively focused on AI, they apply to any organization that processes large volumes of consumer data for analytics, profiling, or automated decision-making.
Lawsuits have begun testing these provisions in AI contexts—for example, claims that companies fail to adequately disclose or secure the data used to train recommendation algorithms. Enforcement by the California Attorney General’s office reflects a growing willingness to hold companies accountable under these new state laws.
Canada
Canada’s primary federal data protection statute, the Personal Information Protection and Electronic Documents Act (PIPEDA), is enforced by the Office of the Privacy Commissioner (OPC). Provincial laws and sector-specific regulations also interplay, creating a multifaceted framework. Canadian courts and regulators have addressed the use of personal data for AI systems, establishing important precedents on consent, transparency, and fair use.
Clearview AI Investigation and Findings (2020–2021)
In what has become a notable multi-jurisdictional case, Canadian authorities found that Clearview AI’s scraping of billions of images from the internet—without obtaining consent from individuals—violated federal and provincial privacy laws. The company sold access to a facial recognition database to law enforcement agencies, raising concerns about mass surveillance and its chilling effects on free expression.
Following the OPC’s investigation, Clearview AI was ordered to cease offering its services in Canada and delete images belonging to Canadian residents.
This incident underscores how data privacy authorities can enforce existing principles (under PIPEDA or parallel provincial statutes) against AI companies that claim their data sources are “public.” The ruling solidified that privacy rights persist even when personal data is theoretically accessible on social media platforms, forcing AI-focused enterprises to reassess compliance strategies.
PIPEDA Compliance in AI-Driven Retail and Banking
Canadian courts have also indirectly addressed AI’s implications through broader interpretations of PIPEDA.
- Retailers employing AI-driven analytics, such as smart cameras for tracking, have been investigated by provincial privacy commissioners, particularly when these systems collect identifiable information without explicit consumer knowledge.
- Banks and financial services companies using AI-based credit scoring have been instructed to disclose how they gather, store, and use personal data, ensuring compliance with key fair information principles such as consent, purpose specification, and data minimization.
Although individual lawsuits are often resolved through settlements or mediated outcomes, the overall enforcement trend suggests that Canada’s privacy regulators interpret PIPEDA and provincial laws as flexible enough to cover a range of AI applications. These authorities emphasize informed consent and require that AI developers implement “privacy by design”, demonstrating that even older legislation can adapt to modern data challenges.
Conclusion and Suggestions
In this paper we have seen how AI has been both a boon and a bane for us. It is contributing to the creation of the future, but excessive usage can create obstructions in the path of innovation, so it should be controlled properly. With the support of our governments and legislative authorities, AI can be used in a way that allows innovation to continue, and a bright future can be welcomed.
Law and order should be followed strictly and backed by penalties. Also, one should not rely completely on AI; one must fulfil basic needs on one’s own.
References:
- https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-the-gdpr/
- www.fepbl.com/index.php/ijarss
- Cortez, E.K. (ed.) (2020). Data Protection Around the World: Privacy Laws in Action (Vol. 33). Springer Nature.
- Tamburri, D.A. (2020). Design principles for the General Data Protection Regulation (GDPR): A formal concept analysis and its evaluation. Information Systems, 91, 101469.
- Jiang, X. (2022). Governing cross-border data flows: China’s proposal and practice. China Quarterly of International Strategic Studies, 8(01), 21–37.
- Hartzog, W., & Richards, N. (2020). Privacy’s constitutional moment and the limits of data protection. BCL Review, 61, 1687.
Commission Nationale de l’Informatique et des Libertés (CNIL) (2019)
CNIL’s restricted formation imposes a financial penalty of 50 million euros against GOOGLE LLC.
[Online]. Available at: https://www.cnil.fr/en/cnils-restricted-formation-imposes-financial-penalty-50-million-euros-against-google-llc [Accessed 30 January 2025].
Court of Justice of the European Union (CJEU) (2014)
Judgment in Case C-131/12, Google Spain SL v. AEPD.
[Online]. Available at: https://curia.europa.eu/juris/document/document.jsf?docid=152065&doclang=EN [Accessed 30 January 2025].
Federal Trade Commission (FTC) (2019)
FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook.
[Online]. Available at: https://www.ftc.gov/news-events/news/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions-facebook [Accessed 30 January 2025].
Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) (2020)
Press Release: Fine imposed on H&M for data protection violations.
[Online]. Available at: https://datenschutz-hamburg.de/assets/pdf/Press_release_HM.pdf [Accessed 30 January 2025].
National Information Technology Development Agency (NITDA) (2019)
Nigeria Data Protection Regulation (NDPR).
[Online]. Available at: https://nitda.gov.ng/wp-content/uploads/2023/01/NigeriaDataProtectionRegulation2019.pdf [Accessed 30 January 2025].
Office of the Privacy Commissioner of Canada (OPC) (2021)
Commissioner finds Clearview AI violated federal and provincial privacy laws.
[Online]. Available at: https://www.priv.gc.ca/en/opc-news/news-and-announcements/2021/nr-c_210202/ [Accessed 30 January 2025].
Supreme Court of India (2017)
Justice K.S. Puttaswamy (Retd.) & Anr. v. Union of India (W.P. (C) No. 494 of 2012).
[Online]. Available at: https://main.sci.gov.in/supremecourt/2012/35071/35071_2012_Judgement_24-Aug-2017.pdf [Accessed 30 January 2025].