IISPPR

Category: Blog

Public Policies
Chhavi Thakur

Ethical AI Frameworks for Financial Inclusion in Developing Economies: A Case Study of India

Incorporating Artificial Intelligence (AI) in financial services can significantly improve financial inclusion in developing countries, especially in India, where a large segment of the population is either unbanked or inadequately served. Nonetheless, the application of AI in this area presents ethical dilemmas, such as bias, insufficient transparency, concerns regarding data privacy, and the possibility of marginalizing disadvantaged groups. This research paper aims to tackle these issues by creating a context-specific ethical AI framework designed for the Indian financial sector, focusing on principles of fairness, inclusivity, and accountability.

Read More »
Blog
Rama Rathore

AI AND DATA PROTECTION: CHALLENGES IN AUTOMATED DECISION-MAKING

Introduction
Artificial Intelligence (AI) is rapidly revolutionizing industries by automating decision-making processes in banking, healthcare, governance, and law. While AI-driven decision-making enhances efficiency and scalability, it also raises significant concerns regarding privacy, fairness, and accountability. India’s legal framework, particularly the Digital Personal Data Protection Act, 2023 (DPDP Act)[1], attempts to address these challenges, but its silence on AI-specific issues calls for a more comprehensive regulatory approach. This article examines the legal, ethical, and policy challenges of AI-powered automated decision-making (ADM) in India and proposes solutions for a balanced regulatory framework.

The Privacy and Security Risks of AI Decision-Making
AI systems require vast amounts of personal data to function, raising significant privacy concerns. In India, AI-driven ADM systems collect information from social media, financial transactions, and biometric databases like Aadhaar[1]. While these technologies improve service delivery, they also risk unauthorized access, data misuse, and mass surveillance. The DPDP Act, 2023, aims to protect personal data through consent-based collection and stringent penalties for non-compliance. However, it does not explicitly regulate AI-specific concerns such as algorithmic profiling, predictive analytics, and real-time surveillance. This gap leaves room for potential data breaches and misuse of sensitive information.

Algorithmic Bias and Discrimination
A significant challenge of AI-driven ADM is the risk of algorithmic bias[1], which can lead to unfair outcomes and discrimination. AI models learn from historical data, which often contains biases related to gender, caste, and socio-economic status. If unchecked, AI-based recruitment tools, credit-scoring systems, and facial recognition technology can reinforce discriminatory patterns, disproportionately impacting marginalized communities. Unlike the EU’s GDPR[1], which enforces transparency in AI decision-making, India’s legal framework does not explicitly address algorithmic fairness. The absence of clear mandates for fairness audits, bias detection, and data diversity standards increases the likelihood of systemic discrimination in AI-powered decision-making processes.

Lack of Transparency and Explainability
One of the most pressing concerns in AI and ADM is the lack of transparency. Many AI models operate as “black boxes,” making decisions without clear explanations. This opacity is particularly problematic in high-stakes sectors like healthcare, law enforcement, and finance, where AI-driven decisions can have life-altering consequences. The DPDP Act does not mandate AI explainability or grant individuals the right to challenge AI-driven decisions. Unlike Article 22 of the GDPR[1], which gives individuals the right to contest automated decisions, India’s legal framework lacks strong provisions for algorithmic accountability, leaving affected individuals with limited legal recourse. Article 22(1) provides: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Legal Framework and Regulatory Challenges in India
India’s current legal landscape for AI and data protection remains fragmented. The DPDP Act, 2023, establishes fundamental data protection guidelines but does not regulate AI-specific concerns. Other relevant laws include:
Information Technology Act, 2000 (IT Act)[1] – Governs cybersecurity and data protection but lacks AI-specific provisions.
Aadhaar Act, 2016[2] – Regulates biometric data collection but does not address AI-driven profiling.
National Data Governance Framework Policy, 2022[3] – Facilitates data sharing for AI research while ensuring security.
EU Artificial Intelligence Act (comparative perspective)[4] – Aims to classify AI systems by risk level and enforce transparency requirements, something India has yet to implement.
India’s lack of a dedicated AI regulation leaves gaps in accountability, making it necessary for policymakers to introduce AI-specific guidelines for fairness, transparency, and accountability.

Accountability and Ethical Responsibility
A critical issue in AI-driven ADM is determining liability. When AI makes a flawed or harmful decision, such as rejecting a job application, denying a loan, or misdiagnosing a patient, who is responsible: the developer, the deploying organization, or the government? Currently, India does not have clear legal provisions assigning liability for AI-related harm[1]. Some legal experts propose a “human-in-the-loop” model, where AI decisions are subject to human oversight, particularly in sensitive domains. Others advocate for AI liability frameworks, ensuring that AI developers and users bear legal responsibility for algorithmic errors and discriminatory outcomes.

Case Studies: AI and Legal Precedents in India and Beyond
Legal actions against AI systems are rising globally. In India, ANI vs OpenAI is a landmark case in which the Delhi High Court is reviewing copyright claims over AI-generated content. Internationally, Microsoft, GitHub, and OpenAI have faced lawsuits over unauthorized data usage in AI training models[1]. While India has begun addressing AI-related disputes, it still lacks a robust legal framework to regulate AI-driven harm effectively. Strengthening regulatory policies is crucial to address AI’s evolving risks. A case was filed in U.S. courts against Microsoft, GitHub, and OpenAI for copyright violation (https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/). Cases have also been filed in the U.S. and Europe by artists, more than 8,500 authors, and media organizations over the alleged theft of their work (https://www.techtarget.com/WhatIs/feature/AI-lawsuits-explained-Whos-getting-sued).

Mitigating Risks: Steps Towards Responsible AI
To ensure AI is used responsibly in India, the following measures must be taken:
Enact AI-Specific Regulations – Introduce laws addressing AI accountability, fairness, and transparency.
Mandate Fairness Audits – Establish independent reviews to detect and mitigate algorithmic bias.
Enhance Explainability Requirements – Require AI systems to disclose decision-making logic, especially in critical sectors.
Align with Global Standards – Adopt best practices from the GDPR and the EU AI Act to ensure AI compliance.
Strengthen User Rights and Redressal Mechanisms – Provide legal channels for individuals to challenge AI decisions and seek redress.
Improve Data Protection Measures – Implement stricter encryption, anonymization, and security protocols for AI-generated data.
Increase Public Awareness – Educate individuals on their rights regarding AI-driven decisions and available legal protections.

FACT OF CONCERN
Around the world, court cases against AI are increasing day by day, especially in the U.S., Europe, and now even in India. The rise in litigation also heightens concerns about individual privacy. According to Google, 50% of bank scams and frauds are carried out using AI. Where there is ADM, there is no limit for

Read More »
Blog
Yash Roy

WHISTLEBLOWING AND CORPORATE GOVERNANCE: STRENGTHENING ETHICAL COMPLIANCE

White-collar crimes, which range from insider trading and fraud to money laundering and cybercrime, cause significant financial and psychological harm to people, companies, and entire economies. Using laws like the Dodd-Frank Act and the Bribery Act, nations including the United States, the United Kingdom, and Singapore have created stringent legal structures to tackle these crimes. India continues to grapple with significant challenges related to enforcement, the protection of whistleblowers, and corporate accountability. In this context, could innovative technological solutions such as blockchain and artificial intelligence provide viable answers?
Consider the notorious Enron scandal, which serves as a quintessential example of corporate malfeasance. Executives engaged in the manipulation of financial records, concealing billions in liabilities while deceiving investors. The repercussions of this scandal resulted in one of the most substantial bankruptcies in history and spurred essential regulatory reforms, including the Sarbanes-Oxley Act, which was designed to improve financial transparency.
This paper intends to delve into the nature of white-collar crime, examining its ramifications and the associated corporate liability. By scrutinizing international legal frameworks and enforcement strategies, it aims to identify the strengths and weaknesses of current legislation and investigate potential reforms that could enhance accountability within the corporate sector.

Read More »
Public Policies
Sakshi Sharma

BREAKING BARRIERS: WOMEN IN BUSINESS

Women’s entrepreneurship drives economic growth and promotes gender equality in India. Despite this, many women face financial and digital hurdles that hinder their involvement. Government initiatives such as Stand-Up India and Mahila e-Haat aim to provide financial support and access to digital marketplaces, helping women entrepreneurs thrive. This study uses various cases to illustrate these programs’ positive effects while addressing ongoing challenges such as financial illiteracy and bureaucratic barriers. To overcome these challenges, steps such as expanding financial inclusion, digital literacy, and mentorship opportunities can be taken to empower women-led businesses and create a more inclusive economy.

Read More »
Public Policies
Shristi Meel

INTERSECTION OF FARM LAWS AND FARMERS: BALANCING REFORMS, RESISTANCE AND SUSTAINABILITY

This study explores the impact of India’s 2020 farm laws on farmers, examining the reforms, the resistance they sparked, and their broader implications for agricultural sustainability. These laws aimed to modernize the sector by increasing market freedom and reducing government control. Supporters believed they would empower farmers by providing more selling options, while critics feared they would favor big corporations and undermine Minimum Support Price (MSP) protections. Massive protests led to the laws being repealed in 2021, underscoring the depth of farmers’ concerns.

Read More »
International Relations
Mohit Sharma

International Human Rights Law and the Refugee Crisis

The global refugee crisis has intensified due to conflicts, persecution, and climate change, challenging international legal frameworks designed to protect displaced individuals. Despite the existence of the 1951 Refugee Convention and the Universal Declaration of Human Rights, refugees often face restrictive immigration policies, xenophobia, and inadequate living conditions in host countries. The principle of non-refoulement, a cornerstone of refugee protection, is frequently undermined by national security concerns and political interests. This article explores the complexities of forced migration, international human rights laws, and the challenges of global responsibility-sharing, emphasizing the need for comprehensive policy reforms and stronger international cooperation to safeguard refugee rights.

Read More »
Blog
Priyanka Tapadia

AI AND DATA PROTECTION: CHALLENGES IN AUTOMATED DECISION-MAKING

Introduction
Artificial Intelligence (AI) is rapidly revolutionizing industries by automating decision-making processes in banking, healthcare, governance, and law. While AI-driven decision-making enhances efficiency and scalability, it also raises significant concerns regarding privacy, fairness, and accountability. India’s legal framework, particularly the Digital Personal Data Protection Act, 2023 (DPDP Act), attempts to address these challenges, but its silence on AI-specific issues calls for a more comprehensive regulatory approach. This article examines the legal, ethical, and policy challenges of AI-powered automated decision-making (ADM) in India and proposes solutions for a balanced regulatory framework.

The Privacy and Security Risks of AI Decision-Making
AI systems require vast amounts of personal data to function, raising significant privacy concerns. In India, AI-driven ADM systems collect information from social media, financial transactions, and biometric databases like Aadhaar. While these technologies improve service delivery, they also risk unauthorized access, data misuse, and mass surveillance. The DPDP Act, 2023, aims to protect personal data through consent-based collection and stringent penalties for non-compliance. However, it does not explicitly regulate AI-specific concerns such as algorithmic profiling, predictive analytics, and real-time surveillance. This gap leaves room for potential data breaches and misuse of sensitive information.

Algorithmic Bias and Discrimination
A significant challenge of AI-driven ADM is the risk of algorithmic bias, which can lead to unfair outcomes and discrimination. AI models learn from historical data, which often contains biases related to gender, caste, and socio-economic status. If unchecked, AI-based recruitment tools, credit-scoring systems, and facial recognition technology can reinforce discriminatory patterns, disproportionately impacting marginalized communities. Unlike the EU’s GDPR, which enforces transparency in AI decision-making, India’s legal framework does not explicitly address algorithmic fairness. The absence of clear mandates for fairness audits, bias detection, and data diversity standards increases the likelihood of systemic discrimination in AI-powered decision-making processes.

Lack of Transparency and Explainability
One of the most pressing concerns in AI and ADM is the lack of transparency. Many AI models operate as “black boxes,” making decisions without clear explanations. This opacity is particularly problematic in high-stakes sectors like healthcare, law enforcement, and finance, where AI-driven decisions can have life-altering consequences. The DPDP Act does not mandate AI explainability or grant individuals the right to challenge AI-driven decisions. Unlike Article 22 of the GDPR, which gives individuals the right to contest automated decisions, India’s legal framework lacks strong provisions for algorithmic accountability, leaving affected individuals with limited legal recourse.

Legal Framework and Regulatory Challenges in India
India’s current legal landscape for AI and data protection remains fragmented. The DPDP Act, 2023, establishes fundamental data protection guidelines but does not regulate AI-specific concerns. Other relevant laws include:
Information Technology Act, 2000 (IT Act) – Governs cybersecurity and data protection but lacks AI-specific provisions.
Aadhaar Act, 2016 – Regulates biometric data collection but does not address AI-driven profiling.
National Data Governance Framework Policy, 2022 – Facilitates data sharing for AI research while ensuring security.
EU Artificial Intelligence Act (comparative perspective) – Aims to classify AI systems by risk level and enforce transparency requirements, something India has yet to implement.
India’s lack of a dedicated AI regulation leaves gaps in accountability, making it necessary for policymakers to introduce AI-specific guidelines for fairness, transparency, and accountability.

Accountability and Ethical Responsibility
A critical issue in AI-driven ADM is determining liability. When AI makes a flawed or harmful decision, such as rejecting a job application, denying a loan, or misdiagnosing a patient, who is responsible: the developer, the deploying organization, or the government? Currently, India does not have clear legal provisions assigning liability for AI-related harm. Some legal experts propose a “human-in-the-loop” model, where AI decisions are subject to human oversight, particularly in sensitive domains. Others advocate for AI liability frameworks, ensuring that AI developers and users bear legal responsibility for algorithmic errors and discriminatory outcomes.

Case Studies: AI and Legal Precedents in India and Beyond
Legal actions against AI systems are rising globally. In India, ANI vs OpenAI is a landmark case in which the Delhi High Court is reviewing copyright claims over AI-generated content. Internationally, Microsoft, GitHub, and OpenAI have faced lawsuits over unauthorized data usage in AI training models. While India has begun addressing AI-related disputes, it still lacks a robust legal framework to regulate AI-driven harm effectively. Strengthening regulatory policies is crucial to address AI’s evolving risks. A case was filed in U.S. courts against Microsoft, GitHub, and OpenAI for copyright violation. Cases have also been filed in the U.S. and Europe by artists, more than 8,500 authors, and media organizations over the alleged theft of their work.

Mitigating Risks: Steps Towards Responsible AI
To ensure AI is used responsibly in India, the following measures must be taken:
Enact AI-Specific Regulations – Introduce laws addressing AI accountability, fairness, and transparency.
Mandate Fairness Audits – Establish independent reviews to detect and mitigate algorithmic bias.
Enhance Explainability Requirements – Require AI systems to disclose decision-making logic, especially in critical sectors.
Align with Global Standards – Adopt best practices from the GDPR and the EU AI Act to ensure AI compliance.
Strengthen User Rights and Redressal Mechanisms – Provide legal channels for individuals to challenge AI decisions and seek redress.
Improve Data Protection Measures – Implement stricter encryption, anonymization, and security protocols for AI-generated data.
Increase Public Awareness – Educate individuals on their rights regarding AI-driven decisions and available legal protections.

FACT OF CONCERN
Around the world, court cases against AI are increasing day by day, especially in the U.S., Europe, and now even in India. The rise in litigation also heightens concerns about individual privacy. According to Google, 50% of bank scams and frauds are carried out using AI. Where there is ADM, there is no limit on controls, and hence frauds and scams increase.

AI Implications in Credit
AI has the potential to touch pretty much every aspect of the business of lending. Lending is an information-based business, and many of the tasks performed by humans can

Read More »
Blog
Sandhyadevi Kummetha

Navigating Inheritance Laws in India: Testamentary Succession

Inheritance in India refers to the legal transfer of a deceased person’s assets to their heirs, either through a will (testamentary succession) or according to succession laws in the absence of a will (intestate succession). The Indian Succession Act, 1925, governs inheritance laws for various religious communities, with specific provisions for each. The process involves drafting a valid will, appointing an executor, and, if necessary, obtaining probate to ensure a smooth transfer and to resolve any disputes among heirs.

Read More »
International Relations
Vaibhav Puri

Hunting Bin Laden: The Deadly Manhunt of Operation Neptune Spear

INTRODUCTION
Operation Neptune Spear was a pivotal military operation conducted by the United States on May 2, 2011, to eliminate Osama bin Laden, the leader of al-Qaeda and the mastermind behind the September 11, 2001, terrorist attacks. Executed by U.S. Navy SEAL Team 6 (DEVGRU) under the direction of the Central Intelligence Agency (CIA) and the U.S. Department of Defense, the raid took place in Abbottabad, Pakistan. The operation was the result of years of intelligence gathering and strategic planning, culminating in a high-risk mission that ultimately led to bin Laden’s death. This paper examines the intelligence efforts, strategic execution, and geopolitical implications of Operation Neptune Spear, assessing its impact on U.S. national security and counterterrorism policies.

INTELLIGENCE AND PLANNING
Shortly after the 9/11 terrorist attacks on the United States, the CIA began collecting information on key individuals connected to or providing support to bin Laden.

THE FIRST CLUE
Shortly after 9/11, the CIA began tracking individuals linked to bin Laden. A major early breakthrough came from a piece of luggage belonging to Mohamed Atta, the lead hijacker. The bag contained documents, hijacker instructions, and flight training manuals, confirming al-Qaeda’s involvement and bin Laden’s role. Intelligence efforts continued, with a CIA operative, Jalal, identifying bin Laden’s voice in transmissions from the Tora Bora Mountains, proving his continued influence. However, bin Laden evaded capture and resurfaced in Pakistan. (Washington Post), (CIA), (PBS)

A MISTAKE
Bin Laden relied on trusted couriers to maintain communication with al-Qaeda. One, Ibrahim, made a fatal error on August 27, 2010, when he used a mobile phone in Peshawar, a city under CIA surveillance. This allowed the agency to track him to a suspicious compound in Abbottabad, which exhibited unusual security measures. The compound’s high walls, lack of digital communication, and the residents’ habit of burning trash pointed to the presence of a high-value target. Surveillance identified a mysterious tall man, “The Pacer,” whose physical traits matched bin Laden’s.

GREAT DISCOVERY
Once the CIA identified Ibrahim’s location, it conducted further surveillance to assess the compound. The facility was situated in a highly secured area of Abbottabad, close to the Pakistan Military Academy. Several key factors indicated that the compound housed a high-value individual. Unlike other homes in the area, the compound had no telephone or internet connections, an unusual measure suggesting the need for secrecy. The residents burned their trash instead of disposing of it through the usual collection system, minimizing external exposure. A mysterious tall man, who never left the premises, was occasionally seen walking in the courtyard; analysts referred to him as “The Pacer” due to his habitual pacing back and forth, and his physical characteristics closely resembled those of bin Laden. After gathering substantial evidence, the CIA presented its findings to top U.S. officials, including President Barack Obama. While the intelligence was not 100% certain, the assessment strongly suggested that bin Laden was hiding in the Abbottabad compound. (bookshelf)

NAIL IN THE COFFIN
To further verify bin Laden’s presence in the compound, the CIA enlisted the help of Dr. Shakil Afridi, a Pakistani physician. Dr. Afridi was tasked with running a fake vaccination campaign in Abbottabad under the guise of administering hepatitis B vaccines. The objective of this covert operation was to collect DNA samples from individuals residing in the compound to confirm bin Laden’s identity. Dr. Afridi and his medical team visited the surrounding areas and attempted to gain access to the compound by offering free vaccinations. While the team was unable to directly obtain DNA from bin Laden or his immediate family, their efforts provided valuable intelligence on the residents and their movements. This reinforced the CIA’s confidence that bin Laden was indeed hiding inside the compound. (BBC)

EXECUTION OF THE MISSION
On the night of May 1, 2011, two stealth-modified Black Hawk helicopters carrying 24 members of SEAL Team 6 departed from a U.S. base in Afghanistan and infiltrated Pakistani airspace undetected. Upon arrival at the compound, one of the helicopters experienced mechanical issues and crash-landed, though no personnel were injured. The SEALs quickly adjusted their strategy and proceeded with the mission. The team breached the compound and engaged in a brief firefight with bin Laden’s guards. Moving through the building, they encountered and neutralized several occupants before reaching the top floor, where Osama bin Laden was located. Bin Laden was shot and killed after resisting capture. His body was positively identified through facial recognition and DNA analysis. The SEALs collected valuable intelligence materials before exfiltrating the site. Because of the compromised helicopter, a backup aircraft was called in, and the damaged helicopter was destroyed to prevent its technology from falling into foreign hands. Within 40 minutes of landing, the SEAL team completed the operation and returned to Afghanistan. (Caravan)

LEGAL AND ETHICAL CONSIDERATIONS IN THE HUNT FOR OSAMA BIN LADEN
Legal Considerations Under U.S. Law: In the aftermath of the September 11 attacks, the U.S. Congress enacted the Authorization for Use of Military Force Against Terrorists (AUMF) in 2001. This legislation empowered the President to employ “necessary and appropriate force” against entities responsible for the attacks. The Obama administration cited the AUMF as a legal basis for the operation against bin Laden. John Bellinger III, former legal adviser to the U.S. State Department, asserted that the operation was a legitimate military action, stating that the assassination prohibition does not apply to killings in self-defence or during armed conflict.
Under International Law: The incursion into Pakistani territory without prior consent sparked debates about sovereignty violations. Pakistan’s Prime Minister, Yousaf Raza Gilani, emphasized the nation’s disapproval of such unilateral actions, highlighting concerns over sovereignty and adherence to international law. Conversely, U.S. Attorney General Eric Holder defended the operation as an act of national self-defence, aligning it with the country’s inherent right to protect itself under international law. (Wikipedia) Scholars have also scrutinized the operation’s legality under international humanitarian law. Some argue that the absence of an active armed conflict between the U.S. and al-Qaeda at the time challenges the justification of bin

Read More »