
AI and Law: Global Regulatory Intersections and India's Evolving Framework
This article examines the intersection of artificial intelligence and law, comparing the EU's risk-based AI Act, the United States' executive-driven framework and India's evolving landscape under the DPDPA, the AI governance guidelines and DPIIT's generative AI copyright proposals, and highlights the opportunities and gaps these comparisons reveal for Indian practitioners.
Artificial intelligence is forcing legal systems to rethink fundamental concepts of personhood, liability, privacy, intellectual property and due process, and different jurisdictions have begun to respond with strikingly different regulatory philosophies that offer useful comparators for India’s evolving, still largely “soft law” approach to AI governance. The European Union is building a dense, ex ante, risk‑based regulatory code (the EU AI Act); the United States is moving through executive orders, sectoral regulators and federal pre‑emption debates; and India is relying on the Digital Personal Data Protection Act, 2023 (DPDPA), AI governance guidelines and a DPIIT working paper on generative AI and copyright rather than a dedicated AI statute. The result is a patchwork that Indian lawyers must navigate across domains such as data protection, consumer law, IP and tort.
Conceptual foundations of AI regulation
Across jurisdictions, the starting point is the recognition that AI is not a single technology but a family of statistical and machine‑learning techniques deployed across sectors, which makes horizontal, technology‑neutral regulation difficult yet increasingly unavoidable. The common normative anchors are protection of fundamental or human rights, preservation of innovation and competition, and allocation of responsibility when autonomous or semi‑autonomous systems cause harm, but each system balances these differently: the EU leans heavily on rights and precaution, the US on innovation and national competitiveness, and India on digital public infrastructure, inclusion and “responsible AI” rhetoric within a growth‑oriented industrial policy.
The EU’s risk‑based AI Act model
The EU Artificial Intelligence Act is the world’s first comprehensive, cross‑sectoral statute devoted entirely to AI, structured around a four‑tier “risk‑based” classification of AI systems: unacceptable risk (outright prohibited), high risk (subject to stringent obligations), limited risk (transparency duties) and minimal or no risk (largely unregulated). Unacceptable uses include AI for social scoring by public authorities, certain real‑time remote biometric identification in public spaces, manipulative “subliminal” techniques and exploitative systems targeting vulnerable persons, while high‑risk systems include AI used in critical infrastructure, education, employment, essential private services, law enforcement, migration control and the administration of justice.
For high‑risk AI, the Act mandates a suite of ex ante compliance measures: risk‑management and quality‑management systems, robust data governance with documented bias assessments, detailed technical documentation and logging, human oversight mechanisms and conformity assessments with CE‑marking before placing the system on the EU market. General‑purpose AI (GPAI) and “GPAI models with systemic risk” face additional obligations around model documentation, incident reporting, cybersecurity and evaluation benchmarks, reflecting concern over large foundation models underpinning generative AI.
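To make these documentation and logging duties more concrete, the sketch below shows one way a deployer might structure an auditable record for a high‑risk system. It is purely illustrative: the class and field names are the author's assumptions, not terms defined in the Act, and real conformity documentation would be far more extensive.

```python
# Illustrative only: a minimal sketch of record-keeping for a high-risk AI
# system (risk management file, decision logging, human oversight). All
# names are the author's assumptions, not terminology from the EU AI Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionLogEntry:
    """One logged automated decision, retained for post-hoc audit."""
    timestamp: str
    model_version: str
    input_summary: str          # summarised/redacted to respect data minimisation
    output: str
    human_reviewer: str | None  # populated when human oversight intervenes


@dataclass
class HighRiskSystemFile:
    """Technical documentation bundle assembled before conformity assessment."""
    system_name: str
    intended_purpose: str
    risk_assessment_ref: str    # pointer to the risk-management file
    bias_assessment_ref: str    # pointer to documented data-governance checks
    conformity_assessment_date: str
    logs: list[DecisionLogEntry] = field(default_factory=list)

    def log_decision(self, model_version: str, input_summary: str,
                     output: str, human_reviewer: str | None = None) -> None:
        """Append an auditable entry for every automated decision."""
        self.logs.append(DecisionLogEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            model_version=model_version,
            input_summary=input_summary,
            output=output,
            human_reviewer=human_reviewer,
        ))
```

The design point is that logging, versioning and human‑oversight fields are built in from the start, which is precisely what makes ex ante compliance regimes like the AI Act auditable after deployment.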
EU liability and IP responses to AI
In parallel with the AI Act, the EU proposed an AI Liability Directive (AILD) and adopted a revised Product Liability Directive (PLD) to address civil liability for AI‑related harm, knitting AI systems into both fault‑based and no‑fault liability regimes. The AILD proposal would have harmonised access to evidence and eased the burden of proof for claimants injured by certain AI systems, but the European Commission withdrew it in early 2025; the updated PLD, by contrast, extends strict product liability to software and digital products, allowing compensation for material damage, including data corruption, when an AI system is defective.
On intellectual property, the EU has not adopted a dedicated AI‑copyright statute, but debates on text and data mining (TDM) exceptions, authorship of AI‑assisted works and copyright in training datasets are active. The EU’s copyright acquis already contains nuanced TDM provisions that, together with the AI Act’s transparency duties, are slowly shaping an ecosystem in which rightsholders, AI developers and downstream users must navigate overlapping disclosure and licensing obligations. For India, the EU model highlights how sector‑agnostic AI rules can be integrated with existing product liability and IP regimes, offering a template for a future Indian AI statute that could sit alongside, rather than displace, the DPDPA and the Copyright Act.
The US: executive action, regulators and federal pre‑emption
Unlike the EU, the United States has not enacted a single comprehensive AI statute, relying instead on presidential executive orders, agency guidance and an emerging debate about federal pre‑emption of disparate state AI laws. A 2023 federal Executive Order on “safe, secure, and trustworthy” AI tasked agencies such as the Department of Commerce, NIST, the FTC and sectoral regulators with developing standards, risk‑management frameworks and enforcement strategies, including reporting obligations for developers of high‑risk foundation models and guidance on copyright and patent issues in AI‑related innovation; that order was revoked in January 2025 in favour of a more explicitly pro‑innovation executive policy, underscoring how quickly the US framework can shift.
More recently, an Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence”, announced in December 2025, seeks to limit or override state AI regulations perceived as conflicting with a national pro‑innovation policy: it directs the Attorney General to establish an AI Litigation Task Force to challenge state AI laws that unduly burden interstate commerce or conflict with federal policy, and links federal funding to state compliance. This reflects a strong concern that a patchwork of state‑level AI rules, including on algorithmic accountability, facial recognition and automated decision‑making, could fragment the US market; for Indian observers, it offers a cautionary tale for Centre‑State coordination if India ever sees divergent AI rules at Union and State levels.
In the IP sphere, the same executive framework instructs the USPTO and the US Copyright Office to issue guidance on patent eligibility for AI‑related inventions and on copyright issues in AI, including authorship of AI‑assisted works and the legal status of training on copyrighted content, and requires the two bodies jointly to recommend further executive or legislative action to the President. This sustained, agency‑driven process contrasts with India’s more ad hoc, committee‑based approach, where a DPIIT working paper and general AI‑governance guidelines shape debate without yet binding administrative practice in the same systematic way.
India’s present AI legal landscape
India currently has no dedicated, comprehensive AI statute; instead, AI systems are governed indirectly through horizontal laws such as the Digital Personal Data Protection Act, 2023, sectoral regulations (for instance, in financial services, health or telecom), constitutional rights jurisprudence and general civil and criminal liability doctrines. The DPDPA, enacted against the backdrop of the Supreme Court’s recognition of privacy as a fundamental right, is especially important for AI because it regulates collection and processing of digital personal data, provides limited grounds for lawful processing, grants data principal rights, and permits certain exemptions for research and publicly available data that can affect AI training practices.
Analysts note that the DPDPA’s consent‑centric regime, coupled with future Data Protection Impact Assessment (DPIA)‑style obligations for “Significant Data Fiduciaries”, will significantly influence how AI developers collect, label and repurpose personal data for model training and deployment. At the same time, exemptions for publicly available data and for research, along with the possibility of lighter obligations for classes of data fiduciaries such as startups, create a calibrated, if still uncertain, environment for Indian AI startups that must also watch for forthcoming subordinate legislation (DPDP Rules) and sector‑specific guidelines.
India’s AI governance guidelines and DPIIT working paper
In November 2025, India released AI Governance Guidelines, announced through the Press Information Bureau, building on earlier reports by the Principal Scientific Adviser and emphasising principles such as safety, accountability, transparency and inclusivity, while explicitly linking AI policy to India’s digital public infrastructure model. The guidelines acknowledge generative AI as a general‑purpose technology with both economic promise and systemic risks, encouraging risk‑based oversight, testing for alignment with constitutional rights and promoting mechanisms such as algorithmic audits, without yet creating hard, enforceable statutory obligations analogous to the EU AI Act.
At the intersection of AI and copyright, the Department for Promotion of Industry and Internal Trade (DPIIT) in December 2025 published the first part of a Working Paper on Generative AI and Copyright, issued by an eight‑member committee constituted in April 2025. The paper foregrounds two core issues: whether training large AI models on copyrighted content without licence infringes reproduction rights, and to what extent AI‑generated outputs should be protected as copyright works or treated differently, noting Indian developments such as the ANI v. OpenAI lawsuit alleging infringement through unlicensed use of news content for model training.
The proposed “hybrid model” for AI training in India
Significantly, the DPIIT committee recommends a “Hybrid Model” that would combine elements of India’s existing statutory licensing regime with a structured framework permitting broad access to protected content for AI training, paired with fair compensation to rightsholders. The Working Paper cautions against overly broad training exceptions that would disproportionately benefit large AI developers and erode incentives for human creators, instead proposing permission‑free access subject to mandatory revenue‑sharing or similar remuneration mechanisms that could lower litigation risk while maintaining creative ecosystems.
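The Working Paper does not specify rates or allocation formulas, so purely as an illustration of what “mandatory revenue‑sharing” could mean operationally, the sketch below allocates a hypothetical remuneration pool pro rata by reported usage. Every number and name here is invented for demonstration; nothing of the sort appears in the DPIIT proposal.

```python
# Purely hypothetical arithmetic: the DPIIT Working Paper proposes no rates
# or formulas. This sketch only illustrates one pro-rata mechanism an
# access-plus-compensation ("hybrid") model could use.

def allocate_pool(pool: float, usage_shares: dict[str, int]) -> dict[str, float]:
    """Split a fixed remuneration pool pro rata by each rightsholder's
    share of content used in training (here, measured in tokens)."""
    total = sum(usage_shares.values())
    return {holder: pool * count / total for holder, count in usage_shares.items()}


# Invented figures: a developer sets aside a pool and reports how many
# tokens of each rightsholder's catalogue entered the training corpus.
payouts = allocate_pool(
    pool=10_000_000.0,  # e.g. INR 1 crore, chosen only for illustration
    usage_shares={"news_agency": 600_000, "publisher": 300_000, "archive": 100_000},
)
print(payouts)  # {'news_agency': 6000000.0, 'publisher': 3000000.0, 'archive': 1000000.0}
```

Even this toy example surfaces the hard design questions the paper leaves open: who measures usage, how the pool is sized, and how disputes over shares are resolved at scale.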
From the standpoint of comparative law, this hybrid Indian proposal sits somewhere between the EU’s cautious expansion of TDM exceptions under a rights‑centric copyright regime and the more flexible, case‑law‑driven US fair use doctrine, which has been invoked to defend large‑scale data and text mining in several non‑AI contexts. For Indian practitioners, the key challenge will be integrating any new AI‑specific copyright provisions with existing statutory licences (for broadcasting, cover versions and the like), and ensuring that collective management, rate‑setting and dispute resolution mechanisms can scale to the sheer volume and heterogeneity of AI training uses.
Data protection and AI: EU, US and India compared
The EU’s GDPR, and now the AI Act, create a dense web of obligations around automated decision‑making, profiling and data minimisation, with specific rights to explanation or meaningful information about automated decisions and the possibility of challenging such outcomes in certain contexts, all of which directly constrain AI deployment. The DPDPA, while less prescriptive than the GDPR, will still force AI deployers in India to clearly identify lawful bases for processing, implement notice and choice mechanisms and, for significant data fiduciaries, carry out DPIA‑like exercises that take into account the impact of AI‑driven processing on privacy and other rights.
In the US, privacy and data protection remain sectoral and state‑driven, with regimes like the California Consumer Privacy Act (CCPA) and health or financial privacy statutes; AI‑related governance there has come more from agencies like the FTC (on unfair or deceptive practices), CFPB (on credit decisions) and EEOC (on employment discrimination) than from a single privacy law akin to GDPR or DPDPA. For India, this divergence underscores that robust AI governance can emerge either from a centralised, omnibus data statute plus AI‑specific regulation (EU) or from a patchwork of sectoral powers and competition‑law enforcement (US), and that the future Indian model will likely need elements of both given the country’s federal structure and diverse sectoral regulators.
Liability, accountability and enforcement gaps in India
While the EU is moving towards harmonised AI‑specific liability rules, and the US benefits from well‑developed product liability, negligence and class‑action frameworks that can adapt to AI harms, India currently addresses AI‑related harm through general civil law doctrines such as negligence, breach of duty, product liability under consumer protection law, and public law remedies under writ jurisdiction. The absence of AI‑specific evidentiary presumptions or obligations around logging, documentation and audit trails can make it harder for Indian claimants to establish causation and defect in complex AI systems, particularly when decision‑making is opaque or dispersed across multiple actors in the AI supply chain.
Enforcement capacity also matters: the EU AI Act envisions designated national supervisory authorities, notified bodies for conformity assessments and a system of administrative fines that mirror GDPR‑style penalties, while US agencies are building AI enforcement expertise within existing staff and budgets. India’s AI governance guidelines contemplate oversight roles for existing ministries and regulators but do not yet create a specialised AI regulator; as AI becomes embedded in financial services, health, mobility and government decision‑making, questions about which regulator investigates AI failures and how remedies are coordinated will become more pressing in Indian public law and regulatory practice.
Strategic direction for India in light of global models
When India eventually codifies AI‑specific legislation, it will do so in a world where the EU has entrenched a rights‑driven, risk‑based model and the US has doubled down on innovation‑led, agency‑driven governance with federal pre‑emption of conflicting state rules, offering two powerful but contrasting templates. The existing Indian building blocks (the DPDPA, the AI governance guidelines and DPIIT’s hybrid model proposal for generative AI and copyright) suggest that India may pursue a middle path: leveraging its experience with digital public infrastructure and sectoral regulators, adopting risk‑based obligations for high‑impact systems, and designing IP and data frameworks that keep domestic creators and startups in the fold rather than locking in foreign AI platforms.
For Indian lawyers and policymakers, the intersection of AI and law is therefore not only a doctrinal problem about fitting AI into existing categories of personhood, liability and authorship, but also a constitutional and economic question about designing institutions, allocating regulatory power and building enforcement mechanisms in a way that reflects Indian values while remaining interoperable with EU and US frameworks. Over the next few years, litigation around AI‑enabled discrimination, content moderation, biometric surveillance, automated state decision‑making and large‑scale scraping for model training is likely to force Indian courts to articulate AI‑sensitive interpretations of privacy, equality, free speech and property rights even before Parliament enacts a dedicated AI law. That prospect makes comparative analysis of EU and US developments indispensable for everyday Indian practice at the intersection of AI and law.