Opsio - Cloud and AI Solutions

AI Governance for India: DPDPA & EU AI Act

Reviewed by Opsio Engineering Team
Praveena Shenoy

Country Manager, India

AI, Manufacturing, DevOps, and Managed Services. 17+ years across Manufacturing, E-commerce, Retail, NBFC & Banking


Indian enterprises face a dual regulatory challenge in 2026: domestic compliance with DPDPA 2023 and, for those with European operations or exports, the EU AI Act, which began phased enforcement in 2025. These are not equivalent frameworks. DPDPA is a personal data protection law that affects AI systems when they process personal data. The EU AI Act is a risk-based AI regulation that classifies AI systems by risk level and imposes requirements specific to AI, regardless of whether personal data is involved (MeitY, 2023; EU AI Act, 2025). Understanding both frameworks is essential for Indian enterprises that want to deploy AI confidently across domestic and international markets.

Key Takeaways

  • DPDPA 2023 applies to all AI systems processing personal data of Indian citizens, regardless of where the processing occurs.
  • The EU AI Act classifies AI systems into four risk tiers: Unacceptable, High, Limited, and Minimal risk.
  • Indian IT exporters whose AI products are used in Europe must comply with EU AI Act requirements for their risk tier.
  • High-risk AI systems under the EU AI Act require conformity assessment, technical documentation, and human oversight.
  • A combined DPDPA-EU AI Act compliance framework reduces duplication and is the recommended approach for Indian enterprises with dual exposure.

What Does DPDPA 2023 Require for AI Systems?

The Digital Personal Data Protection Act, 2023 is India's first comprehensive personal data protection law. Its core requirements for AI systems derive from six principles (MeitY, 2023):

  • Lawful processing: AI systems must process personal data only with a valid legal basis, typically consent or a legitimate use listed in the Act.
  • Purpose limitation: data collected for one purpose cannot be used to train or operate an AI system for a different purpose without fresh consent.
  • Data minimisation: AI systems must use only the personal data strictly necessary for the stated purpose.
  • Data accuracy: AI training datasets must be kept accurate, and personal data used for AI must be corrected when it is wrong.
  • Storage limitation: personal data must not be retained longer than necessary.
  • Security safeguards: AI systems processing personal data must implement appropriate technical and organisational security measures.
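The data minimisation principle translates directly into engineering practice: fields that are not necessary for the stated purpose should never reach an AI pipeline. A minimal sketch of a per-purpose allow-list filter (the purpose names and field names are illustrative, not drawn from the Act):

```python
# Fields deemed strictly necessary per processing purpose.
# This mapping is a hypothetical example, not a statutory list.
NECESSARY_FIELDS = {
    "credit-scoring": {"income", "repayment_history", "existing_debt"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields necessary for the stated processing purpose."""
    allowed = NECESSARY_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"income": 50000, "repayment_history": "good",
       "religion": "x", "existing_debt": 12000}
slim = minimise(raw, "credit-scoring")
# "religion" is dropped: it is not necessary for credit scoring
```

Enforcing the allow-list at the pipeline boundary, rather than relying on downstream discipline, makes minimisation auditable: the filter itself is the evidence.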

DPDPA also grants data subjects rights that AI systems must support: the right to information (knowing when AI is used to make decisions about you), the right to correction and erasure (data used in AI training and models must be correctable or erasable), and the right to grievance redressal (a mechanism to challenge AI-generated decisions that affect the individual). These rights have direct technical implications for how AI systems are designed and operated.
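The erasure right is the most technically demanding of these: an erasure request must propagate into training-data inventories, not just production databases. A hedged sketch, assuming a hypothetical inventory that maps datasets to the data principals they contain:

```python
# Hypothetical training-data inventory: dataset -> data principal IDs.
# Structure and IDs are illustrative.
training_inventory = {
    "dataset-credit-v3": ["dp-001", "dp-002", "dp-003"],
    "dataset-hr-v1": ["dp-002", "dp-004"],
}

def erase_principal(principal_id: str) -> list:
    """Remove a data principal from every training dataset and return
    the datasets that now need retraining or re-validation."""
    affected = []
    for dataset, principals in training_inventory.items():
        if principal_id in principals:
            principals.remove(principal_id)
            affected.append(dataset)
    return affected

erase_principal("dp-002")  # affects both datasets above
```

The return value matters operationally: each affected dataset flags a model that may need retraining or re-validation before the erasure can be considered complete.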


What Are the Practical DPDPA Compliance Requirements for AI?

Practical DPDPA compliance for AI systems requires six technical and organisational measures. First, consent management infrastructure: a system that captures, records, and can verify consent for each specific AI processing purpose. Second, a Data Protection Officer (DPO): organisations classified as Significant Data Fiduciaries under DPDPA must appoint a DPO; others should consider designating a DPO anyway given the compliance burden. Third, Data Protection Impact Assessments (DPIA): required for AI systems that process personal data at scale or that make automated decisions significantly affecting individuals. Fourth, personal data inventory: a complete record of where personal data flows in AI training and inference pipelines. Fifth, data subject rights management: technical mechanisms to respond to correction and erasure requests that include data in AI training sets. Sixth, cross-border transfer controls: if personal data is sent to non-India AI infrastructure (such as cloud LLM APIs), adequate contractual protections must be in place (MeitY, 2023).
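The first measure, consent management, can be reduced to a simple invariant: consent is keyed to a (principal, purpose) pair, never to a principal alone. A minimal sketch of such a ledger (class and method names are ours, purely illustrative):

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Captures, records, and verifies consent per (principal, purpose)."""
    def __init__(self):
        self._grants = {}  # (principal_id, purpose) -> grant timestamp

    def record(self, principal_id: str, purpose: str) -> None:
        self._grants[(principal_id, purpose)] = datetime.now(timezone.utc)

    def withdraw(self, principal_id: str, purpose: str) -> None:
        self._grants.pop((principal_id, purpose), None)

    def is_valid(self, principal_id: str, purpose: str) -> bool:
        # Consent is purpose-specific: a grant for one purpose never
        # authorises processing for another purpose.
        return (principal_id, purpose) in self._grants

ledger = ConsentLedger()
ledger.record("dp-001", "chatbot-personalisation")
ledger.is_valid("dp-001", "chatbot-personalisation")  # True
ledger.is_valid("dp-001", "model-training")           # False
```

A production system would add verifiable audit trails and notice records; the point of the sketch is that purpose limitation falls out of the key design, not out of policy documents.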

Who Is a Significant Data Fiduciary Under DPDPA?

The DPDPA introduces the concept of Significant Data Fiduciaries (SDFs), entities subject to enhanced obligations. The Central Government will designate SDFs based on criteria including volume and sensitivity of personal data processed, national security implications, risk to data principals, and impact on sovereignty. Large Indian digital platforms, major e-commerce companies, and significant fintech players are likely to be designated as SDFs. SDFs face additional requirements: periodic audits by independent auditors, algorithm transparency, and Data Auditor appointment. Indian enterprises in sectors likely to attract SDF designation should design AI systems that can meet these enhanced requirements from the start.


What Is the EU AI Act and Why Does It Matter for Indian Enterprises?

The EU AI Act is the world's first comprehensive AI regulation. It entered into force in August 2024, with prohibited-practice provisions applying from February 2025, general-purpose AI obligations from August 2025, and most high-risk system requirements phasing in from August 2026 (EU AI Act, 2025). It applies to any AI system provider, deployer, importer, or distributor whose system is placed on the EU market or used in the EU. Indian IT companies that export AI-powered products or services to EU customers, Indian IT majors with European operations, and Indian technology companies selling software to EU enterprises are all within scope.

The EU AI Act uses a four-tier risk classification:

  • Unacceptable risk: AI systems prohibited entirely, including real-time remote biometric identification in public spaces (with narrow exceptions), social scoring by governments, and manipulation of vulnerable groups.
  • High risk: AI systems in eight defined areas including biometrics, critical infrastructure, education, employment, essential services, law enforcement, border control, and the administration of justice. These require conformity assessment, technical documentation, and human oversight before market placement.
  • Limited risk: AI systems with transparency obligations (users must be informed they are interacting with AI).
  • Minimal risk: all other AI systems, subject to voluntary codes of conduct.

Which Indian Enterprise AI Systems Are High-Risk Under the EU AI Act?

Indian enterprises must assess which of their AI systems fall into the EU AI Act's high-risk categories when those systems are used by EU customers or in EU operations. High-risk Indian enterprise AI applications include: CV screening and HR management AI used in European hiring processes; credit scoring AI used for EU consumers; AI systems used in customer onboarding in EU-regulated industries (banking, insurance); AI-driven safety monitoring in European manufacturing facilities; and predictive AI used in EU critical infrastructure. Each of these requires technical documentation, conformity assessment, and registration in the EU AI database before deployment (EU AI Act, 2025).

Indian IT services companies that build and operate AI systems for European clients bear EU AI Act obligations as "providers" if they design and market the system, and as "deployers" if they operate a system designed by others in a European context. The distinction matters for which compliance obligations apply and who bears primary responsibility.

In our EU AI Act compliance work with Indian IT exporters, the most common misunderstanding is believing that the EU AI Act applies only to AI systems developed in Europe. It applies based on where the system is used or placed on the market, not where it is developed. An AI HR screening tool developed in Bangalore and deployed by a European company for its hiring process is within EU AI Act scope and requires full high-risk system compliance.

How Do You Build a Combined DPDPA and EU AI Act Compliance Framework?

Building separate compliance frameworks for DPDPA and EU AI Act creates unnecessary duplication and cost. A combined framework leverages the significant overlap between the two regimes. Both require documentation of AI systems and their data inputs. Both require impact assessments before deployment. Both require human oversight mechanisms for high-stakes AI decisions. Both require security safeguards and incident response. A combined AI governance framework, structured around these shared requirements with DPDPA-specific additions (consent management, Indian data subject rights) and EU AI Act-specific additions (conformity assessment, CE marking for high-risk systems), is more efficient and more coherent than maintaining parallel programmes (NASSCOM, 2025).
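The overlap described above can be made concrete with plain set arithmetic. The requirement labels below are our own shorthand for the shared controls, not statutory terms:

```python
# Illustrative requirement sets distilled from the overlap described
# in the text (labels are ours, not statutory language).
DPDPA = {"system documentation", "impact assessment", "human oversight",
         "security safeguards", "incident response",
         "consent management", "data subject rights"}
EU_AI_ACT = {"system documentation", "impact assessment", "human oversight",
             "security safeguards", "incident response",
             "conformity assessment", "CE marking"}

shared = DPDPA & EU_AI_ACT        # build these controls once, reuse for both
dpdpa_only = DPDPA - EU_AI_ACT    # India-specific additions
eu_only = EU_AI_ACT - DPDPA       # EU-specific additions
```

In this toy mapping five of seven controls per regime are shared, which is the intuition behind running one combined programme with two thin regime-specific layers rather than two parallel programmes.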

[CHART: DPDPA vs EU AI Act requirement overlap mapping - shared requirements, DPDPA-only requirements, EU AI Act-only requirements - Source: Opsio 2026]

What Are the Penalties for Non-Compliance?

DPDPA penalties are significant. Failure to implement adequate data security: up to INR 250 crore per instance, the highest tier in the Act's Schedule. Failure to honour data subject rights: up to INR 50 crore. Failure to report data breaches: up to INR 200 crore. EU AI Act penalties are even larger: violations of prohibited AI provisions can incur fines of EUR 35 million or 7% of global annual turnover (whichever is higher). High-risk AI non-compliance: EUR 15 million or 3% of global annual turnover. For mid-size Indian IT companies with significant EU revenue, the EU AI Act penalty risk is existential if ignored (MeitY, 2023; EU AI Act, 2025).
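The "whichever is higher" structure of the EU AI Act fines is worth working through, since the percentage term dominates for large firms. A one-function sketch using the figures cited above:

```python
def eu_ai_act_max_fine(global_turnover_eur: float,
                       fixed_eur: float = 35_000_000,
                       pct: float = 0.07) -> float:
    """Maximum fine for prohibited-practice violations: the fixed amount
    or the percentage of global annual turnover, whichever is higher."""
    return max(fixed_eur, pct * global_turnover_eur)

eu_ai_act_max_fine(200_000_000)     # floor applies: EUR 35M (7% = 14M)
eu_ai_act_max_fine(1_000_000_000)   # 7% = EUR 70M exceeds the floor
```

The crossover sits at EUR 500 million turnover; above that, exposure scales linearly with revenue, which is why the text calls the risk existential for exporters with significant EU revenue.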


Citation Capsule: AI Governance India - DPDPA and EU AI Act

DPDPA 2023 applies to all AI systems processing personal data of Indian citizens globally. The EU AI Act (phased enforcement from 2025) applies to AI systems used in the EU, including those built by Indian IT exporters. DPDPA penalties reach INR 250 crore per instance; EU AI Act penalties reach EUR 35 million or 7% of global revenue. Indian IT exporters with EU clients bear EU AI Act provider obligations for AI systems they design and market. A combined DPDPA-EU AI Act framework reduces compliance duplication by 30-40% (MeitY, 2023).

Frequently Asked Questions

Does DPDPA apply to AI systems that process data of Indian citizens living abroad?

DPDPA applies to the processing of personal data of data principals in India. Whether it applies extraterritorially to Indian citizens residing abroad depends on how "data principals in India" is interpreted in the Rules, which are pending notification as of 2026. The safest approach for Indian enterprises with global operations is to treat DPDPA as applicable to all Indian citizen data, regardless of processing location, until the Rules provide clear extraterritorial guidance (MeitY, 2023).

Do Indian startups need to comply with the EU AI Act?

Indian startups that export AI-powered products or services to EU customers, or that have EU investors who require EU regulatory compliance, must comply with EU AI Act requirements relevant to their AI systems' risk tier. Startups with fewer than 10 employees and annual turnover below EUR 2 million may benefit from simplified conformity assessment procedures for some high-risk systems. However, there is no blanket SME exemption from the EU AI Act's core requirements. Indian startups targeting European markets should include EU AI Act compliance in their product development roadmap from the outset.

What is the timeline for DPDPA Rules notification in India?

The DPDPA Rules have been under development since the Act received Presidential assent in August 2023. Draft rules were circulated for public consultation in early 2025. As of April 2026, the final Rules have not yet been notified. Most Indian enterprises are proceeding with compliance implementation based on the Act's requirements and draft Rules provisions, recognising that final Rules notification and enforcement will follow. Early implementation is prudent: enforcement typically begins within 6-12 months of Rules notification.

How do I classify my AI systems against EU AI Act risk tiers?

Classification starts with checking whether your AI system falls into any of the prohibited practices (Article 5 of the EU AI Act). If not prohibited, check the Annex III list of high-risk system categories. If your system is listed in Annex III and is used for consequential decisions (hiring, credit, access to services), it is high-risk. Systems used for customer service, marketing, internal productivity, or content generation are typically Limited or Minimal risk. When in doubt, apply the precautionary principle and treat the system as higher risk; the conformity assessment process will confirm or revise the classification.
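The decision order above (prohibited first, then high-risk, then transparency, else minimal) can be sketched as a small triage function. The category memberships below are illustrative shorthand, not the legal text; real classification requires reading the Act against the specific use case:

```python
# Toy tier membership, illustrative only -- not the statutory lists.
PROHIBITED = {"social scoring", "realtime public biometric id"}
HIGH_RISK = {"cv screening", "credit scoring", "border control"}
TRANSPARENCY = {"customer chatbot", "content generation"}

def classify(use_case: str) -> str:
    """Walk the tiers in the order described above."""
    if use_case in PROHIBITED:
        return "unacceptable"   # deployment banned outright
    if use_case in HIGH_RISK:
        return "high"           # conformity assessment, docs, oversight
    if use_case in TRANSPARENCY:
        return "limited"        # must disclose users are interacting with AI
    return "minimal"            # voluntary codes of conduct

classify("cv screening")    # "high"
classify("spam filtering")  # "minimal"
```

The ordering matters: a use case that appears in an earlier tier is never re-examined against a later one, which mirrors the precautionary principle the answer recommends.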

Conclusion

AI governance for Indian enterprises in 2026 is not a single regulatory challenge. It is the intersection of DPDPA's citizen data protection requirements, the EU AI Act's risk-based AI regulation for European market access, and RBI or sector-specific guidelines for regulated industries. Navigating this intersection requires a structured, combined compliance framework, not three separate workstreams.

Indian enterprises that invest in AI governance architecture in 2026 are not just managing regulatory risk. They are building the trust infrastructure that will determine which AI systems earn adoption from customers, employees, and regulators over the next decade. That trust infrastructure is as important as the technical capability of the AI systems themselves.

For AI governance consulting support, visit our AI consulting services or read our guide on Responsible AI for Indian Businesses.

For hands-on delivery in India, see our cloud managed services.

About the Author

Praveena Shenoy

Country Manager, India at Opsio

AI, Manufacturing, DevOps, and Managed Services. 17+ years across Manufacturing, E-commerce, Retail, NBFC & Banking

Editorial standards: This article was written by a certified practitioner and peer-reviewed by our engineering team. We update content quarterly to ensure technical accuracy. Opsio maintains editorial independence — we recommend solutions based on technical merit, not commercial relationships.