Acts and Laws to Keep in Mind Before Implementing AI

16 September 2025

The regulatory environment for AI is dynamic and varies by jurisdiction. Enterprises must navigate a combination of existing laws that apply to AI and new AI-specific regulations. Here are the key legal frameworks to consider:

  • Data Protection and Privacy Laws: Virtually every country has some form of data protection law, and these directly impact AI projects that use personal data. In the EU, the General Data Protection Regulation (GDPR) is a benchmark law requiring lawful, transparent processing of personal data and granting rights to individuals (such as access, deletion, and objection). It also has provisions relevant to AI, such as the right not to be subject to purely automated decisions (made without human intervention) that have significant effects, unless certain conditions are met. The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), has set similar standards in the U.S. at the state level. In India, the Digital Personal Data Protection Act, 2023 (DPDP Act) has been enacted, reshaping India’s data protection landscape[4]. The DPDP Act emphasizes consent, data minimization, and accountability, and it carries hefty penalties (fines of up to ₹250 crore) for non-compliance. Companies implementing AI in India must ensure they comply with the DPDP Act - for example, if you’re using customer data to train an AI model, you need valid consent for that specific purpose (a consent-gating sketch follows this list). Other countries have their own laws (e.g., Brazil’s LGPD and Singapore’s PDPA), so multinational enterprises need a privacy compliance strategy that covers all of them. Non-compliance can result in enforcement actions, fines, and lawsuits - not to mention damage to brand trust. Therefore, before launching an AI system that processes personal data, involve your Data Protection Officer or legal counsel in a thorough check against these laws.
  • Sectoral Regulations and Guidelines: Many industries have regulators that are now actively looking at AI. For instance, financial regulators (such as the RBI in India, or the SEC and FINRA in the US) want to ensure AI in trading, lending, or advising doesn’t harm consumers or markets. In healthcare, using AI for diagnostics must comply with medical device regulations and patient confidentiality laws (e.g., HIPAA in the US, which mandates safeguards on health information). If your AI falls within an existing regulated activity, assume the same rules apply as if a human were doing it. For example, if an AI is making lending decisions, fair lending laws and credit reporting regulations still apply as if a loan officer were making the call. Regulators also issue AI-specific guidance: the U.S. FDA has proposed frameworks for AI in medical devices, and financial authorities may require explainability for AI-driven credit decisions. In India, sector regulators (such as SEBI for securities and IRDAI for insurance) have started discussing AI governance in their domains. Always check whether your industry regulator has issued guidance on AI, and factor it in. Compliance might involve algorithm audits, documentation, or even approvals (for example, clinical AI tools needing regulatory clearance).
  • Emerging AI-Specific Regulations: A number of jurisdictions are creating laws that deal specifically with AI. The most significant is the European Union’s AI Act, which entered into force in August 2024 and applies in phases, with most obligations taking effect by August 2026. The EU AI Act takes a risk-based approach, categorizing AI uses into risk levels: unacceptable risk (banned outright, e.g., social scoring systems), high risk (allowed, but with strict compliance requirements such as transparency, risk assessments, logging, and human oversight - for uses like AI in hiring or critical infrastructure), and lower risk (fewer obligations, though basic transparency is required for AI that interacts with people). If you operate in the EU, process EU residents’ data, or offer products there, this law could apply; its reach is extraterritorial in many cases. Similarly, some U.S. jurisdictions have started passing AI laws (e.g., New York City’s AI hiring bias audit requirement, although that is local). China has regulations on recommendation algorithms and deepfakes, and is shaping comprehensive AI regulation too. The point is: keep your ear to the ground. AI law is evolving fast, and it would be wise to assign someone (or a team) to monitor global AI regulatory developments. Being proactive can save you from scrambling later. For instance, if you foresee that an AI system you’re building would be deemed “high-risk” under the EU AI Act, you can start aligning it to those standards now (such as documenting the training data and building in explainability), rather than retrofitting under time pressure later (an internal risk-tier tracking sketch follows this list).
  • Intellectual Property Law: As discussed earlier, IP law hasn’t fully caught up with AI, but existing frameworks still apply. Copyright law is pertinent if your AI processes copyrighted works or if you are generating content. For example, training an AI on copyrighted text without permission is legally risky (some argue it is fair use in the US, but the question is unsettled). Be aware of ongoing legal cases in this space; they could set precedents (recent lawsuits against AI companies for using authors’ works in training data are worth noting). Patent law currently does not allow AI to be named as an inventor in most jurisdictions, so any inventions derived via AI need named human inventors to be patentable. Trade secret law can protect your data or algorithms if you keep them confidential - and conversely, if you’re using a vendor’s AI, understand that their model may be proprietary (a trade secret), so you might not get full insight into its workings, which affects how well you can explain its decisions. Additionally, if your AI creates something like a design or music, be mindful that without a human creator it may fall into the public domain by default in some regions - adjust your strategy accordingly (perhaps keep a human in the loop to claim authorship where needed). Licensing is also key: use proper licenses for the software libraries and AI models you incorporate. Open-source AI models often carry licenses that dictate how they can be used (some restrict commercial use), and compliance with those licenses is important to avoid legal complications. Engaging IP lawyers who understand AI is advisable whenever you venture into new territory, such as training on third-party content or commercializing AI-generated products.
  • Laws on Liability and Accountability: When AI causes harm, who is liable? This is a question legislators and courts are grappling with. As of now, existing legal concepts typically apply - the company deploying the AI will likely be held responsible under product liability or negligence theories if something goes wrong. For example, if an AI-driven service gives flawed results that cause a client loss, the client can sue your company just as if a human had provided the flawed service - AI doesn’t shield you from responsibility. In the EU, liability rules are being updated to explicitly cover AI (the revised Product Liability Directive, alongside the proposed AI Liability Directive). While those develop, assess the worst-case scenarios: if your AI makes a serious mistake, are you prepared to respond, legally and financially? Professional indemnity or product liability insurance might need to be updated to cover AI-related incidents. Internally, maintain records of how AI decisions are made so you can defend your practices if challenged (a minimal decision-logging sketch follows this list) - this ties back to governance. Also be mindful of consumer protection and advertising laws - if you use AI for marketing claims or personalization, truth-in-advertising rules still apply (AI cannot be used to make unfair or deceptive claims). If AI is used in pricing (such as dynamic pricing algorithms), ensure it doesn’t unintentionally lead to collusion or antitrust issues (regulators have warned about that scenario too). In summary, while there isn’t a single “AI liability law” today, implementing AI doesn’t put you outside the reach of the law; your company will be on the hook for the AI’s actions, so treat AI outputs with the same diligence as human work product from a compliance perspective.
  • Indian IT Act and Rules: In the Indian context, apart from the DPDP Act mentioned above, the Information Technology Act, 2000 and its rules form the backbone of cyber law. The IT Act addresses offenses such as unauthorized access and data theft, which are relevant if AI systems are attacked or misused. It also includes the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, which drew attention recently with respect to social media and content moderation. Why does this matter for AI? Because the Ministry of Electronics and IT (MeitY) issued advisories in 2024 stressing that AI tools and intermediaries must ensure AI-generated content complies with the same content restrictions as user-generated content. They specifically noted that AI should not produce unlawful content or hate speech, and should not compromise election integrity through misinformation. This indicates that even absent a specific AI law, existing laws (such as those banning certain online content) are being extended to AI outputs. If your enterprise plans to deploy generative AI (for example, an AI social media bot or a content creation tool), you must ensure it doesn’t produce content that violates Indian laws (obscenity, defamation, etc.), because regulators could hold you accountable for that output. Additionally, India’s central bank (RBI) and others have been exploring guidelines - keep an eye on sector-specific developments here as well.
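
To make the consent point above concrete, here is a minimal Python sketch of gating training data on recorded, purpose-specific consent. The CustomerRecord structure, its field names, and the "model_training" purpose label are illustrative assumptions, not terms drawn from the DPDP Act or GDPR:

```python
from dataclasses import dataclass, field

# Hypothetical record layout - field names are illustrative. Both the DPDP
# Act and GDPR tie processing to a specific purpose, so consent is tracked
# per purpose rather than as a single yes/no flag.
@dataclass
class CustomerRecord:
    customer_id: str
    features: dict
    consented_purposes: set = field(default_factory=set)

def select_training_data(records, purpose="model_training"):
    """Keep only records whose subjects consented to this specific purpose.

    Records lacking the required consent are excluded (data minimization);
    a production system would also log each exclusion for audit purposes.
    """
    return [r for r in records if purpose in r.consented_purposes]

# Usage: only c1 consented to model training, so only c1 is eligible.
records = [
    CustomerRecord("c1", {"age": 34}, {"model_training", "marketing"}),
    CustomerRecord("c2", {"age": 29}, {"marketing"}),
]
assert [r.customer_id for r in select_training_data(records)] == ["c1"]
```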
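
One way to operationalize the EU AI Act’s risk-based approach internally is to keep an inventory mapping each AI use case to a risk tier and the controls your legal team has attached to that tier. The sketch below is a simplified tracking aid built on that assumption; the tier names echo the Act, but the use-case assignments and control lists are illustrative, not the Act’s legal taxonomy:

```python
# Internal compliance-tracking sketch - NOT a restatement of the EU AI Act.
# Tier assignments and control lists should come from your legal review.
RISK_TIERS = {
    "unacceptable": {"allowed": False, "controls": []},
    "high": {"allowed": True, "controls": [
        "risk assessment", "logging", "human oversight",
        "technical documentation",
    ]},
    "limited": {"allowed": True, "controls": ["transparency notice"]},
    "minimal": {"allowed": True, "controls": []},
}

# Hypothetical inventory of internal AI use cases and their assigned tiers.
USE_CASE_TIERS = {
    "cv-screening-for-hiring": "high",
    "customer-support-chatbot": "limited",
    "spam-filter": "minimal",
}

def required_controls(use_case):
    """Return the controls mapped to a use case; refuse prohibited tiers."""
    tier = RISK_TIERS[USE_CASE_TIERS[use_case]]
    if not tier["allowed"]:
        raise ValueError(f"{use_case} falls in a prohibited tier")
    return tier["controls"]

print(required_controls("cv-screening-for-hiring"))
# ['risk assessment', 'logging', 'human oversight', 'technical documentation']
```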
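
Finally, the record-keeping point under liability can start as simply as an append-only audit log of every AI decision. The field names below are assumptions chosen for illustration; the aim is to capture enough context (inputs, model version, output, human reviewer) to reconstruct and defend a decision later:

```python
import json
import time

def log_ai_decision(log_path, *, model_version, inputs, output,
                    human_reviewer=None):
    """Append one decision record to an append-only JSON-lines audit log."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": human_reviewer,  # None means no human in the loop
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: record a hypothetical credit decision with a named reviewer.
log_ai_decision(
    "decisions.jsonl",
    model_version="credit-scorer-v2.3",
    inputs={"applicant_id": "a-1001", "income": 72000},
    output={"decision": "approve", "score": 0.81},
    human_reviewer="loan.officer@example.com",
)
```

A plain JSON-lines file is deliberately minimal; in regulated settings you would likely add tamper-evidence (e.g., record hashing) and retention controls.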

To conclude on the legal landscape: a helpful exercise is to map out all the jurisdictions your business operates in or markets to, list the relevant laws as above, and then consult experts to interpret what they mean for your AI use cases. Given the global nature of business, you may have to comply with multiple regimes, with the strictest usually setting the effective standard (e.g., you might apply GDPR-level controls everywhere). There is also international convergence on AI ethics - frameworks from the OECD, UNESCO, and others which, while not binding, reflect best practices that could become law in the future. Companies that stay ahead of the regulatory curve by adopting these principles early will find compliance easier when laws tighten.

Finally, after all these considerations and plans, you might wonder: how can a law firm like Lakshmikumaran & Sridharan (LKS) assist in this journey? We conclude with a brief overview of how we at LKS can support your enterprise’s AI initiatives.
