The Indian government has recognized the need for responsible AI development and has taken steps to establish a regulatory framework. The national think tank NITI Aayog published guidelines for AI research and development in healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation in its 2018 ‘National Strategy for Artificial Intelligence #AIForAll’ plan.
In 2018, the Ministry of Electronics and Information Technology [MeitY] constituted four committees to promote AI initiatives and develop a policy framework, tasked with identifying areas central to the application of AI technologies and with suggesting the development of policy, regulations and a legal framework relating to AI, data privacy and cybersecurity. The committee reports were published in 2019; however, no significant action has been taken on them since.
Following this, NITI Aayog published ‘Responsible AI: Part 1 – Principles for Responsible AI’ in February 2021. This strategy paper examines the numerous ethical issues surrounding the implementation of AI solutions in India. Its ‘Part 2 – Operationalizing concepts for Responsible AI’ followed in August 2021, outlining the steps that the public and private sectors, in collaboration with research institutes, must take to address regulatory and policy interventions, build capacity, incentivise ethical design, and develop frameworks for adherence to pertinent AI standards.
In 2023, MeitY initiated consultations on the proposed Digital India Act, intended to replace the outdated Information Technology Act, 2000. The proposed Act, however, still remains in the drafting stage.
Most recently, on 1st March 2024, MeitY issued an advisory requiring organisations to obtain prior approval from the Government before deploying unreliable AI models, Large Language Models [LLMs] or Generative AI systems, in response to growing instances of algorithmic bias, misinformation and deepfakes. The advisory was heavily criticised by the industry, leading to a clarification published by Minister Rajeev Chandrasekhar on X [formerly, Twitter] and the publication of a superseding advisory on 15th March 2024. The latter clarified that platforms and intermediaries must ensure that their use of AI models, LLMs or Generative AI systems does not contravene the laws in force, and that they must notify users of their terms of service.
AI-related Legal Concerns to be Considered
- Data Protection and Privacy: AI heavily relies on data, raising significant concerns about data protection and privacy. The Digital Personal Data Protection Act, 2023 [yet to be enforced], holds substantial implications for AI applications. In this context, organizations must establish robust data protection measures and ensure compliance with the Act’s provisions. Implementing appropriate safeguards and obtaining informed consent are critical components of maintaining data privacy and complying with relevant regulations.
- Intellectual Property Rights and AI: The convergence of AI and intellectual property [IP] rights presents complex challenges. Questions arise regarding the eligibility of AI-generated works and inventions for legal protection, as well as issues concerning their inventorship, ownership, and licensing.
- Bias and Discrimination in AI: Algorithmic bias poses a significant challenge in AI systems, as they can unintentionally perpetuate biases and discrimination. The legal implications of biased AI systems are far-reaching. In-house counsels should be aware of the risks associated with algorithmic bias and work towards developing fair and unbiased AI systems [a simple quantitative check of the kind involved is sketched after this list].
- Liability and Accountability in AI: The question of liability and accountability for AI systems presents complex challenges. As AI becomes more autonomous, determining who should be held responsible for AI actions requires careful consideration. The concept of ‘explainable AI’ [AI that offers insight into the rationale behind its outputs] gains prominence, necessitating transparency and accountability in the decision-making processes of AI algorithms. In addition, issues related to surveillance and consent raise privacy concerns, further compelling transparency and accountability.
- Employment and Labour Laws in the Age of AI: AI’s impact on the workforce raises various concerns in the realm of employment and labour laws. Issues such as job displacement, reskilling/upskilling requirements, and ethical considerations in AI-based employment decisions must be carefully examined.
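
To make the bias concern concrete, the sketch below illustrates the kind of quantitative check counsels might ask technical teams to run before or after deployment. It is a minimal illustration, not a prescribed regulatory methodology: the column names and figures are hypothetical, and the four-fifths [‘80%’] threshold is a rule of thumb commonly cited in fairness literature rather than a standard under Indian law.

```python
# Minimal fairness-audit sketch: compare positive-outcome rates across a
# protected attribute and flag a disparate-impact ratio below the commonly
# cited four-fifths threshold. All column names and data are hypothetical.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical loan-approval decisions produced by an AI model
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,    1,   0,   0,   1,   1,   1,   0],
})

ratio = disparate_impact(decisions, "gender", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: escalate for legal and technical review.")
```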
Suggestions for In-House Counsels
Hereunder are some tips that might help in-house counsels deal with compliance and legal risks:
- Staying updated with the laws, rules, regulations and guidelines issued by the Government and sector-specific regulators, and actively participating in consultations and discussions related to AI regulations.
- Conducting thorough Data Protection Impact Assessments for AI systems, identifying potential risks and implementing privacy-enhancing measures.
- Collaborating with Data Protection Officers and IT teams to ensure compliance with data protection laws and best practices.
- Developing IP strategies specific to AI technologies, considering issues such as inventorship, ownership, and licensing.
- Collaborating with technical teams to identify AI-related innovations that may be eligible for IP protection and working closely with IP attorneys to navigate the intricacies of AI-related patent and copyright laws.
- Collaborating with data scientists and AI developers to implement measures that mitigate bias and discrimination in AI systems.
- Establishing comprehensive data validation processes, employing diverse datasets, and conducting regular audits to identify and rectify biases.
- Ensuring transparency in AI decision-making processes and providing avenues for individuals to challenge or seek explanations for AI-driven decisions [a minimal illustration of surfacing per-decision explanations follows this list].
- Establishing clear contractual agreements with AI system providers, ensuring allocation of liability and accountability in case of AI-related incidents.
- Staying updated and collaborating with risk management teams to assess and mitigate potential legal risks associated with AI deployments.
- Collaborating with HR departments to establish transparent and fair AI-driven employment practices.
- Proactively addressing potential challenges arising from AI-induced job displacement through measures such as reskilling and upskilling programs.
- Advocating for ethical AI practices that prioritize fairness and avoid discriminatory outcomes.
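
As an illustration of the transparency suggestion above, the following hypothetical sketch shows one simple way a per-decision explanation can be produced: for a linear model, per-feature contributions [coefficient multiplied by feature value] can be surfaced as ‘reason codes’ alongside each automated decision. The model, feature names and figures are assumptions for demonstration, not a recommended production approach.

```python
# Minimal "explainable AI" sketch: surface per-feature contributions of a
# linear model as reason codes for a single decision. Feature names and
# data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "years_employed", "existing_debt"]
X = np.array([[55, 4, 20], [30, 1, 35], [80, 10, 5], [25, 0, 40]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# Contribution of each feature to this applicant's score, largest first
applicant = np.array([40.0, 2.0, 30.0])
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {'+' if c >= 0 else ''}{c:.2f}")
```

More sophisticated model-agnostic explanation techniques exist, but even this level of per-decision reporting gives counsels something concrete to reference in contracts, audit trails and user-facing notices.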
By staying informed, adapting to emerging regulations, and proactively addressing legal challenges through strategies such as those suggested above, in-house counsels can effectively guide their organizations through the complexities of AI, enabling them to harness its benefits while maintaining legal and ethical integrity.

