On 5 September 2024, the UK took a historic step towards regulating artificial intelligence by becoming one of the first states to sign the Council of Europe’s Framework Convention on Artificial Intelligence. This legally binding treaty aims to ensure that AI systems respect human rights, democracy, and the rule of law. As AI increasingly shapes business and legal landscapes, in-house counsel must now assess how these regulatory developments will impact their organisations.
The International Bar Association (IBA) and the Centre for AI and Digital Policy (CAIDP) have emphasised the critical role legal professionals play in ensuring AI governance aligns with ethical standards. Their joint report, “The Future is Now: Artificial Intelligence and the Legal Profession,” highlights both the opportunities and challenges AI presents to in-house counsel and their legal departments.
Council of Europe Framework Convention on AI: A Global Legal Milestone
The Council of Europe’s Framework Convention on Artificial Intelligence, adopted in May 2024 and opened for signature in September 2024, is the first legally binding international treaty addressing AI. Its scope is broad, covering the entire lifecycle of AI systems, from development to deployment, and it emphasises the protection of human rights, democracy, and the rule of law. The Convention requires states to ensure risk assessment, transparency, accountability, and oversight throughout that lifecycle. A key objective is to safeguard against misuse, ensuring that AI systems are developed in a way that protects human dignity, privacy, and individual autonomy.
Importantly, the Convention is technology-neutral: it does not regulate specific AI technologies but instead sets out the ethical and legal obligations governing their use. This flexibility is intended to keep the treaty relevant as AI technology evolves. The Convention also requires states to conduct risk and impact assessments, develop mitigation measures, and maintain effective remedies for adverse impacts on human rights. By setting a clear framework, it enables governments to introduce bans or moratoriums on AI systems that threaten democratic values.
AI Regulation: A Call for Strategic Action by In-House Counsel
The signing of the Framework Convention comes at a time of increasing regulatory activity around AI globally, underscoring the important role in-house counsel must play in ensuring that their organisations comply with new standards. The IBA’s report calls on legal professionals to embrace AI responsibly while proactively shaping its governance.
According to the IBA-CAIDP report, only 43% of law departments have implemented AI policies, and many remain unaware of how AI regulation will affect their business. For in-house counsel, this signals an urgent need to get ahead of AI-related regulation before it takes effect. The Framework Convention is just the start of a wave of AI governance that will demand increased diligence from corporate legal teams.
Steps In-House Counsel Should Take Now
To ensure their organisations comply with the forthcoming regulations, in-house counsel must take several critical steps:
- Develop AI Governance Policies: Legal departments should begin drafting or updating internal AI governance policies. These policies must cover key areas such as data usage, algorithmic transparency, non-discrimination, and human oversight of AI decision-making. The Council of Europe’s Framework Convention explicitly mandates transparency and accountability measures, meaning businesses will need clear processes for documenting AI system operations and managing risks.
- Conduct AI Risk and Impact Assessments: The Framework Convention requires risk and impact assessments of AI systems, which must consider potential harm to human rights, privacy, and fairness. In-house counsel should work with technical teams to implement regular assessments, particularly for systems that automate decision-making in areas such as hiring, credit scoring, or customer interactions; an illustrative sketch of how such an assessment might be recorded follows this list. These assessments will help ensure compliance with both international and UK-specific regulations.
- Establish Training Programmes: The IBA-CAIDP report highlights the lack of AI training across legal departments. In-house counsel should address this gap by introducing AI literacy programmes for both legal teams and the broader business. Training should cover AI’s capabilities and risks, as well as how to evaluate AI contracts and vendor agreements. Because the Framework Convention emphasises ethical AI use, this education is crucial for identifying and mitigating risks.
- Update Contracts and Vendor Management: Given AI’s growing role in outsourcing and third-party partnerships, in-house counsel should review contracts with AI providers and vendors. Agreements must include clauses that address compliance with AI regulations, data protection, and liability for biased or faulty AI systems. These provisions should ensure that third-party AI tools meet the standards set by the Framework Convention and UK law.
- Monitor Regulatory Developments: AI regulation is still evolving, and the Framework Convention is just the beginning. In-house counsel must keep a close eye on upcoming UK legislation, including the government’s plans for highly targeted AI laws announced in the King’s Speech in July 2024. Staying informed about global AI governance initiatives will also help ensure the company’s AI strategies remain compliant with international standards.
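To make the risk-assessment step more concrete, the sketch below shows one way a legal team might ask technical colleagues to record a per-system assessment, with simple fields for purpose, human oversight, and risk level, plus a basic rule for flagging entries that warrant legal review. It is a hypothetical illustration only: the field names, risk categories, and escalation rule are assumptions made for the example, not requirements drawn from the Framework Convention or UK law.

```python
# Hypothetical illustration: a minimal AI risk-register entry a legal team
# might maintain with technical colleagues. All fields and thresholds are
# assumptions for the sake of the example.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIRiskAssessment:
    system_name: str                  # e.g. "CV screening tool"
    owner: str                        # accountable business owner
    purpose: str                      # what the system decides or automates
    affects_individuals: bool         # does it make or inform decisions about people?
    human_oversight: bool             # is a human reviewer in the loop?
    risk_level: RiskLevel
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def needs_escalation(self) -> bool:
        """Flag entries that warrant legal review before deployment."""
        return self.risk_level is RiskLevel.HIGH or (
            self.affects_individuals and not self.human_oversight
        )


# Example: a hiring tool with no human reviewer is flagged for legal review.
screening = AIRiskAssessment(
    system_name="CV screening tool",
    owner="HR Operations",
    purpose="Rank job applicants for interview shortlisting",
    affects_individuals=True,
    human_oversight=False,
    risk_level=RiskLevel.MEDIUM,
    mitigations=["bias testing before each model update"],
)
print(screening.needs_escalation())  # True
```

The value of a record like this is less the code than the discipline it imposes: each AI system has a named owner, a stated purpose, and a documented review date, which is the kind of evidence of accountability and oversight the Convention expects organisations to be able to produce.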
What This Means for Legal Departments
The Framework Convention on AI and the growing body of AI regulation present both challenges and opportunities for in-house counsel. AI systems offer businesses significant advantages in efficiency and decision-making automation, but without robust governance and compliance measures they can expose organisations to legal risks, including competition law breaches, privacy violations, and discrimination claims.
By focusing on AI governance, in-house legal teams can position their companies as leaders in responsible AI use. Moreover, having strong internal policies will reduce exposure to regulatory penalties and reputational damage, particularly if the business engages in sectors like finance, healthcare, or employment, where AI-related legal risks are highest.
Conclusion: A Strategic Approach for In-House Counsel
The signing of the Council of Europe’s Framework Convention on AI marks a critical turning point for how businesses develop and deploy AI systems. For in-house counsel, it is a clear signal that proactive engagement with AI governance is no longer optional—it is a necessity.
As the legal landscape shifts to regulate AI’s impact on human rights, democracy, and the rule of law, in-house legal teams must ensure their companies are not only compliant but also ahead of the curve. Developing policies, training teams, and ensuring transparency in AI usage are essential steps toward responsible innovation.
In-house counsel must embrace this new era of AI regulation by preparing their organisations, aligning with international standards, and ensuring that AI serves as a tool for positive transformation rather than a source of legal risk.