In this edition of the Neural Network, we look at key AI developments from February and March 2026.
In regulatory and government updates, the EU has published two important AI updates; the UK has opened a consultation on children’s digital safety; a new AI research lab has been launched in the UK; the UK House of Lords has published its report into AI, copyright and the creative industries; France’s Senate has introduced a draft bill on AI and cultural content exploitation; and Singapore has launched an agentic AI governance framework.
In technology developments and market news, the UK Information Commissioner has issued a public warning on the future of agentic AI.
More details on each of these developments are set out below.
There are two key recent developments to note on the EU’s regulation of AI:
The UK government has opened an ambitious consultation on measures to strengthen online protections for children, including potential age bans, curfews and new controls on AI-powered features. The proposals explore whether under 16s should be banned from social media, whether platforms should disable addictive design features such as infinite scrolling, and how to introduce more effective age verification measures.
International models have already been adopted in countries such as Australia, which has introduced a nationwide ban on social media access for under 16s supported by mandatory age verification at app-store level. The Australian approach is actively considered within the consultation and sets a strong benchmark for the UK government to draw on when determining its own approach.
Digital services engaging younger users should monitor developments closely, as any resulting framework could bring significant operational and compliance change. The consultation remains open until 26 May 2026, supported by live pilots ensuring decisions are grounded in real world evidence.
Submissions can be made here.
In line with its intention to capitalise on increasing AI adoption, the UK government has unveiled plans to establish a new £40 million artificial intelligence research lab. This follows the publication of the AI Policies and Progress Report, setting out the government's intentions (as we reported in our January edition here). The Department for Science, Innovation and Technology announced that the lab will focus on developing advanced “frontier AI” models prioritising safety, transparency, and societal benefit.
This initiative forms part of the UK’s wider strategy to harness the transformative potential of AI while addressing associated risks. The lab will convene leading researchers, engineers, and policy specialists to accelerate progress in AI safety and reliability. By targeting the development of state-of-the-art AI systems, the government aims to ensure the UK can both compete with and collaborate alongside international counterparts, particularly China and the US.
Partnerships with industry and academia will drive innovation and skills development. This includes a call for AI experts across the country to bring their ideas and proposals forward (open here until 31 March 2026). The lab will also advise policymakers on AI regulation and safety, to contribute to the creation of global standards for responsible AI deployment.
Following an inquiry, the UK’s House of Lords Communications and Digital Committee (the “Committee”) has published its report into AI, copyright and the creative industries (the “Report”).
The Report comes in advance of the UK government’s report released on 18 March 2026 on the use of copyright works in the development of AI systems, and accompanying economic impact assessment, that it is required to produce under the Data (Use and Access) Act 2025. We will provide further analysis on this in our Data and Cyber Update to be published at the end of this month.
The Report proposes two different “choices” the UK can follow. One is a licensing-led model in which AI developers are transparent about training data and pay fair remuneration to creators, allowing domestic industries to grow. The other would see the large-scale, unlicensed use of copyrighted works and increased dependence on overseas models, at the expense of creators and their rights. Ultimately, and perhaps unsurprisingly, the Report advocates for the former choice, favouring UK industry.
The Report highlights several AI-related challenges facing the creative industries. It notes significant gaps in the UK’s legal framework, particularly around unauthorised digital replicas and “in the style of” AI outputs. These technologies can imitate a creator’s recognisable style, voice, or personality without consent or payment, devaluing the distinctive appeal of their work. Further concerns include the widespread unlicensed use of protected works to train AI models, with potential revenue impacts on creators, as well as limited transparency around training data, which makes it difficult for rightsholders to determine when their works have been used and to enforce their rights effectively.
Below, we discuss the Committee’s key recommendations in the Report:
Although a commercial text and data mining (“TDM”) exception was previously considered as an amendment to UK copyright law, the Committee recommends against introducing one. It notes that reducing litigation risk for large AI developers is not a sufficient justification for legislative reform. Without this exception, AI developers must continue to operate within existing copyright law boundaries. We continue to await judicial clarity in this area.
The Committee acknowledges difficulties in protecting against digital replicas such as “deepfakes” under existing UK law. Currently, the primary route to challenge misuse of someone’s likeness is passing off.
The Report recommends introducing new protections over how individuals’ likenesses are used by AI systems. However, the Report stops short of setting out mechanisms for implementing these protections.
While the EU AI Act requires AI providers to publish summaries of training data, there is no equivalent statutory obligation in the UK. Respondents to the inquiry noted that even the EU’s approach may not go far enough to protect rights holders.
The Committee suggests creating a mandatory, proportionate transparency regime for AI training data that goes beyond the EU AI Act, alongside support for the development of domestic AI systems.
The Report notes that, with a strong collective rights management infrastructure, the UK is well-placed to develop a fair and inclusive licensing market, ensuring value flows to creators of all sizes.
The Committee advocates for the introduction of an “unwaivable right to equitable remuneration” for AI uses of works as training inputs and, where appropriate, training outputs, subject to mandatory collective management. Again, it remains unclear how such a right would work in practice.
Ultimately, it is uncertain whether the Report’s recommendations will progress, with the government recently pausing wider copyright reforms in the UK. For now, the House of Lords has drawn a clear distinction between the interests of the UK’s creative industries and those of AI developers. If the UK is to adopt the licensing-led approach favoured by the Committee, further action will be needed soon.
On 12 December 2025, the French Senate introduced a draft bill that would establish a legal presumption of exploitation of cultural content by AI providers (the “Draft Bill”). The proposal follows the breakdown of negotiations between AI developers and rights holders led by the Ministry of Culture in June 2025.
The Draft Bill aims to address the lack of transparency and the imbalance of power between technology companies and creators. The proposal would make it significantly easier for rights holders to prove that their works have been used by AI systems by introducing a legal presumption based on any credible indication linked to the development, deployment, or output of the AI system.
The Draft Bill contains a single article, which would amend the French Intellectual Property Code to state that: “unless proven otherwise, any object protected by copyright or a related right, within the meaning of this code, is presumed to have been exploited by the artificial intelligence system, as soon as an indication relating to the development or deployment of this system or to the result generated by it makes such exploitation plausible”.
This procedural shift is intended to rebalance the evidentiary burden, given that rights holders often lack access to technical information about AI training data.
In its current wording, the Draft Bill is not without controversy, particularly because of the vagueness of key terms such as “indication” and “plausibility.” The Senate acknowledges these challenges, but argues that the presumption is a necessary first step to address the current imbalance.
The Senate will publicly discuss the Draft Bill on 8 April 2026.
On 22 January 2026, the Ministry for Digital Development and Information (MDDI) in Singapore announced the launch of the new Model AI Governance Framework for Agentic AI developed by the Infocomm Media Development Authority (IMDA). To read more about this development, click here.
The UK Information Commissioner, John Edwards, has warned industry professionals that agentic AI is likely to pose significant privacy risks in the future. This follows the Information Commissioner’s Office (“ICO”) report on the data protection implications of agentic AI (as we reported on in our January edition here).
Speaking at the IAPP UK Intensive, Mr Edwards described these systems as “standing outside the door waiting to come in”, noting growing concern about tools such as the open source OpenClaw, which have been linked to security weaknesses and unintended data processing.
From an EU perspective, the Dutch data protection authority has advised businesses and consumers not to use autonomous AI agents, calling them a “Trojan horse” vulnerable to misuse and emphasising that organisations remain fully responsible for GDPR compliance. The Italian regulator has similarly warned that agentic tools create new opportunities but carry heightened risks compared with traditional prompt-based AI models.
Organisations deploying these tools remain controllers under the EU GDPR and UK GDPR and cannot rely on the open source or “local” nature of the software to diminish their responsibilities. Granting an agent broad access to mailboxes, shared drives or live databases increases the risk of large-scale repurposing of personal data without a clear lawful basis or updated transparency information. Given this risk profile, deployments of agentic AI systems are likely to require “appropriate” technical and organisational measures and, in many cases, a Data Protection Impact Assessment.
While the ICO has not yet issued formal guidance, it has indicated it will monitor the area closely and deploy enforcement powers where necessary. Organisations exploring agentic AI should therefore ensure robust testing, oversight and risk mitigation before deployment.