In this edition of the Neural Network, we look at key AI developments from March and April 2026.
In regulatory and government updates, the EU's proposals to revise AI Act deadlines have progressed; the UK House of Commons has rejected proposed stronger controls on unsafe AI chatbots in favour of a regulatory approach; the US FDA and EMA have issued guidance on AI-supported drug development; and new UK government analysis has examined copyright and AI.
In AI enforcement and litigation news, a US jury has found Meta and Google liable for harms linked to addictive platform design.
In technology developments and market news, Anthropic's "Claude Mythos" has drawn renewed attention to cyber security vulnerabilities; and a French start-up has secured major funding to expand European AI infrastructure.
More details on each of these developments are set out below.
The European Parliament has adopted its negotiating position on the European Commission's "Digital Omnibus" package (the "Omnibus"), which proposes to amend the AI Act. While the text is not yet law, it establishes the European Parliament's mandate for negotiations with the Council of the EU and the European Commission on targeted changes to the timing, scope and enforcement of the regime. MEPs have proposed the following key changes:
The Omnibus would introduce new application deadlines for certain high-risk AI obligations. 2 December 2027 has been proposed as the date of application of obligations relating to Annex III high-risk systems (including AI used in biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice and border management). For AI covered by EU sectoral product safety regimes (such as medical devices, radio equipment and toy safety), 2 August 2028 is the new proposed deadline, with potential for flexibility to avoid duplication.
The European Parliament also proposes adjustments to transparency obligations. MEPs have proposed 2 November 2026 as the date by which providers of generative AI systems would be required to ensure AI-generated audio, image, video and text content is machine-readable and detectable as artificially generated.
MEPs have backed an explicit prohibition on “nudifier” AI systems that generate or manipulate realistic sexually explicit or intimate images of identifiable persons without their consent. The ban would be subject to a narrow exception where systems are equipped with effective technical safeguards that prevent such misuse.
A key potential issue arises from the combination of delayed application dates and the non-retrospective operation of the AI Act. High-risk AI systems placed on the market before the relevant deadlines would generally fall outside the high-risk obligations unless they are subsequently substantially modified. This could incentivise providers to deploy high-risk systems ahead of late 2027, with the effect that such systems may remain in use without ever being brought within the full scope of the AI Act's high-risk regime.
Trilogue negotiations on the consolidated text of the Omnibus are scheduled to begin on 28 April 2026.
On 14 April 2026, the House of Commons rejected House of Lords amendments to the Crime and Policing Bill that would have introduced a new criminal offence targeting unsafe AI chatbots. The amendments, proposed by Baroness Kidron, sought to impose criminal liability on those developing or supplying chatbots that generate content linked to terrorism, violence or threats to public safety.
Under the amendments, developers would have been required to carry out risk assessments and take steps to mitigate identified risks before making chatbots available in the UK. Enforcement would have been linked to the existing Online Safety Act framework, with breaches punishable by up to five years' imprisonment. The amendments also provided for the potential use of Ofcom's enforcement and business disruption powers.
The government opposed the amendments, arguing that they risked blurring the boundary between criminal and regulatory regimes. Ministers instead favoured extending the scope of the Online Safety Act to AI services through secondary legislation, rather than creating new criminal offences through primary legislation. Under that approach, compliance would primarily be enforced by Ofcom through regulatory measures, including significant financial penalties of up to £18 million or 10% of global annual turnover (whichever is greater) and the use of business disruption powers.
MPs backed the government’s alternative approach, with ministers confirming their intention to use regulation‑making powers under the Online Safety Act to address harms arising from AI chatbot services. The government committed to reporting back to Parliament later in the year on any progress made in this space.
In January 2026, the FDA and the European Medicines Agency (the "EMA") published a set of guiding principles for the use of AI across the medicine lifecycle, spanning research, clinical development, manufacturing and post-market monitoring. These principles are intended to support international regulatory convergence and to underpin future guidance in both jurisdictions.
AI and machine‑learning technologies are increasingly being incorporated into medical devices, with applications ranging from imaging systems that assist in identifying skin cancer to smart sensors capable of estimating the risk of cardiac events. Unlike traditional software, AI‑enabled medical devices can adapt and improve their performance over time as they are exposed to real‑world data.
Regulators have recognised both the potential benefits of these technologies and the limitations of regulatory frameworks designed around static software. Final guidance in this space is still awaited, but the FDA published two draft guidance documents in January 2025: one on lifecycle management and marketing submission considerations for AI-enabled medical devices, and another on the use of AI to support regulatory decision-making for drugs and biological products.
The EMA’s approach, reflected in the European Commission’s proposed Biotech Act and ongoing pharmaceutical reform, similarly accommodates broader use of AI in regulatory decision‑making and encourages controlled experimentation with innovative AI methods.
Both regulators emphasise the importance of human oversight and ethical governance when deploying AI technologies in healthcare. As AI adoption continues to expand, regulators internationally are expected to complement principles-based frameworks with further practical guidance, seeking to enable responsible innovation while maintaining patient safety.
On 18 March 2026, the UK government published its Copyright and Artificial Intelligence report, together with an economic impact assessment. The report examines the relationship between copyright law and the development of AI systems against the backdrop of policy debate and ongoing litigation across multiple jurisdictions. Despite sustained stakeholder interest, the government has adopted a cautious approach, deferring decisions on key issues in favour of further evidence gathering. To read more about this development, click here.
A US jury has found Meta Platforms and Google liable for designing social media platforms that are addictive and harmful to children, awarding $3 million in compensatory damages to a plaintiff who alleged that a childhood social media addiction contributed to their anxiety, depression and body dysmorphia.
The California trial, in the Los Angeles County Superior Court, focused on features that the jury found were intentionally designed to maximise user engagement without adequate warnings of the potential for harm. Internal documents revealed that executives at Meta, including Mark Zuckerberg, were aware of the associated risks but continued to prioritise the time users spent on the platforms. Of the total $6 million in damages awarded, 70% was attributed to Meta and 30% to Google. The verdict followed a separate jury decision in New Mexico against Meta relating to child safety allegations under state law, which resulted in civil penalties. Both companies have indicated their intention to appeal, with Meta signalling that it will challenge both outcomes.
The California case is the first of its kind to hold platforms responsible for harm arising from platform design, bypassing the shield under US law (Section 230 of the Communications Decency Act) that protects social media platforms from liability for content generated by users. The outcome was based on arguments that Meta and Google owed a duty of care to child users to take reasonable steps to prevent foreseeable harm arising from the design of their platforms. The jury found that this duty was breached through the deliberate use of engagement-driven features, such as infinite scroll, "likes" and push notifications, which are known to promote compulsive use.
In assessing negligence, the jury considered evidence that the companies were aware of the potential mental‑health risks to children associated with these engagement-driven features yet continued to deploy and optimise those features without adequate warnings or safeguards. Internal documents presented at trial were used to demonstrate foreseeability of harm and a failure to mitigate known risks, supporting the conclusion that platform design choices materially contributed to the plaintiff’s injury.
The verdict comes amid heightened global scrutiny of online harms, with similar lawsuits pending and several jurisdictions, such as Spain and Australia, moving to restrict social media access for under-16s.
This decision underscores a growing shift in how courts assess the responsibilities of developers of consumer-facing applications. By focusing on addictive design features, the ruling suggests that platforms may face negligence and other tort claims even where content moderation measures are in place. Developers should urgently review engagement-driven features and implement safeguards to mitigate potential harm to users, particularly children, as courts and regulators increasingly prioritise their wellbeing.
In early April 2026, Anthropic announced Claude Mythos Preview ("Mythos"), releasing it to only a limited number of parties. Anthropic said that the model was too powerful to release more widely because of its ability to discover and exploit serious software vulnerabilities.
The UK government’s AI Security Institute has since evaluated Mythos’ cyber capabilities, finding that while performance on standard hacking tasks broadly tracked other leading models, Mythos stood out in multi‑step attack chaining. The analysis frames the practical risk as faster, cheaper exploitation of weak or poorly‑patched systems rather than a “push‑button” autonomous cyber weapon.
To read more about this development, see our analysis here.
French start‑up Mistral AI SAS has raised $830 million in its first debt financing to support the construction of Nvidia‑powered data centres across Europe. The move reflects growing demand from governments and businesses for “sovereign” European alternatives to US cloud and AI providers.
Mistral has previously announced plans to invest €4 billion in AI infrastructure, including facilities in France and Sweden. According to its CEO, Arthur Mensch, expanding computing capacity locally is seen as critical to maintaining European autonomy and meeting sustained demand for customised AI environments that do not rely on third‑party cloud platforms.
The financing will support Mistral's first data centre near Paris, which is expected to become operational by the end of June 2026. The debt facility, provided by a group of multinational banks including BNP Paribas, HSBC and MUFG, forms part of a broader strategy to secure up to 200MW of AI computing capacity across Europe by 2027.
The raise comes amid a wider trend of technology companies turning to debt markets to finance capital‑intensive AI infrastructure projects, despite warnings from analysts about potential oversupply and uncertain returns.