In this edition of the Neural Network, we look at key AI developments from December and January.
In regulatory developments, we look ahead to the expected developments to the EU Artificial Intelligence Act (“AI Act”) in 2026; the UK government has published new AI reports and policies; new US state AI laws in Illinois, Texas, and Colorado will be implemented in 2026; and the Information Commissioner’s Office (“ICO”) has published a report on the data protection implications of agentic AI.
In AI enforcement and litigation news, multiple investigations have been launched into X’s AI tool Grok.
More details on each of these developments are set out below.
The new year brings with it the next phase of the EU AI Act’s implementation, with additional provisions coming into effect according to its timeline. These will require companies to take on increased transparency obligations and further requirements relating to high-risk AI systems.
While delays have been proposed under the EU Commission’s Digital Omnibus proposal (see our article here), these have not (yet) been passed. This means that, as things stand, as of 2 August 2026, the regime for AI systems used for high-risk use cases such as human resources management, lending, and essential services will come into effect. Providers, operators, and others in the value chain for these types of use cases will be required to implement numerous measures, such as lifecycle risk management, technical documentation, and incident reporting.
On the transparency front, from August 2026 companies will be subject to clearer disclosure requirements when their AI systems interact with individuals (for example, chatbots) and to requirements to label AI-generated content. This will be supported by an EU Code of Practice on marking and labelling, which is currently due to be finalised in mid-2026.
The UK government has recently published a report outlining its “AI Growth Zones” policy and a progress report on copyright and artificial intelligence.
The “AI Growth Zones” policy outlines the UK government’s intention to capitalise on increasing AI adoption and growing data centre demand and capacity. The report highlights key policy aims for prospective data centres, including bringing down energy prices, accelerating grid connections, reducing planning barriers, and creating the best business environment.
In line with its obligations under section 137 of the Data (Use and Access) Act 2025 (“DUAA”), the UK government also published a copyright and artificial intelligence statement of progress in December. The statement sets out the steps the UK government is taking to prepare a report on the use of copyright works in the development of AI systems, and an economic impact assessment of the options presented in last year’s consultation on potential changes to UK copyright law. The UK government will use the consultation’s report to inform both publications, which are due in final form by 18 March 2026.
In 2026, Illinois, Texas, and Colorado are set to implement laws regulating workplace and consumer use of artificial intelligence, despite federal government pressures to eliminate state-level regulation on AI. Illinois now requires employers to notify workers when AI is used in employment decisions and bans ZIP code data in AI models. Texas has introduced consumer protections, an AI regulatory sandbox, and a state council, while Colorado’s law (coming into effect in June 2026) will mandate impact assessments, transparency, and anti-discrimination measures for high-risk AI systems.
The tension between state and federal approaches was heightened towards the end of last year after President Trump’s executive order was signed on 11 December 2025. Titled “Ensuring a National Policy Framework for Artificial Intelligence”, the order aims to pre-empt state-level AI rules in favour of a national approach.
When working on transactions with or in the United States, companies should continue to review and seek advice on both state and federal laws to ensure compliance.
Agentic AI is artificial intelligence that is capable of making autonomous decisions and acting independently, beyond responding to commands or analysing data, often with minimal human oversight. The ICO’s tech futures report, published on 8 January 2026 (“Report”), examines agentic AI and the data protection issues it raises. The Report highlights that, while agentic AI presents significant opportunities for innovation and efficiency (for example, through the deployment of data and privacy agents), it also raises complex data protection challenges, including heightened cybersecurity risks and uncertainty over controller and processor roles. Any use of agentic AI should be carefully considered to ensure data protection obligations continue to be upheld. The Report also explores four scenarios of how adoption and capabilities might evolve over the next two to five years.
Looking ahead, in the Report the ICO says that it is taking steps to:
There has been a rapid response to a “nudification” tool available on X’s Grok. Ofcom has opened a formal investigation into X Internet Unlimited Company’s compliance with its duties under the Online Safety Act 2023. The investigation continues despite social media platform X’s announcement that it has removed the functionality that allowed its AI chatbot Grok to generate and share harmful content with users, including intimate images and child sexual abuse material.
The action follows a recent Ofcom guidance note on the protection of AI chatbot users, which outlines the duties of online service providers to assess and mitigate the risk of harm to users, especially children.
The ICO published a statement on 7 January 2026 saying that it has contacted X and xAI to seek clarity on the measures they have in place to comply with UK data protection law and protect individuals’ rights.
The UK government has also been taking action, announcing it will imminently bring into force section 138 of the DUAA, which makes it a criminal offence to create or request the creation of non-consensual intimate images. On 12 January 2026, the Secretary of State for Science, Innovation and Technology announced plans to legislate in the Crime and Policing Bill (“Bill”), which is currently going through Parliament, to criminalise nudification apps. The new criminal offence will make it illegal for companies to supply tools designed to create non-consensual intimate images. The current draft of the Bill already criminalises AI models optimised to generate child sexual abuse material, and it is anticipated that these protections will be extended to cover the creation of non-consensual images of women. Once enacted, these provisions would likely apply to tools such as Grok.