Welcome to the latest edition of the Stephenson Harwood Data and Cyber Update, covering the key developments in data protection, digital regulation and cyber security law in April 2026.
In data regulation news, EU countries diverge on proposed amendments under the European Commission’s "digital omnibus" package; the ICO publishes further updated guidance following the enactment of the Data (Use and Access) Act 2025; China issues an Announcement on the Reporting Obligations regarding Compliance Audit of Minors' Personal Information Protection; and the EDPB publishes a new DPIA template and new guidelines on the processing of personal data for scientific research.
In our enforcement and civil litigation update, a settlement was reached in a claim regarding processing and automated decision-making in connection with a “know your client” database, and a judicial review challenge to the Metropolitan Police's use of live facial recognition has been dismissed.
EU institutions are progressing work on the European Commission’s "digital omnibus" package, which proposes targeted amendments to EU data and cybersecurity rules (the "Digital Omnibus"). A 19 March 2026 compilation of member states’ comments on the draft highlights particular sensitivities around data subject access requests, breach notification thresholds, and cookies and tracking; we focus on these areas here as the proposals most likely to affect day-to-day GDPR compliance.
The text will not become law until it is agreed in trilogue between the Commission, the European Parliament and the Council of the EU, which is currently not expected until February 2027. While there is broad support for simplifying GDPR processes and aligning overlapping regimes, negotiations on the data elements of the Digital Omnibus remain at an early stage, with divergence between member states reported as of late March 2026.
The Digital Omnibus proposes to expressly recognise "abusive" data access requests under the GDPR, allowing controllers to refuse them or charge a reasonable fee. There are discussions on how narrowly "abusive intent" should be defined, and whether similar rules should also apply to complaints before supervisory authorities. These proposals build on recent CJEU case law confirming that controllers may reject abusive access requests on the grounds that they are excessive, but only where this can be substantiated.
The Commission proposes that personal data breaches be notified to regulators only where they are likely to result in a high risk to individuals’ rights and freedoms. Some stakeholders favour this higher threshold but want clearer guidance on what "high risk" means in practice. Others are concerned that raising the bar could reduce oversight of security incidents and are arguing either for a lower threshold or for additional safeguards, such as longer deadlines or clearer examples in the legislation.
The Digital Omnibus would keep consent as the default rule for storing and accessing information on users’ devices (including through cookies) but would list limited purposes for which consent would not be required. Ongoing debates focus on how narrowly these exemptions should be drafted, how closely they should be tied to existing legal bases, and whether security-related exemptions risk becoming too broad. There is also scepticism from some member states about whether the reforms will genuinely reduce "consent fatigue" or instead add another layer of complexity to an already crowded regulatory landscape.
Taken together, these debates suggest that, while reform is likely, the final shape and timing of the Digital Omnibus remain uncertain. The next phase will see the European Parliament and the Council each develop and adopt their own positions before entering trilogue negotiations on a compromise text. We would advise organisations to wait until there is greater clarity on the outcome before making significant changes to their GDPR compliance programmes in response.
For an update on the impact of the Digital Omnibus on AI regulation, please see our separate article in our Neural Network bulletin here.
Following the enactment of the UK Data (Use and Access) Act 2025 ("DUAA"), the Information Commissioner’s Office ("ICO") has been working to update key areas of its data protection guidance. Recent updates focus on: (i) compatibility and re-use of personal information; (ii) recognised legitimate interests; and (iii) automated decision-making ("ADM") and AI, particularly in recruitment.
The ICO’s guidance on "Compatibility and the re-use of personal information" clarifies when organisations may lawfully use existing personal data for new purposes without breaching the purpose limitation principle in Article 5(1)(b) UK GDPR.
The guidance explains that "re-use" means using personal information for a purpose other than the original purpose of collection. Re-use is only permitted where:
- the individual has consented to the new purpose;
- the new purpose is compatible with the original purpose of collection; or
- the new purpose falls within one of the conditions in Annex 2 UK GDPR, which treats certain processing (largely for key public interest purposes) as compatible.
Organisations must carry out a compatibility assessment, considering the link between purposes, the collection context and relationship with individuals, the nature and sensitivity of the data, potential consequences for individuals, and safeguards such as encryption or pseudonymisation. The ICO stresses that this assessment is context-specific and must be documented and kept under review.
The guidance sets out in more detail how to conduct this assessment, including examples of when new uses are likely to be considered "unexpected" or unduly invasive, and therefore incompatible absent consent. It also underlines that Annex 2 does not create a blanket permission: controllers must show that the processing is necessary and proportionate for a listed purpose and that appropriate safeguards are in place. In practice, organisations should build a standardised compatibility assessment into their change-management processes and consider a Data Protection Impact Assessment ("DPIA") where the re-use is novel or large-scale, or could significantly affect individuals.
The guidance provides clarity on the routes to re-using data, including for some key public interest purposes.
The DUAA introduced a new "recognised legitimate interest" lawful basis, on which the ICO has now issued guidance. This basis sits alongside, but is distinct from, the general "legitimate interests" basis under UK GDPR.
Recognised legitimate interest can be used where processing is necessary for specific public interest scenarios listed in Annex 1 UK GDPR, including responding to requests for disclosure from public bodies, safeguarding, and crime prevention.
The ICO’s guidance clarifies that recognised legitimate interest is a deliberately narrow ground and cannot be used for general commercial activities or internal improvements. It expects controllers to: (i) be able to evidence why the Annex 1 condition applies and why the processing is necessary for that condition; (ii) reflect this basis clearly in records of processing and privacy notices; and (iii) handle objections using processes equivalent to those in place for standard legitimate interests.
Organisations engaging in processing related to a public interest (for example, responding to law enforcement requests or safeguarding vulnerable groups) should therefore review where recognised legitimate interest can simplify their assessments, while ensuring they do not attempt to stretch it beyond its strictly defined scope.
Finally, the ICO’s work on AI and ADM has been formalised through new Regulations requiring an ICO AI and ADM Code of Practice, and a draft report and guidance on automated decision-making in recruitment.
The Code of Practice will set out good practice for processing personal data when developing and using AI and ADM, including specific guidance on children’s data and generative AI. At the same time, the recruitment guidance responds to the widespread use of AI-enabled tools that screen CVs, score candidates and assess behaviour.
The ICO’s key message is that many employers underestimate the extent to which they are using solely automated decision-making. The guidance clarifies that "meaningful human involvement" requires a human with genuine authority and competence to change the outcome, not a rubber stamp. Where decisions are solely automated, organisations must:
- tell individuals that solely automated decision-making has taken place and explain how the decision was reached;
- enable individuals to make representations about, and to contest, the decision; and
- enable individuals to obtain meaningful human intervention.
The draft recruitment guidance also sets expectations around vendor management and testing: controllers remain responsible for assessing bias, accuracy and robustness, even where tools are supplied by third-party providers. The ICO expects employers to run trials, monitor outcomes over time, and ensure that any human "review" is genuinely capable of changing results on a case-by-case basis. For many organisations, this will require updating DPIAs, transparency wording, procurement questionnaires and internal training to reflect the new ADM framework under the DUAA.
The practical effect is that organisations must either redesign processes to ensure real human oversight, or explicitly treat their systems as ADM and implement the full suite of safeguards expected by the ICO.
You can find our DUAA implementation tracker here, where you can monitor which provisions are already in force, when others will commence, and when the ICO is expected to publish its guidance.
On 29 December 2025, the Cyberspace Administration of China ("CAC") issued the Announcement on the Reporting Obligations regarding Compliance Audit of Minors' Personal Information Protection (the "Announcement"), which requires personal information handlers to report annually to local CACs, via the online system hosted by the CAC, on their compliance audits of minors' personal information protection ("Annual Reporting"). The underlying audit requirement was first mandated in the Regulations on the Protection of Minors in Cyberspace ("Minor Protection Regulations"), effective from 1 January 2024; the Announcement now develops that obligation into a formal annual reporting process.
Neither the Announcement nor the Minor Protection Regulations includes any volume threshold, so any processing of minors’ personal data by a personal information handler subject to the Personal Information Protection Law ("PIPL") must be audited and reported on annually. However, where the volume of minors' data is limited, the audit may be carried out in-house without engaging an external auditor.
Failing to complete the Annual Reporting will be a violation of the PIPL and may result in penalties such as a fine, an administrative warning, an order to rectify and/or an order to suspend or terminate the operation of the relevant business. The personal information handler’s personal information protection officer ("PIPO", China's equivalent of a DPO) and/or directly responsible personnel may also face personal liability if the violation is serious.
The European Data Protection Board ("EDPB") has published a draft template for Data Protection Impact Assessments ("DPIAs"), aimed at enhancing consistency and supporting organisations in demonstrating compliance with the GDPR. The template forms part of the EDPB’s broader efforts to harmonise the application of GDPR requirements across the EU.
A DPIA is required where processing activities are likely to result in a high risk to individuals’ rights and freedoms, and the EDPB’s template is designed to help organisations structure and document that risk assessment in a standardised way. The template is accompanied by an explainer document intended to clarify key concepts and address common areas of uncertainty.
The EDPB stresses that use of the template is not mandatory. Controllers may continue to use their preferred DPIA methodology, although the template’s predefined fields help ensure complete, structured responses and reduce the risk of omissions and errors. While the template is designed to be completed step-by-step, it is structured around seven main sections that map to the key DPIA elements. These include information about the data and systems involved, a legal analysis, a review of necessity and proportionality, an assessment of the risks and proposed mitigations, and a statement of the conclusion reached. Of particular interest is the EDPB’s separation of the risks arising where the processing proceeds as intended from the risks of unauthorised processing resulting from the activity.
The accompanying explainer document also notes that some deliberate overlap between sections is intended to ensure mandatory elements are addressed and cross‑referenced in a traceable way.
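By way of illustration only, the short sketch below shows how a controller might mirror that structure, including the EDPB’s separation of the two risk categories, in an internal DPIA record. It is a minimal sketch in Python using hypothetical field names of our own; it is not the EDPB template itself, whose precise fields are set out in the consultation draft.

```python
# Illustrative sketch only: a hypothetical internal DPIA record loosely
# mirroring the structure described above. All field names are our own
# assumptions, not the EDPB template's.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: str                      # e.g. "remote", "possible", "likely"
    severity: str                        # e.g. "limited", "significant", "severe"
    mitigations: list[str] = field(default_factory=list)

@dataclass
class DpiaRecord:
    processing_description: str          # the data, systems and actors involved
    legal_analysis: str                  # lawful basis and related obligations
    necessity_and_proportionality: str
    # The EDPB separates risks arising where processing goes as intended...
    risks_of_intended_processing: list[Risk] = field(default_factory=list)
    # ...from risks of unauthorised processing resulting from the activity.
    risks_of_unauthorised_processing: list[Risk] = field(default_factory=list)
    conclusion: str = ""                 # the outcome of the assessment
```

Keeping the two risk categories distinct in this way makes the traceable cross-referencing the explainer describes easier to evidence, whichever methodology is ultimately used.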
The template is open for public consultation until 9 June 2026. After the consultation, the EDPB expects supervisory authorities to take steps to adopt the template either as their standard approach or as a "meta‑template" to which national templates align.
In April 2026, the EDPB adopted draft Guidelines 1/2026 on the processing of personal data for scientific research purposes (the "Guidelines"), aiming to facilitate easier GDPR compliance and provide clearer guidance for researchers in life sciences and healthcare. A consultation on the Guidelines runs until 25 June 2026. We cover this development in more detail here.
Lazarevic (a Child) acting by Danica Karić Stojilković as litigation friend and another v Refinitiv Limited was set to be the first case concerning the automated decision-making restrictions under Article 22 UK GDPR to be considered by the English courts. But, like many cases of a similar nature, it settled before it was due to go to trial in the High Court of England and Wales.
Nadežda Lazarević and Danica Karić Stojilković, acting on behalf of their respective children, challenged Refinitiv in relation to the identification of their children, Luka Lazarević and Petar Stojilković, as relatives of politically exposed persons within its know-your-client database, World-Check.
The claimants alleged that Refinitiv’s processing breached a number of provisions of the UK GDPR, including that the processing was opaque and inaccurate, and that the company had wrongly refused their request to have the children’s personal data removed from the database. Notably, the claim also alleged a breach of Article 22(1) UK GDPR, which restricts the ability of controllers to take decisions based solely on automated processing where those decisions have legal or similarly significant effects on individuals. Refinitiv’s counsel defended the claim on the basis that the data processed by World-Check was "mere corporate information" which did not relate to the two children.
The case had been listed for trial in the High Court on 20 April 2026 before it was settled. Had it proceeded, it would have been both the first data protection claim against a know-your-client database to reach trial in the English courts and the first case concerning an alleged breach of Article 22 UK GDPR.
The settlement means that, for now, organisations using automated tools in KYC and similar high-impact contexts must continue to navigate Article 22 without clear judicial guidance, but the claim itself underscores the growing litigation and reputational risks in connection with databases and profiling.
On 21 April 2026, the Divisional Court dismissed a judicial review claim challenging the lawfulness of the Metropolitan Police Service’s policy governing the overt use of live facial recognition ("LFR"), adopted in September 2024 (the "Policy").
The claimants, Shaun Thompson (a youth worker) and Silkie Carlo (director of advocacy group Big Brother Watch), argued that the Policy left too much discretion to police officers as to where, why and against whom LFR could be deployed, rather than contending that LFR is unlawful in principle. The claim relied on Articles 8, 10 and 11 of the European Convention on Human Rights ("ECHR"), alleging unlawful interference with private life and related expression/assembly rights.
The Court held that the Policy meets the required “quality of law” standard, finding that it is sufficiently accessible and foreseeable and provides adequate safeguards against arbitrary decision‑making.
From a data protection perspective, the judgment describes how LFR processes biometric data: LFR scans faces in public places, derives biometric data from the images and compares that data against watchlists of previously identified “Sought Persons”. Where no match is generated, the biometric data is automatically and immediately deleted, and images of non‑matched members of the public are blurred.
Although this was an ECHR challenge, the judgment contains practical signposts for any controller deploying biometric identification. In particular, organisations should ensure deployments are for clearly defined use cases, supported by objective criteria and governance practices. The judgment also underscores the importance of building data minimisation and deletion controls into system design, particularly for non‑matches (as sketched below), and maintaining a documented, auditable DPIA demonstrating why the processing is justified and how risks to individuals are mitigated.
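For organisations designing or procuring similar systems, the minimal Python sketch below illustrates the "delete on non-match" pattern the Court described. It is illustrative only: the data structure, the similarity metric, the threshold and every name are assumptions made for the example, not details of the Met’s deployment or of any real LFR product.

```python
# Illustrative sketch only: a hypothetical "delete on non-match" pipeline.
# All names, the similarity metric and the threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: str
    embedding: tuple[float, ...]  # biometric template derived from the image

def similarity(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    """Cosine similarity between two embeddings (an assumed matching metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def process_detection(detection: Detection,
                      watchlist: dict[str, tuple[float, ...]],
                      threshold: float = 0.9) -> str | None:
    """Compare one detection against a watchlist of "Sought Persons"."""
    for person_id, template in watchlist.items():
        if similarity(detection.embedding, template) >= threshold:
            return person_id  # match: flagged for human review, data retained
    # Non-match: in a compliant design this is the point at which the
    # biometric template is securely deleted and the source image blurred;
    # here the function simply retains no reference to the data.
    return None
```

The design point the judgment highlights is that deletion of non-match data should be a structural property of the pipeline rather than an after-the-fact clean-up task, which is also far easier to evidence in a DPIA.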