12 September 2024

The Data Protection Implications of AI: What does your organisation need to know?

Written by Laura Cunningham

Regulation of Artificial Intelligence (AI) in the UK is still in its infancy; however, it is an operational necessity for organisations to consider the data protection implications of using AI tools. Whilst AI technology offers potentially transformative benefits to organisations, it also raises a number of key data protection risks.

In this article we explore some of the data protection risks associated with AI, outline the current approach to regulation in this area and examine what organisations need to consider if they are using AI to process personal data.

Key Risks

• Lawfulness and Accuracy: The training of generative AI tools involves the collection and processing of a significant volume of personal data derived from various sources, including data scraped from the internet. This raises questions as to whether the training data has been lawfully collected. Likewise, relying on publicly available data raises questions as to the accuracy and completeness of that data.

• Security Risks & Data Breaches: The vast amounts of data collated by AI models increase the risk that any security breach or cyber-attack would result in a significant personal data breach.

• Surveillance & Intrusion: AI exacerbates surveillance risks by increasing the scale and ubiquity of personal data collection. AI-powered surveillance technology has caused concern regarding the large-scale collection and analysis of personal data in a manner that does not comply with data protection legislation.

• Lack of Transparency: The complexity and opacity of many AI models make it difficult to inform data subjects exactly how their personal data is processed.

• Secondary Use: AI also increases the risk that personal data is repurposed and used for purposes other than those for which it was originally collected. Organisations should confirm that they have the appropriate permissions in place to ensure that their use of personal data is compliant.

• Data Subject Rights: There are also further data protection risks around whether AI models can respond appropriately to data subject rights, such as the rights of access, rectification and erasure.

Regulatory approaches to AI

The previous Conservative Government adopted a light-touch approach to AI regulation on the basis that this would help promote innovation and agile technology in the UK AI sector. In the July 2024 King’s Speech, the new Labour Government indicated an intention to regulate the developers of the most powerful artificial intelligence models; however, it stopped short of proposing an AI Bill.

In contrast, the EU has adopted a more robust stance on regulation with the introduction of the EU AI Act, which entered into force in August 2024 and is the world’s first AI-specific legislation. The Act introduces a new regulatory regime to sit alongside existing legal frameworks such as data protection and intellectual property laws. It follows a risk-based approach, imposing specific obligations which vary depending on the type of AI system and its associated category of risk. Significantly, in a similar manner to the EU GDPR, the Act has extra-territorial effect and will therefore have an impact on UK organisations operating in the EU which are using AI. Since the use of AI is likely to involve the processing of personal data, organisations should also be aware of the close relationship between data protection law and the EU AI Act.

Mitigating data protection risk

Privacy by Design

Organisations should embed data protection by design into their use of AI systems from the outset. Adopting a proactive approach will help ensure that privacy risks are identified and mitigated at the earliest possible stage, minimising the risk of non-compliance, data breaches and the misuse of personal data.

AI Policies and Training

Organisations should develop a specific AI Policy and embed their overall AI governance and risk management strategy into their internal structures, roles and responsibilities, and training requirements. An organisation should ensure that its data protection policies, procedures and privacy notices are up to date and reflect its use of AI technology. Organisations should also make sure that employees receive adequate training on AI processing, introduce simple ways for data subjects to request human intervention or challenge a decision, and carry out regular checks to ensure that their systems are working as intended.

Contractual Protections

Organisations should ensure that their contracts with AI suppliers contain appropriate provisions dealing with, inter alia, data processing, minimum technical and organisational measures to ensure the security and integrity of data, international transfers of data and appropriate liability caps.

Conclusion

With the UK AI market projected to surpass $1 trillion by 2035, it is clear that AI is here to stay. However, organisations must consider how AI systems can be implemented in a manner which complies with existing regulatory obligations. Failure to do so may expose organisations to regulatory sanctions, litigation and reputational damage.

If you would like any further information or advice on these issues, please contact Laura Cunningham from the Commercial team.

*This information is for guidance purposes only and does not constitute, nor should it be regarded as, a substitute for taking legal advice that is tailored to your circumstances.

About the author

Laura Cunningham

Partner

Laura Cunningham is a Partner in the Commercial team at Carson McDowell. She is qualified to practise in Northern Ireland, the Republic of Ireland, and England and Wales. Laura specialises in all aspects of information law, including privacy, confidentiality, data protection, the General Data Protection Regulation (GDPR) and freedom of information (FOIA).