The UK’s Secretary of State for Science, Innovation and Technology, Peter Kyle, recently confirmed that an AI Bill is coming, most likely in 2025. If passed, it will be the UK’s first piece of legislation focused wholly on AI.

Details of what the Bill will contain are limited. However, it is expected to make legally binding a voluntary agreement signed by AI companies earlier this year, and to give the UK’s AI Safety Institute the level of independence it needs to protect citizens’ rights and liberties.

So where does this leave organisations keen to leverage AI-powered video analytics for improved safety and security risk detection but worried about future compliance? If this sounds like you, here are some details that will help.

1. Leverage tools that will help protect privacy

Since several other UK laws, including the Human Rights Act and the UK GDPR, already apply to aspects of AI, safeguarding privacy is likely to feature in any future AI-specific legislation.

Interestingly, AI itself provides a useful answer to this concern. It can be trained to detect and redact (blur or completely cover) specific details in video footage, such as faces, bodies, vehicles, and licence plates. This means incident reviews, for example in response to suspected criminal activity, can be tightly focused.

Should evidence gained via AI-based video analytics need to be shared with third parties – for example, law enforcement – any individuals or items identified as not pertinent to the investigation can be obscured accordingly.
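To make the redaction step concrete, here is a minimal sketch that pixelates regions of a frame given bounding boxes. In practice the boxes would come from an AI detector (faces, plates, and so on); the function name, parameters, and box format here are assumptions for illustration, not any particular product's API.

```python
import numpy as np

def redact_regions(frame: np.ndarray, boxes, block: int = 16) -> np.ndarray:
    """Pixelate each bounding box (x, y, w, h) in a frame.

    Illustrative sketch: `boxes` stands in for the output of an
    AI detector; real systems use their own detection pipelines.
    """
    out = frame.copy()
    for x, y, w, h in boxes:
        region = out[y:y + h, x:x + w]
        # Replace each small tile with its mean colour: a simple pixelation.
        for by in range(0, h, block):
            for bx in range(0, w, block):
                tile = region[by:by + block, bx:bx + block]
                tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(frame.dtype)
    return out

# Example: obscure one detected region in a synthetic 64x64 greyscale frame.
frame = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
redacted = redact_regions(frame, [(8, 8, 32, 32)])
```

The original frame is left untouched, so the unredacted footage remains available to authorised reviewers while only the redacted copy is shared.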

2. Prioritise transparent and consistent usage

The Data (Use and Access) Bill is another piece of AI-related legislation currently before Parliament.

While not specifically about surveillance applications, it does cover the highly relevant concept of automated decision-making. Broadly, it supports the use of AI in this context, provided safeguards are in place to protect individuals’ rights and to allow for meaningful human intervention. These safeguards will likely include the right to request details of how data, including video footage and biometrics, has been used.

Using AI-based analytics in conjunction with defined workflows can help meet these requirements. Here, AI spots potential issues and presents them to surveillance operators, supporting more accurate decision-making and consistent responses, free from bias or subjectivity. Moreover, software like Synergy ensures that all automated and operator actions are securely logged, delivering a full and transparent audit trail.
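Synergy's internals are not public, so the following is only a minimal sketch of the kind of append-only audit trail described above, pairing automated alerts with operator decisions. All class, field, and actor names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)  # frozen: recorded events cannot be altered
class AuditEvent:
    timestamp: str
    actor: str   # "ai" for automated detections, or an operator ID
    action: str
    detail: str

class AuditTrail:
    """Append-only log of automated and operator actions.

    Illustrative only: production systems add tamper-evident,
    secure storage on top of this basic idea.
    """
    def __init__(self) -> None:
        self._events: List[AuditEvent] = []

    def record(self, actor: str, action: str, detail: str = "") -> AuditEvent:
        event = AuditEvent(
            datetime.now(timezone.utc).isoformat(), actor, action, detail
        )
        self._events.append(event)
        return event

    def export(self) -> List[AuditEvent]:
        return list(self._events)  # return a copy; the log itself stays intact

trail = AuditTrail()
trail.record("ai", "alert_raised", "possible intrusion, camera 4")
trail.record("operator:jsmith", "alert_reviewed", "confirmed; escalated")
```

Because every automated detection and every human decision lands in the same immutable record, the trail can later show exactly who (or what) acted, when, and why.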

3. Always work to the principle of legitimate interests

The European Union (EU) AI Act entered into force in August 2024, becoming the world’s first comprehensive AI law and ‘setting the standard’ for those to come.

While many commentators believe a UK version will not be as stringent as the EU law, the idea of ‘legitimate interest’ exceptions is likely to be replicated. Such exceptions include, but are not limited to, AI being used as part of safety and law enforcement activities that are clearly in the public interest, for example, in “support of the human assessment of the involvement of a person in a criminal activity.”

Proving legitimate interest is another task ideally suited to surveillance technology that delivers clear audit trails. Built-in reporting capabilities that can present data on usage, incident statistics, and trends will also demonstrate that AI is being used for the correct and intended purpose.
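The kind of reporting described above can be as simple as aggregating logged incidents by category and outcome. The record format and category names below are hypothetical, standing in for whatever a surveillance platform actually logs.

```python
from collections import Counter

# Hypothetical incident records as (category, outcome) pairs; in practice
# these would be drawn from the platform's own audit logs.
incidents = [
    ("intrusion", "confirmed"),
    ("loitering", "dismissed"),
    ("intrusion", "confirmed"),
    ("vehicle", "confirmed"),
]

# Count incidents per category, and how many were confirmed overall.
by_category = Counter(category for category, _ in incidents)
confirmed = sum(1 for _, outcome in incidents if outcome == "confirmed")
```

Even simple summaries like these help show that detections fall within the stated purpose, rather than drifting into unrelated monitoring.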

These are just a few areas to consider, and nothing is yet set in stone. However, by addressing factors such as privacy, consistency, and adherence to specified purposes, organisations can be confident their adoption of AI is on the right track for future compliance.

How AI Can Better Protect Busy Public Space Environments

This guide covers everything from the technology you’ll need to real-world applications, helping you understand how AI can become a valuable part of your security and surveillance operations.

Download eBook