ChatGPT causing more than 1.3 million data breaches per year in the UK
24th March 2025
Carrying out a Google search is second nature to us, and typing a question into ChatGPT is quickly becoming the new norm. For many of us, AI is transforming the way we work, but few people realise that every piece of information sent to ChatGPT is stored by OpenAI, and this data often contains sensitive information.
It is estimated that people in the UK send 1.3 million pieces of sensitive data to ChatGPT every year. This ranges from client names to riskier information such as account numbers and credentials that could be used to hack into accounts. Exposure like this has led companies such as JP Morgan to ban ChatGPT outright, but just as blocking Google wouldn't stop people searching the internet, a more nuanced approach is needed.
At Bespoke, we have developed a way to harness the power of AI tools such as ChatGPT while ensuring client data is protected. We call this method Encrypted Queries.
It works by running code that identifies sensitive pieces of information, then encrypts and anonymises them before the query leaves our systems. The encrypted query is sent to a third-party tool such as ChatGPT, and once the response comes back, we decrypt it internally, so the original data never reaches the external service.
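To make the idea concrete, here is a minimal sketch of that redact-query-restore pattern in Python. It is an illustration only, not Bespoke's actual implementation: it assumes a simple regex-based detector, substitutes random placeholder tokens (pseudonymisation rather than true encryption), and abstracts the external AI call behind an ask_model callable so no particular API is assumed.

```python
import re
import uuid

# Hypothetical detection rules for illustration. A production system
# would use a proper PII/NER detector, not two regexes.
PATTERNS = {
    "ACCOUNT": re.compile(r"\b\d{8}\b"),                  # e.g. UK bank account numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def anonymise(text):
    """Replace each sensitive match with an opaque placeholder token.

    Returns the redacted text plus the mapping needed to reverse it.
    """
    mapping = {}

    def substitute(kind):
        def repl(match):
            token = f"<{kind}_{uuid.uuid4().hex[:8]}>"
            mapping[token] = match.group(0)
            return token
        return repl

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(substitute(kind), text)
    return text, mapping

def deanonymise(text, mapping):
    """Restore the original values inside the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

def encrypted_query(prompt, ask_model):
    """ask_model is any callable that sends a string to an external
    LLM (e.g. ChatGPT) and returns its reply as a string."""
    safe_prompt, mapping = anonymise(prompt)
    reply = ask_model(safe_prompt)  # only redacted text leaves the network
    return deanonymise(reply, mapping)
```

A hypothetical call might look like encrypted_query("Summarise the dispute over account 12345678 for jane@acme.co.uk.", ask_model=send_to_chatgpt), where send_to_chatgpt wraps whatever API the company uses. In practice, the placeholder format would also need to be chosen so that the language model reliably preserves the tokens in its response.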
This allows clients to leverage the power of AI tools while staying safe, secure and compliant with GDPR. It could also open the door to an even more secure process, in which data is encrypted before internal staff ever see it. For example, if a company were working on an extremely sensitive account, we could build a tool that encrypts the account name and any other sensitive details, so that any staff coming into contact with the account see only anonymised data, keeping the process secure even within internal communications.
It's estimated that around 8% of staff currently use generative AI tools such as ChatGPT at work, and many do so without telling their employer. This lack of transparency is what fuels these data breaches: by the time a company realises its data has been compromised, it is often too late. Technology in the AI space is developing rapidly, and internal policies need to move just as fast to keep up.
We believe that Encrypted Queries could be a step towards companies letting their staff use tools such as ChatGPT without the fear of sending sensitive data and breaching contracts, resolving the liability issues these workflows can create. We don't need to be scared of AI, but we do need to be diligent. Taking steps towards more secure searches and better ways to protect our data will make the technology safer and more accessible for everyone.
Paul Willis
Head of Product, Bespoke