
Michelle Donelan announces £8.5 million funding package to combat AI cyber threats

  • Oliver Cole
  • May 22, 2024
  • 2 min read

Today (22/05/2024), at the AI Seoul Summit held in Seoul, South Korea, Michelle Donelan announced that the UK Government will provide £8.5 million in funding to help combat AI cyber security threats, including deepfakes.


The announcement comes in response to the rising use of AI for malicious purposes, such as creating realistic fake images and videos, writing convincing phishing emails, and even producing 'undetectable' malware.


Christopher Summerfield, research director at the AI Safety Institute, stated: "We need to think carefully about how to adapt our infrastructure and systems for a new world in which AI is embedded in everything we do," and later, "This program is designed to generate a huge body of ideas for how to tackle this problem, and to help make sure great ideas can be put into practice."


This funding is designed to help artificial intelligence mature in a way that isn't detrimental to cyber security, and in my opinion it will make a huge difference to the current AI landscape. But it's not just the funding that will make this happen. A press release on the official gov.uk site shows that we aren't simply throwing money at the problem; instead, some of the best and brightest from 10 countries (and growing) are being brought together to develop a safer AI landscape under "a new agreement between 10 countries plus the European Union". The AI Safety Institute began development back in April 2023 but has only recently gained worldwide recognition.


The AISI has already released some impressive work: on 20 May 2024 it showed that it could develop jailbreaks for at least four of the largest LLMs currently available to the public. It tested these models using the publicly available HarmBench question set and a question set developed in-house, finding that in almost 100% of cases the models could be fully compromised within five attempts.

A table showing the success rate for LLM jailbreaks
Source: AI Safety Institute
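
To make that evaluation protocol concrete, here is a minimal sketch of how a "compromised within five attempts" rate could be measured over a question set. The `attempt_jailbreak` helper and its success probability are hypothetical placeholders for illustration only, not AISI's actual testing harness.

```python
import random

# Hypothetical helper: returns True if a single jailbreak attempt elicits a
# harmful completion from the model. A real harness would send the attack
# prompt to a model API and grade the response; here it is stubbed out.
def attempt_jailbreak(model: str, question: str) -> bool:
    return random.random() < 0.6  # placeholder success probability

def compromise_rate(model: str, questions: list[str], max_attempts: int = 5) -> float:
    """Fraction of questions for which the model is compromised
    within `max_attempts` jailbreak attempts."""
    compromised = 0
    for question in questions:
        # Count the question as compromised if any attempt succeeds.
        if any(attempt_jailbreak(model, question) for _ in range(max_attempts)):
            compromised += 1
    return compromised / len(questions)

if __name__ == "__main__":
    demo_questions = ["question 1", "question 2", "question 3"]
    rate = compromise_rate("example-model", demo_questions)
    print(f"Compromise rate within 5 attempts: {rate:.0%}")
```
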

There's currently no word on when, or even how, this issue will be resolved; however, we can be assured that it's a priority for both the organisation and the LLM creators, given the malicious activity it could enable.


This new organisation, along with the "Frontier AI Safety Commitments" recently announced by 16 of the biggest AI companies, will completely reshape AI, hopefully in a positive way.


