11 Sep 2024

A Look at Advai’s Assurance Techniques

In the absence of standardisation, it is up to present-day adopters of #ArtificialIntelligence systems to do their best to select the most appropriate assurance methods themselves.

Here's an article covering a few of our approaches, with some introductory commentary on the UK Government's drive to promote transparency across the #AISafety sector.

Words by
Alex Carruthers
Assurance Techniques

Aye Aye AI podcast!

Our very own Chris Jefferson and Matt Sutton were guests on the latest episode of the Aye Aye AI podcast!


In the episode, they delve into the critical security threat of indirect prompt injection—a vulnerability that allows attackers to manipulate GenAI systems using malicious instructions embedded in data like emails or documents.


From the risks of disinformation, phishing, and denial of service to strategies for mitigating these challenges, Chris and Matt share invaluable insights. They also discuss how the integration of large language models (LLMs) into organisational systems expands the attack surface—and why strong safeguards are essential.


Key takeaway: RAG (Retrieval-Augmented Generation) is powerful, but without the right protections, it’s vulnerable.
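To make the RAG risk concrete, here is a minimal, hypothetical sketch of where an input check could sit in a RAG pipeline. The pattern list, function names, and prompt format are all illustrative assumptions, not Advai's method; real defences (content classifiers, privilege separation, output monitoring) are considerably more involved.

```python
import re

# Hypothetical, naive filter: flag instruction-like phrases in retrieved
# text before it is inserted into an LLM prompt. This only illustrates
# where such a check would sit in a RAG flow, not a production defence.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard .* instructions",
    r"you are now",
    r"system prompt",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the retrieved text contains instruction-like phrases."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

def build_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Assemble a RAG prompt, dropping documents that look tainted."""
    safe_docs = [doc for doc in retrieved_docs if not looks_like_injection(doc)]
    context = "\n---\n".join(safe_docs)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Simple keyword filtering like this is easy to evade, which is precisely why the episode stresses layered safeguards rather than any single check.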



Please have a listen

Not on YouTube? You can also find the episode on these platforms:

Apple Podcasts | Spotify