
Protecting the generative generation: reducing the impact of AI hackers

[Image: A Terminator T-800 robot in a newsroom]

AI is everywhere. Generative AI tools in particular have infiltrated all aspects of the professional world, and deepfake images are increasingly becoming an issue for media outlets.

Realistically, it won’t be long before a whole generation is considered ‘AI native’. I’m 33 and there’s an argument that I qualify: I first watched The Terminator aged seven and ended up afraid of the red light on the VCR opposite my bed, because it resembled the titular character’s eye. Clearly, artificial intelligence was impacting my way of life from a very early age!

Much like The Terminator though, AI isn’t solely being used for good. So-called backdoor attacks have been reported for a year or so now, in which hackers insert malicious code or hidden triggers into an AI model to sabotage systems and even steal data.

Poisoned AI

The National Institute of Standards and Technology (NIST) has warned that hackers are also now aiming to “poison” AI systems by targeting them during the training phase. In a poisoning attack, cybercriminals tamper with an AI system while it is still learning, feeding it corrupted data to shape how it behaves.

The results? A chatbot that has been trained not to inform and educate, but to drive a very specific, often political, agenda with heavily biased responses. In a year with more than 60 countries due to hold elections, and over half the world’s population voting, this could be catastrophic.
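In miniature, a label-flipping poisoning attack looks something like this. The sketch below uses a deliberately tiny, invented classifier (nearest centroid on 2D points) and made-up data; real attacks target vastly larger training pipelines, but the principle is the same: corrupt the labels during training and the model learns the attacker’s version of reality.

```python
# Toy sketch of a data-poisoning ("label-flipping") attack on a minimal
# nearest-centroid classifier. All data and labels here are invented
# purely for illustration.

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(samples):
    # samples: list of ((x, y), label) pairs -> one centroid per label
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    # classify by nearest centroid (squared Euclidean distance)
    return min(model, key=lambda lbl: (model[lbl][0] - point[0]) ** 2
                                      + (model[lbl][1] - point[1]) ** 2)

clean = [((0, 0), "safe"), ((1, 0), "safe"), ((0, 1), "safe"),
         ((5, 5), "spam"), ((6, 5), "spam"), ((5, 6), "spam")]

# The attacker flips the labels on some "spam" training points.
poisoned = [((0, 0), "safe"), ((1, 0), "safe"), ((0, 1), "safe"),
            ((5, 5), "spam"),                      # one genuine label survives
            ((6, 5), "safe"), ((5, 6), "safe")]    # flipped by the attacker

query = (3, 3)
print(predict(train(clean), query))     # -> spam
print(predict(train(poisoned), query))  # -> safe (the poisoning worked)
```

The model trained on poisoned data now waves through an input the clean model would have flagged, and nothing in the code itself changed: only the training data did.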

There is also a growing trend of evasion attacks on AI systems that are already live. NIST’s report referenced the potential harm that could be done by tricking the AI within autonomous vehicles into misinterpreting road signs, endangering road users and pedestrians alike.
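Unlike poisoning, an evasion attack leaves the trained model alone and instead perturbs the input just enough to push it over the model’s decision boundary. The sketch below uses an invented linear scorer standing in for a perception system; the weights, inputs and labels are all hypothetical.

```python
# Toy sketch of an evasion attack: nudge an input just enough to cross
# a fixed model's decision boundary. The "model" is an invented linear
# scorer, not any real perception system.

def classify(features, weights, bias):
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "stop_sign" if score > 0 else "speed_limit"

weights, bias = [1.0, -0.5], -1.0

original = [2.0, 1.0]   # score = 2.0 - 0.5 - 1.0 = 0.5 -> "stop_sign"

# The attacker shifts the input slightly in the direction that lowers
# the score: a change small enough to look unremarkable to a human.
perturbed = [1.6, 1.4]  # score = 1.6 - 0.7 - 1.0 = -0.1 -> "speed_limit"

print(classify(original, weights, bias))   # -> stop_sign
print(classify(perturbed, weights, bias))  # -> speed_limit
```

A small, targeted tweak flips the model’s answer even though the model itself was never touched: the same idea, scaled up, is what makes a subtly altered road sign dangerous.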

The most worrying aspect of all of this is that NIST’s report states there is currently no way of fully combating these attacks. There are mitigations, but no way to guarantee an AI system is completely secure.

AI is here to stay

While this may sound somewhat doom and gloom, AI isn’t about to become a thing of the past. It’s doing too much good in the world - driving productivity in businesses of all sizes, and streamlining access to data and insight for anyone with an internet connection. 

This is visible in the amazing work some of our clients do as well, from AI-driven programmatic advertising that supercharges audience targeting, to helping consumers find their next favourite outfit with visual searches. We’re currently working on a visual AI project for one of our clients that will blow you away - watch this space!

Critically evaluating insight

Most of you reading this will mainly use generative AI tools, so it’s key to keep these behind-the-scenes risks in mind. Whether you’re searching for a killer stat to use in your next sales and marketing presentation, or you need inspiration for a creative idea, don’t take every response at face value. Just as you would with a news headline on social media, verify your data before using it.

So, here are three simple tips to reduce the likelihood of falling foul of erroneous AI content:

  1. Cross-reference: when generative AI references news as fact, always verify this against reputable media publications. If the only websites referencing the so-called story are ones you’ve never heard of, it’s likely not credible information.

  2. Find the source: generative AI regularly cites third-party data and should reveal the source. Since it is trained on data scraped from the internet, you should be able to locate that data yourself. If it isn’t available anywhere, it may be an AI fabrication.

  3. Think critically: always critically evaluate what you see. News, data and imagery can now be created using AI, and even journalists have been caught out by high-quality deepfakes in recent times. If something seems too good (or wild) to be true, it usually is.
