RBH and AI: A way forward

At RBH, we’re always looking to adopt new tools and innovations early for the benefit of our clients, and AI is no exception. What sets it apart from previous technological advances is its availability: you no longer need specialist equipment or knowledge to access it. The challenge (again, nothing new) is to harness the opportunity it creates, understand the issues, and use the technology to expand our creative and marketing capabilities while improving the processes behind them.

The speed of development is another unique challenge, making assessment of AI an ongoing, fluid process. Over the last three months alone, however, we have run more than 50 separate trials of AI software and platforms, which has allowed us, together with our insight and innovation partner, to draw some thoughts together and develop our own way forward, at least for now…

What types of AI are there?

As we know, AI has been around for years, and we’ve all been using it without really being aware of it, in tools as commonplace as online booking platforms and chatbots. These are examples of Performative AI – programs created to complete a specific task. There is another type, and it is largely the reason AI is being talked about so much recently – Generative AI. These are programs designed to be creative and to generate original content. This is where we see AI producing imagery, writing copy and making music.

A huge volume of new generative AI software has exploded onto the scene in the past year or so, with some of the best known being language models such as ChatGPT. ChatGPT is a language model presented as a chatbot, which uses previous conversations and contextual input to generate natural back-and-forth dialogue. The more people use the program, and the more frequently, the more sophisticated it becomes – this is how it learns. The model has become notorious in academia, amid worries that students can generate whole essays or assignments from just a short prompt. In response, it is becoming commonplace for universities to run submitted essays through AI-detection software (ironically, an AI itself), similar to plagiarism detection, before marking.

Image-based generative AIs have also become more popular recently, with programs such as DALL-E generating almost any kind of image from a short text prompt. You could ask it to create images of a panda standing on a crocodile, eating a burger, and it would return several images in slightly different styles.

Haven’t we seen shifts like this before?

Despite recent advances in its sophistication, generative AI is not a new idea. The industry has seen shifts like this before – for example the 2010 release of Photoshop CS5, which introduced content-aware fill. Content-aware fill uses AI to select the best replacement pixels from your image and blend them to fill a selected area (the technology behind Photoshop’s healing tools). It was embraced by users at all levels to make their jobs easier and achieve results previously unattainable, without any fear of it replacing them. Yet today, replacement is one of the industry’s biggest fears around AI.

How is AI changing the creative marketing industry?

Where we use AI in the creative process to maximise its potential will vary depending on the project. For example, using AI as a jumping-off point when gathering initial ideas and thought-starters, then building on what it generates, saves time on research and brainstorming, increases the breadth of what we consider, and gives marketers more time to focus on strategic tasks.

AI is looking more and more game-ready, but where is it learning from?

AI is an environment rife with complicated-sounding terms. As you’d expect, the jargon often masks relatively simple meanings. One of these is ‘machine learning’, a term we’ve been hearing since the dawn of programmatic display around a decade ago (the automated buying and targeting of online ad space), so it’s nothing new in itself. Machine learning essentially refers to any calculation that decides one option is better than another – it is the method AI uses to learn over time. That could mean choosing a better or worse animated caterpillar for you, based on which one you like or are more likely to download. In the generative AI models you’re most familiar with, the process starts by feeding large amounts of data to a model and letting it find patterns and make predictions, with a human helping in the early stages by tweaking the program towards more accurate results. It is the same process that powers your Netflix recommendations, sorts your Instagram feed and filters out your spam emails.
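The feedback loop described above can be sketched in a few lines of code. This is a deliberately minimal illustration (not how any real recommendation system is built): the options, scores and update rule are all invented for the example, but the principle – nudge a score up when users pick an option, down when they don’t, then favour the highest-scoring option – is the essence of learning from feedback.

```python
# Minimal sketch of "learning from feedback" – names and data are invented.

def update_score(score, clicked, rate=0.1):
    """Nudge an option's score towards 1 when chosen, towards 0 when not."""
    target = 1.0 if clicked else 0.0
    return score + rate * (target - score)

# Two candidate "animated caterpillars"; the system starts with no preference.
scores = {"caterpillar_a": 0.5, "caterpillar_b": 0.5}

# Simulated user feedback: most users pick option B.
feedback = [
    ("caterpillar_a", False),
    ("caterpillar_b", True),
    ("caterpillar_b", True),
    ("caterpillar_a", False),
    ("caterpillar_b", True),
]

for option, clicked in feedback:
    scores[option] = update_score(scores[option], clicked)

# After learning, the system now prefers the option users chose.
best = max(scores, key=scores.get)
print(best)  # caterpillar_b
```

Real systems work with millions of signals rather than five clicks, but the shape is the same: data in, a measure of better-versus-worse, and scores that shift over time.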

So, where AI learns from ultimately depends on which data humans feed it or allow it to access, which means it is typically open to the same biases that humans are. For example, when we asked an AI to create images of a ‘creative director’, it generated multiple images of very similar-looking white men with glasses – not a single woman or person of another ethnicity. When we asked it to generate images of a ‘nurse’, it produced images of young white women. Although AI itself has no prejudices, it inherently takes on the prevalence and preferences of the data it learns from.
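The mechanism behind this is easy to demonstrate. In the sketch below, the ‘training data’ is a made-up set of image captions skewed exactly as described above; a generator that samples in proportion to what it has seen will reproduce that skew almost every time. Everything here is invented for illustration, but it shows why a model with no prejudice of its own still mirrors an unbalanced dataset.

```python
# Toy illustration of a generative model inheriting its training data's bias.
# The "dataset" below is invented purely for this sketch.
import random
from collections import Counter

# Imagine scraped captions for images tagged "creative director".
training_captions = (
    ["white man with glasses"] * 90
    + ["woman at a whiteboard"] * 8
    + ["man sketching storyboards"] * 2
)

counts = Counter(training_captions)

def generate(rng):
    """Sample an output in proportion to how often it appeared in training."""
    captions = list(counts.keys())
    weights = list(counts.values())
    return rng.choices(captions, weights=weights, k=1)[0]

rng = random.Random(42)
samples = [generate(rng) for _ in range(100)]

# The dominant training image dominates the output too.
print(Counter(samples).most_common(1)[0][0])
```

The fix is not in the sampling code but in the data: rebalance what the model sees and the outputs rebalance with it.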

Take Google Bard, for example. This popular and sophisticated AI gathers much of its knowledge from Google platforms such as Google Search, Google Maps and YouTube. Since its beginnings, YouTube has generally been dominated by Western content creators, with channels like Mr Beast and PewDiePie leading the pack for a significant time; seven of the ten most-subscribed YouTube channels in 2023 are US-based. It therefore makes sense that Bard gives Western-biased answers. As of August 2023, however, the Indian record label T-Series holds the top spot with over 247m subscribers, posting Bollywood music primarily in Hindi. One channel is unlikely to turn a whole AI program on its head, but if the shift towards Eastern content continues, it is possible that Bard will begin to tilt its responses that way too.

In contrast, ChatGPT is more responsive to different cultures: ask the same question in two different countries and you can get two completely different answers, depending on cultural norms, values and methods of communication. This program is therefore likely to remain consistent whichever media becomes more popular, as it is sophisticated enough to understand cultural patterns and differences.

The downfalls of AI in creative and marketing

Generally, AI learns from the information, writing, images and art it can find on the internet. The problem is that most media on the internet isn’t open source. An open letter signed by more than 8,500 writers, organised by the Authors Guild, has called for AI companies to stop using authors’ work for training without proper compensation or authorisation: their work has been fed into these models with nothing in return. Generative AI is compounding the problem, as the internet becomes saturated with AI-generated writing based on other authors’ work. Amazon has recently had to take action against AI ‘authors’ whose generated works were filling up the bestseller lists on its site.

In January 2023, Getty Images filed a complaint against Stability AI (the company behind an image-generation AI) for using millions of its images as training input without authorisation. As humans, we can easily recognise that a watermark is not part of a photograph or piece of artwork. AI, however, cannot make that distinction: the watermark appears on millions of images in the same place and in the same style, so the model treats it as an integral part of images in that style.

We have seen many uses of the Firefly beta in Photoshop that would make it an invaluable tool for creatives and marketers alike. Imagine a client provides an image for a magazine advert, but the image is landscape while the magazine page is portrait. Using these new features, you can extend the image upwards, with AI generating new content to fill the blank space and so create a portrait advert. However, the beta is not licensed for commercial use, meaning that if you extend an image or use generative fill in Firefly, you legally cannot use the result in a commercial environment.

The use of AI commercially is a very controversial topic right now, centred on the fact that many of these programs are ‘black box’ systems: nobody, including the programmers, knows exactly how the AI gets from one step to the next. It is fed input data and generates output data, but what happens in between is not fully understood. That means we do not know exactly how these programs are using the content they have been fed to create new work – so can it really be classed as ‘fair usage’?

This is precisely why RBH have not been posting relentlessly on social media about generative AI as the ultimate answer to creativity for every client and every brief.

Moving forward, how do we want to use AI?

The world of AI is ever-changing and, as we’ve already discussed, we’ve been using AI in one form or another across the agency for a long time (since Photoshop still came on CDs!). But when it comes to client output and generating the best solution to a brief, we are looking at AI for very specific use cases, not as the whole solution.

One example is our enhanced research methodology. Combining Bard with Bing and ChatGPT, we can efficiently broaden our source data, scouring articles across the internet and summarising long-form research before our team analyses the findings and draws conclusions. This helps us better understand both consumer trends and insights. Another is our combination of Khroma and Fontjoy to visualise colour palettes and pair them with typefaces that either contrast or work in sympathy.

You’ll notice trends in the tools we talk about in this piece. We only use tools that give us some measure of control, with transparency about what they have learnt from and what their output is likely to look like – and we use them early in the process, to enhance ideas, not to generate the final rendition of our creative.

There are tools that will be impactful across many areas of business for creative marketing professionals. They are not yet as ‘market ready’ as their advertising suggests, but we will be implementing many across our workstreams in the coming months, with training across the agency and for our clients to better understand which workflows, efficiencies and outputs will be affected by these uses of language and visual models.

However, we stand by our AI mission statement: RBH will never allow our client output to be generated by AI. Not until we have full control over the learning model and the input will we be wholly happy with the output.

Written by Joe Hepburn

Head of Insight at RBH Creative Communications. Helping your brand work smarter.
