Both DALL·E 3 and Google’s Search Generative Experience were launched within a few weeks of each other this autumn. In this article, I take a closer look at these latest AI image generators and explore the wider implications – both good and bad.
AI.
It’s not going away, is it? If 2023 is remembered for anything, it'll be as the year artificial intelligence truly exploded into the mainstream. When people aren’t using it, they’re talking about it. And when they’re not talking about it, they’re reading about it online or telling their kids off for using it to write their school essays.
According to Ahrefs, AI poster boy ChatGPT was the top trending topic in the US back in June 2023. As we await the end-of-year roundups, it's a safe bet that ChatGPT and AI will clinch all the top spots across the board. It’s the new Alexa, the new Little Black Dress… heck, it’s the new Taylor Swift.
But just as the first quarter of the year saw AI chat tools fighting it out, the final quarter appears to be all about the Battle of the Image Generators. At the back end of September, Microsoft announced that OpenAI’s DALL·E 3 would be integrated into Bing Chat. Not to be outdone (and evoking memories of its knee-jerk launch of Bard), Google promptly introduced its own contender, Search Generative Experience (SGE), a few weeks later.
While it would be remiss of me not to give a nod to the other AI image generator tools out there such as Canva, Midjourney and DeepAI, this blog will focus predominantly on the two new kids on the block – SGE and DALL·E 3 – and explore not only their impact on the digital landscape, but why not everyone will be welcoming this burgeoning technology with open arms.
What is DALL·E 3 and how does it differ from DALL·E 2?
DALL·E 3 is the latest upgrade in OpenAI’s DALL·E series, succeeding DALL·E 2 and the original DALL·E, which was introduced back in January 2021. Chances are you’ll have come across DALL·E at some point, but for the uninitiated, it’s an AI system that can create realistic images from a simple text prompt.
DALL·E 3 takes things to the next level by understanding significantly more nuanced and detailed requests than its predecessor, enabling it to better recognise what the user wants to create and translate that idea into often exceptionally accurate images.
Having had the chance to play around briefly with DALL·E 3, I can confirm the difference in quality is pretty astonishing, as I’ve demonstrated below.
As a little experiment, I provided both DALL·E 2 and DALL·E 3 with the same prompt:
‘Create an image of a boy jumping over a basket full of potatoes’
DALL·E 3 produced this as its first image:
While DALL·E 2 generated this:
Don’t have nightmares, folks.
The detail on the first image is incredible. From the dirt on the potatoes to the creases on the boy’s t-shirt, DALL·E 3 has created in seconds an ultra-realistic image that is so far ahead of the DALL·E 2 output in terms of quality, it’s embarrassing.
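If you fancy running a similar side-by-side test yourself, the sketch below shows one way to do it with OpenAI’s Python SDK (an assumption on my part, as my images above came from the chat interface rather than the API). It sends the same prompt to both models and prints a temporary URL for each result.

```python
# Minimal sketch: the same prompt sent to DALL·E 2 and DALL·E 3 via OpenAI's Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()
prompt = "Create an image of a boy jumping over a basket full of potatoes"

for model in ("dall-e-2", "dall-e-3"):
    response = client.images.generate(
        model=model,
        prompt=prompt,
        n=1,                # DALL·E 3 only returns one image per request
        size="1024x1024",
    )
    print(model, response.data[0].url)  # short-lived URL to the generated image
```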
DALL·E 3 is not only a powerful tool for creativity, but also a responsible one. OpenAI has designed it to decline requests that ask for an image in the style of a living artist and has limited its ability to generate violent, adult or hateful content. Creators can also request that their images are omitted from the training of OpenAI’s future image generation models.
Google’s SGE
It’s no secret that Google got caught napping earlier this year, hastily rolling out Bard in the wake of ChatGPT and Bing Chat. In the latest round of ‘Whose multimodal platform is best?’, Google’s new tool promises to ‘create an image that can bring an idea to life’, as demonstrated below:
Hang on…how is this different from DALL·E 3?
The simple answer is… it isn’t, or not to any great extent as far as I can ascertain without having personally used SGE yet (I’m currently stuck on a waiting list, patiently awaiting access). Like DALL·E 3, SGE creates four image variations for every prompt, and from there users can refine the description to tweak the results. Google also promises to ‘block the creation of images that run counter to our prohibited use policy for generative AI, including harmful or misleading content’.
The only discernible differences are that SGE will feature embedded metadata and watermarking to indicate AI creation, and that it is only accessible to users aged 18+, whereas DALL·E 3 can be used by anyone aged 13 and over.
A question of ethics
Every advancement in AI brings with it a Pandora's box of moral conundrums, and imagery is no exception. With a few clicks and a suitable prompt – voila – you've got yourself a piece of art. Want a picture of Bournemouth beach in the style of Monet? No problem. Or a photograph of teddy bears marching down the road, led by a carrot? You got it.
But as we revel in the wonders of AI capabilities, a murkier ethical underbelly emerges. It’s not just a question of copyright infringement, but a deeper exploration into the essence of creativity and the moral tenets that protect and uphold our artistic endeavours.
The likes of DALL·E, SGE and Midjourney are not just tools; they are emblematic of a deeper moral quandary. The ease and accessibility they offer are of course exciting and intriguing. Who wouldn't want to conjure up a Van Gogh- or Picasso-themed design without the hefty price tag of a commissioned work? But herein lies the dilemma. Every pixel generated by AI is a detour from the time, effort, love and skill embodied in the work of a human artist or photographer. It bypasses the journey of creation, with its frustrations, elations and the profound satisfaction of bringing an idea to life.
Shaky legal ground
The legal questions around AI image generation are similarly unclear, especially when it comes to copyright. Under current law, the ‘author’ of a work is its creator, but it's unclear who the author of AI-generated art is: the developer of the AI system, or the user prompting it.
Not all AI-generated works are protected, since copyright requires ‘originality’, a quality traditionally tied to human skill. The UK government is reviewing these issues as the AI landscape evolves, with potential future amendments to copyright protections. Ownership of AI-generated works also varies; for instance, OpenAI lets users own the content generated by DALL·E. With some AI systems, though, there's a risk of infringing existing copyright by reproducing stored or online-accessed images, as seen in the Getty Images vs Stability AI case. As AI continues to dilute art’s authenticity, conversations around a clearer legal framework to safeguard creative property rumble on.
Can AI-generated images be used commercially?
Most AI platforms allow royalty-free commercial use of the images created, as long as they comply with the site’s policies and guidelines. However, as mentioned above, legal concerns have been raised about the true ownership of these images and potential copyright infringement. The models' training datasets are a particular sticking point, as there are concerns they may contain copyrighted material, so users should proceed with caution while the legal landscape remains hazy.
The winners and losers
So who is likely to benefit the most from AI image advancements? And who is busy sticking pins into voodoo dolls of AI image developers as we speak?
Winners:
- Content creators: AI offers a treasure trove of infinite visuals, enabling creators to craft bespoke imagery with ease.
- Architects and interior designers: With instant design inspirations at their fingertips, designers can visualise and iterate their concepts seamlessly, refining and re-imagining until they hit on their perfect design.
- Teachers: AI image generators will help teachers illustrate concepts, ideas and stories in a visual way, making learning more fun and interactive.
Losers:
- Artists: Now that DALL·E and co can churn out art at an unprecedented rate, traditional artists will undoubtedly suffer.
- Photographers: The emergence of AI-crafted photo-realistic images poses a threat to the conventional demand for photography, especially when it comes to stock photos.
- Consumers: While treated to a wider selection of visual content, consumers may find it increasingly difficult to distinguish authentic human-made creations from AI-generated images.
Which category are marketers in?
Definitely the winners' category, provided the technology is used strategically and appropriately.
Here are some recommendations on how marketers can harness the potential of AI image generation:
- Social media content: Keeping up with ever-evolving social media trends is paramount. AI image generators allow swift and simple creation of custom graphics, making it possible to produce a relevant image for every post.
- Online marketing materials and blogs: AI tools can efficiently produce digital assets used in Meta or Google Ads, email visuals and infographics, reducing costs associated with hiring professionals for design tasks.
- Email campaigns: AI enhances email marketing by generating visuals tailored to specific audience segments. For instance, distinct images can be created for regular consumers and business customers, even personalised with the recipient's name (see the sketch after this list).
- Personalised branding: Building a recognisable brand identity is essential. AI image generators can adhere to specific branding guidelines, producing visuals consistent with a company's colour scheme, font and overall aesthetic.
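To make the email segmentation idea above a little more concrete, here's a hypothetical sketch using the same OpenAI images endpoint as earlier; the segment names and prompt wording are purely illustrative, and a real campaign would plug the resulting URLs into its own email templates.

```python
# Hypothetical sketch: one tailored visual per email segment via OpenAI's image API.
# Segment names and prompts are illustrative only; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

segments = {
    "consumer": "A bright, friendly illustration of a family unpacking a delivery at home",
    "business": "A clean, corporate-style illustration of colleagues reviewing a delivery schedule",
}

for segment, prompt in segments.items():
    image = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
    print(f"{segment}: {image.data[0].url}")  # use this URL in the template for that segment
```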
Final thoughts
As we hurtle towards the end of this crazy, AI-drenched year, we're compelled to reflect: what next for AI image generators and the whirlwind of moral, ethical and legal questions they evoke? While tools like DALL·E 3 and SGE offer unparalleled conveniences and advancements, they also challenge the very essence of creativity and authenticity.
I’ll leave you with this. It took Michelangelo four years to paint the Sistine Chapel. It took me 40 seconds to create the masterpiece below. AI art, eh? It’s a sketchy business. I’ll leave you to draw your own conclusions…or get DALL·E to draw them instead.