
Is it real? AI increases April Fools risk for enterprise brands


A rebrand to “Voltswagen.” Trader Joe’s closing its stores. An email confirming a $750 food delivery.

The range of April Fools’ Day marketing pranks gone wrong is as varied as their reception. Met with everything from smiles and social media shares to confusion, derision, or even fury and falling stock prices, the playful promotional tactic is a gamble that can attract customers to a brand as quickly as it can sour them on it.

“One person’s humor is another person’s offense,” said Vivek Astvansh, a marketing professor at McGill University.

As April 1 approaches, consumers would do well to be extra skeptical, with experts saying artificial intelligence is increasing the potential for high-tech promotional ploys. Whether it’s generative text-to-video tools that conjure rich scenes from hasty instructions or chatbots that churn out countless ad ideas on command, AI is raising new questions of authenticity and could make it even harder to distinguish between jokes, facts and deepfakes.

“In the coming days, we will see many advertisements powered by GPT-4 or other generative AI tools,” Astvansh said, referring to the most recent version of OpenAI’s popular ChatGPT program.

Even before the AI breakthroughs of the past 16 months – OpenAI launched ChatGPT in November 2022 – the technology’s ability to transcend human capabilities played a role in corporate pranks.

On April 1, 2019, Google announced that it had figured out how to communicate with tulips in their own language, “Tulipish.” It offered translations between the perennial plant’s petals and dozens of human languages, citing “great advances in artificial intelligence.” The video ended by noting that Google Tulip would only be available that day, leaving little doubt that it was a joke.

But past misfires suggest that future confusion could arise, amplified by the capabilities of AI.

As April 1, 2021 approached, Volkswagen AG issued a press release stating that its U.S. division would change its name to “Voltswagen.” Several media outlets reported the statement, despite some doubts about its authenticity. The confusion that greeted the announcement deepened when the company told reporters who asked if it was an April Fool’s joke that the auto giant was serious – only to admit the ruse a few hours later.

The joke fell flat like an old tire in the wake of Volkswagen’s “diesel dupe” scandal a few years earlier, when U.S. authorities discovered that the company had installed software on more than half a million cars that allowed them to cheat on diesel emissions tests.

Other April Fool’s Day ruses that backfired include Yahoo News falsely reporting in 2016 that Trader Joe’s would close all of its 457 stores within a year, and British online food delivery company Deliveroo sending its customers fake $750 order confirmation emails in 2021, leading thousands to believe their accounts had been hacked.

Now, the ready accessibility and low cost of using many AI tools opens the door to more companies deploying the technology, including for April Fool’s fun that could go wrong.

“GPT-4 can instantly create content for multiple ad campaigns, which can be videos or still images. And then, in a very short period of time and with very little expense or investment, the internal advertising team or the marketing team can sift through the results that GPT-4 would have generated,” Astvansh said. All that remains is to select one, polish it with a few changes and publish it.

To guard against deception, Astvansh said disclosure of methods and intentions will be essential, especially on April 1.

“I hope they state or give information in their content that the original idea or the original content was created by a generative AI tool,” he said.

Digital watermarking – embedding a pattern into AI-generated content to help users distinguish real images from fakes and identify who owns them – is one such disclosure method.

“It’s basically about making sure that the images or videos produced by these platforms are labeled in a way that when they subsequently appear on the internet, labels are put on them so that… users know that what they see is AI,” said Sam Andrey, executive director of Dais, a public policy think tank at Toronto Metropolitan University.

The technology’s potential for deception is already well established. Witness scams that mimic a loved one’s voice to convince a relative to transfer money to fraudsters, or recent robocalls that impersonate high-profile political figures. Combine them with sophisticated images or digitally generated characters and you have the potential for large-scale deception, including from commercial actors.

“Just a year ago it was more cartoonish,” Andrey said of AI-created graphics.

“If it generates normal, harmless media and reduces production costs, it’s less of a concern,” he said — for example, if AI had been applied to Tim Hortons’ square Timbits, Ikea Canada’s meatball vending machines or Jeep Canada’s full flannel interior, advertised as “keeping you as comfortable as a lumberjack in the Canadian wilderness.” These were all April Fool’s pranks from last year.

“But we shouldn’t use AI to deceive people,” Andrey said.

This report by The Canadian Press was first published March 30, 2024.

