AI-generated fake content could unleash a virtual arms race

This article is part of a VB special issue. Read the series here: Power in AI.


When it comes to AI’s role in making online content, Kristin Tynski, VP of digital marketing firm Fractl, sees an opportunity to boost creativity. But a recent experiment in AI-generated content left her a bit shaken. Using publicly available AI tools and about an hour of her time, Tynski created a website that includes 30 highly polished blog posts, as well as an AI-generated headshot for the non-existent author of the posts. The website is cheekily called ThisMarketingBlogDoesNotExist.com.

Although the intention was to generate conversation around the site’s implications, the exercise gave Tynski a glimpse into a potentially darker digital future in which it is impossible to distinguish reality from fiction.

Such a scenario threatens to topple the already precarious balance of power between creators, search engines, and users. The current flow of fake news and propaganda already fools too many people, even as digital platforms struggle to weed it all out. AI’s ability to further automate content creation could leave everyone from journalists to brands unable to connect with an audience that no longer trusts search engine results and must assume that the bulk of what they see online is fake.

More troubling, the ability to weaponize such tools to unleash a tidal wave of propaganda could make today’s infowars look primitive, further eroding the civic bond between governments and citizens.

“What is alarming to me about this new era of high-quality, AI-generated text content is that it could pollute search engine results and clog the internet with a bunch of garbage,” she said. “Google could have a difficult time figuring out if [content] was … mass-generated. Even if it is possible for Google to do it, the time and the resources it would take to incorporate this into search would be difficult.”

AI versus artists

The intersection between AI and creativity has been expanding rapidly as algorithms are used to create music, song lyrics, and short fiction. The field compels attention because we like to believe that emotions and creativity are primal urges that define aspects of our humanity. Using machines to replicate these qualities is an intriguing technical challenge that brings us a step closer to bridging the human-machine divide while sending some into an existential quagmire.

Earlier this year, OpenAI stepped squarely into this battlefield when it announced it had developed language software so fluent it could nearly match human capabilities in producing text. Worried that the model would be abused to unleash a flood of fake content, OpenAI initially declined to release the full version.

This was simply catnip to other developers, who raced to create equivalents. Among them were two master’s students at Brown University, Aaron Gokaslan and Vanya Cohen. The pair said they managed to create a similar tool even though they didn’t possess particularly strong technical skills. That, of course, was their point: Virtually anyone could now create convincing AI-powered content generation tools.

Gokaslan and Cohen took issue with OpenAI’s decision not to release its tools because they felt access to the technology offered the best hope for constructing defensive measures. So they published their own work in protest.

“Because our replication efforts are not unique, and large language models are the current most effective means of countering generated text, we believe releasing our model is a reasonable first step toward countering the potential future abuse of these kinds of models,” they wrote.
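Their logic is statistical: the models best at generating fluent text are also well suited to recognizing it, because machine-written prose tends to look unusually predictable to a strong language model. As a rough, generic illustration of that idea (this sketch uses the public GPT-2 model via the Hugging Face transformers library; the cutoff value is hypothetical, not anyone’s production detector):

# Rough sketch: machine-written text tends to look "too predictable" to a
# language model, so low perplexity is a weak signal of generation.
# Uses the public GPT-2 model via Hugging Face transformers; the cutoff
# below is purely illustrative, not a calibrated detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # How surprised is GPT-2 by each token of this passage, on average?
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

sample = "Instagram Stories first made people's feeds sleeker and more fun."
if perplexity(sample) < 40.0:  # hypothetical threshold
    print("Suspiciously fluent -- this may be machine-generated.")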

This disclosure philosophy is shared by the Allen Institute for Artificial Intelligence and the University of Washington, which together created Grover, a tool to detect fake news generated by AI. They posted the tool online to allow people to experiment with it and see how easy it is to generate an entire article from just a few parameters.

Grover was the tool Tynski used in her experiment.

Reality or illusion?

Fractl touts itself as a one-stop shop for organic search, content marketing, and digital PR strategies. To that end, Tynski said the company had previously experimented with AI tools to help with tasks such as data analytics and some limited AI content creation that formed the basis for human-created content.

“We’re incredibly excited about the implications of how AI could support high-quality content — to parse data and then help us tell stories about that data,” she said. “You could see where AI-generated text could be used to supplement the creative process. To be able to use it as a starting point when you’re stuck, that could be a huge boon to creatives.”

Then she paused before adding: “Like any of these technologies, there are implications for nefarious purposes.”

The SEO and content marketing industry has grown increasingly complex in recent years. Creating content that feels authentic is harder when the internet is bombarded by social media bots and by overseas click farms, where low-paid workers bang out copy for pennies, to say nothing of the rise of video “deepfakes.” But as Tynski has previously written, when it comes to AI, “our industry has yet to face its biggest challenge.”

To explore those dangers, Fractl wrote 30 headlines and fed them to Grover. In a blink, it spat out extremely fluent articles on “Why Authentic Content Marketing Matters Now More Than Ever” and “What Photo Filters are Best for Instagram Marketing?” The latter reads (in part):

Instagram Stories first made people’s Instagram feeds sleeker, more colorful and just generally more fun. They could post their artistic photos in the background of someone else’s Story — and secretly make someone jealous and/or un-follow you while doing it.

That post-publishing feature still makes for some very sweet stories, particularly when you show a glam shot of yourself, using your favorite filter. And that’s why the tech-focused publication Mobile Syrup asked a bunch of Insta artists for their faves. (You can check out the full list of their best Instagram Stories.)

It’s not Shakespeare. But if you stumbled across this after a search, would you really know it wasn’t written by a human?

“It works in that voice really well,” Tynski said. “The results are passable to someone just skimming. It sets up the article, it made up influencers, it made up filter names. There’s a lot of layers to it that made it very impressive.”
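None of this requires Grover specifically. As a minimal sketch of the same headline-to-article trick (using the openly available GPT-2 model through Hugging Face’s pipeline API as a stand-in; the sampling settings are illustrative), a few lines of Python suffice:

# Minimal sketch of headline-conditioned article generation.
# GPT-2 stands in for Grover here; Grover itself is a separate model
# conditioned on structured fields such as headline, outlet, and date.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

headlines = [
    "Why Authentic Content Marketing Matters Now More Than Ever",
    "What Photo Filters are Best for Instagram Marketing?",
]

for headline in headlines:
    # Sample a continuation of each headline as the article body.
    result = generator(headline, max_length=200, do_sample=True, top_p=0.9)
    print(result[0]["generated_text"], "\n")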

The stories are all attributed to a fictional author named Barry Tyree. Not only is Barry not real, neither is his photo. The image was generated with StyleGAN, a neural network Nvidia developed and trained on a massive data set of photos of faces. Uber software engineer Philip Wang built on Nvidia’s work to launch ThisPersonDoesNotExist.com, where anyone can generate such portraits.
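The generation step itself is only a few lines. Below is a condensed sketch based on the pretrained_example.py script in Nvidia’s open source StyleGAN repository; it assumes the repo’s dnnlib package is on the path and that the published FFHQ checkpoint (the .pkl file named below) has already been downloaded locally:

# Condensed from pretrained_example.py in the NVlabs/stylegan repo
# (TensorFlow). Assumes dnnlib is importable and the published FFHQ
# checkpoint has been downloaded locally as the .pkl below.
import pickle
import numpy as np
import PIL.Image
import dnnlib.tflib as tflib

tflib.init_tf()
with open('karras2019stylegan-ffhq-1024x1024.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)  # Gs: long-term average of the generator

# Each random latent vector maps to a unique, photorealistic face.
latents = np.random.randn(1, Gs.input_shape[1])
images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True,
                output_transform=dict(func=tflib.convert_images_to_uint8,
                                      nchw_to_nhwc=True))
PIL.Image.fromarray(images[0], 'RGB').save('fake_person.png')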

The combination is powerful in that it puts these tools within just about anyone’s reach. Proponents argue that this kind of advance further democratizes content creation. But if past is prologue, any potential benefits will likely be turned to darker purposes.

“Imagine you wanted to write 10,000 articles about Donald Trump and inject them with whatever sentiment you wanted?” Tynski said. “It’s frightening and exciting at the same time.”

Closer to home, Tynski is worried about what this means for her company and its industry. The ability to help companies and clients market themselves and connect with customers already resembles low-level warfare as Fractl tries to stay current with Google search changes, new optimization strategies, and constantly evolving social media tools. With search and social driving so much discovery, what happens if users no longer feel they can trust either?

On a broader level, Tynski recognizes the potential for AI-generated content to further tear at our already frayed social fabric. Companies like YouTube, Facebook, and Twitter already seem to be fighting a futile battle to stem the tide of fake news and propaganda. They’re using their own AI and human teams in the effort, but the bad guys still remain well ahead in the race to distract, disinform, and divide.

Amid all this, one thing is certain: We will need increasingly better tools to help us separate real from fake, and more human gatekeepers to sift through the rising tide of content.
