How computer vision may impact the future of marketing


When people think about computer vision (sometimes called “machine vision”), they often think of smartphones and autonomous cars.

Snapchat can give you a puppy dog face thanks to facial recognition (a subset of machine vision). Autonomous cars can identify a human walking across a street. But did you know that machine vision also has a role to play in the future of marketing?

In this article, we’ll explore three current applications for computer vision in marketing. It’s important to note that these applications are most likely to be found in retail or broad B2C markets — I covered the reasons for this in my “5-year trends in artificially intelligent marketing” article here on MarTech Today (which may be a useful read for people with a strong interest in AI’s wider marketing applications).

1. Contextual ads/in-image ads

When Google AdSense or Google Display Network is embedded on a site, users will see a text or image ad that’s either (a) relevant to the text on that page, or (b) based on retargeting data of that particular user.

But what about images? As it turns out, there are companies (GumGum is one of them) that can contextually identify what is in an image and display relevant ads on the image itself.

For example, an image featuring playing kittens might be a good place to advertise a cat food brand — or an image of a tropical beach might be a good place to advertise vacation rentals in the Bahamas. One of GumGum’s YouTube videos shows this technology in action in a short highlight reel.

This is a challenging task that hasn’t been possible until relatively recently — thanks to major developments in machine vision in the last two to three years.

“Until very recently, it hasn’t been possible for a computer to get a semantic — that is to say, a human-level understanding of pictures,” machine vision guru Nathan Hurst, a distinguished engineer at Shutterstock, told me. In a recent interview, he explained how past approaches almost always boiled down to tagging images to identify their contents — until engineers built machine learning models that could be trained on massive image data sets.

With algorithms that can distinguish not just a “car” but a “2004 Honda Civic,” and not just a “dog” but a “cocker spaniel,” advertisers can now aim their ads at specific image contexts. An e-commerce business targeting Honda owners can not only bid on branded search terms (in Google AdWords, for example), but might also place ads specifically on images of Honda cars on related websites.
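To make the idea more concrete, here is a minimal sketch of how a pre-trained image classifier could drive that kind of contextual matching: classify the image, then look up the predicted label in an advertiser-supplied category table. This is an illustration only, not GumGum’s actual pipeline; the model choice, the label-to-category mapping and the suggest_ad_category helper are all assumptions.

```python
# Sketch: contextual in-image ad matching with an off-the-shelf classifier.
from PIL import Image
import torch
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# Hypothetical mapping from fine-grained image labels to ad categories.
LABEL_TO_AD_CATEGORY = {
    "tabby": "cat food",
    "cocker spaniel": "dog food",
    "sports car": "auto insurance",
    "seashore": "vacation rentals",
}

def suggest_ad_category(image_path: str):
    """Return an ad category for the image, or None if nothing matches."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)        # shape: (1, 3, H, W)
    with torch.no_grad():
        logits = model(batch)
    label = weights.meta["categories"][logits.argmax().item()]
    return LABEL_TO_AD_CATEGORY.get(label)

# Example: suggest_ad_category("kittens.jpg") might return "cat food".
```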

2. Programmatically generating advertising creatives

The online world is moving to video, with Cisco research predicting that 80 percent of web traffic will come from video by 2019. Because of this trend, not only are major journalistic sites (such as Mic and Verge) pivoting to video, but brands are also aiming to win at video. That isn’t easy, though.

If a sunglasses brand has 100 images of its newest design, how does the company know which of those images should be used to garner clicks or purchases from users on Facebook, Twitter or Pinterest?

Montreal-based Envision.ai is working on applications to parse through myriad image and video options and match the right media to the right user at the right time. Because a certain user or demographic group may change click-through behavior depending on the time of day, an AI system could be trained to adjust advertising media based on these real-time factors.
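As a rough sketch of that idea (and not Envision.ai’s actual system), a simple selector could track click-through rates for each creative by hour of day and serve the best performer, while occasionally exploring so that newer creatives still get impressions. The CreativeSelector class and its method names below are hypothetical.

```python
# Sketch: pick an ad creative based on observed CTR for the current hour,
# with a small epsilon-greedy exploration step.
import random
from collections import defaultdict
from datetime import datetime

class CreativeSelector:
    def __init__(self, creatives, epsilon=0.1):
        self.creatives = creatives                    # e.g. image/video IDs
        self.epsilon = epsilon
        self.stats = defaultdict(lambda: [0, 0])      # (hour, creative) -> [clicks, impressions]

    def choose(self, now=None):
        hour = (now or datetime.now()).hour
        if random.random() < self.epsilon:
            return random.choice(self.creatives)      # explore: show a random creative

        def ctr(creative):
            clicks, impressions = self.stats[(hour, creative)]
            return clicks / impressions if impressions else 0.0

        return max(self.creatives, key=ctr)           # exploit: best CTR for this hour

    def record(self, creative, clicked, now=None):
        hour = (now or datetime.now()).hour
        entry = self.stats[(hour, creative)]
        entry[1] += 1                                 # one more impression
        if clicked:
            entry[0] += 1                             # one more click

# Example usage:
# selector = CreativeSelector(["sunglasses_beach.jpg", "sunglasses_city.jpg"])
# ad = selector.choose()
# selector.record(ad, clicked=True)
```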

[Read the full article on MarTech Today.]

