An introduction to Schema.org markup for voice

Schema.org is a type of structured data markup – essentially, a kind of code – that you include in your webpages to make them easier for search engines to understand. It’s designed to be a universal “search engine language”, and was jointly developed by Google, Microsoft, Yahoo! and Yandex.

It has numerous benefits for SEO, not least of which is the fact that if search engines can better understand what your content is about, it’s more likely to be surfaced as a relevant result for a user’s search.

Schema.org markup also gives your website a much higher chance of appearing as a rich result, such as a featured snippet or quick answer, many of which are used as the sources for answers in voice search. And as more ways of indexing the internet’s vast array of content and making it available on voice devices are developed, structured data and Schema.org markup are very often at the heart of them.

Therefore, while using Schema.org to mark up your content doesn’t guarantee a presence on voice devices, it can significantly improve the odds of your website and your content being featured.

In this article, we’ll introduce you to the types of schema that are most relevant for voice, with some advice on how to implement them. For a more thorough introduction to using Schema.org markup, check out the Getting started with using Microdata tutorial.

Speakable schema

This type of schema is officially still marked as Pending (meaning that it’s awaiting wider review and feedback), but Google is already pressing it into action via a beta program with news publishers.

With wider adoption, the Speakable schema has the potential to be a game-changer for making content available via voice search and voice assistants. It allows developers, marketers and SEOs to directly mark up the parts of a piece of content that are most suitable for being spoken aloud via a voice assistant, using text-to-speech playback.
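As a sketch of how this works in practice, Speakable markup is typically added as JSON-LD, using CSS selectors or XPaths to point at the parts of the page that are suitable for text-to-speech playback. The selectors, URL and page name below are hypothetical, for illustration only:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Latest news on NASA",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": [".article-headline", ".article-summary"]
  },
  "url": "https://example.com/nasa-news"
}
</script>
```

Here the `cssSelector` array tells the assistant which page elements to read aloud, so the selectors must match the actual classes in your page template.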

In its Webmaster Central blog post announcing the Speakable beta program, Google explained:

“When people ask the Google Assistant — “Hey Google, what’s the latest news on NASA?”, the Google Assistant responds with an excerpt from a news article and the name of the news organization. Then the Google Assistant asks if the user would like to hear another news article and also sends the relevant links to the user’s mobile device.”

This is the closest thing we yet have to the scenario I imagined in my future of voice search article, in which voice search doesn’t stop short with a single answer to a question. It’s not yet a full audio browsing experience, but the fact that Google is offering users the option to listen to articles beyond the one that answers their initial query is significant.


A number of details about how the whole process works are still unclear, however: for example, if multiple publishers have relevant content that is marked up with Speakable Schema, how does Google pick which one to read aloud to the user? Does it come down to SEO, or are there other signals at work? How do you know if your content “ranks” for news-based queries?

Thus far, Speakable is only available via the Google Assistant. However, as author Michael Andrews noted on a Google+ Community post about the program,

“Google may be first out the gates implementing the speakable specification, but both Microsoft and Amazon are looking at using Schema.org to support voice interaction, so it would be little surprise to see them adopt this in the future.”


Creating content-based Actions: news, recipes and podcasts

What do news, recipes and podcasts have to do with voice? Well, I’ll tell you – as of January 2018, Google Assistant (and by extension, Google’s voice devices) natively supports podcasts, recipes and news in the Assistant Actions Directory.

The Actions Directory is a comprehensive catalogue of every Action (the Google equivalent of an Alexa Skill, essentially an app that can be run within Google Home) available on Google Home devices.

This means that if you have a news story, recipe or podcast, marking it up with the right schema makes it eligible for inclusion in the Directory as a “Content-based Action”, and available to anyone using a Google Home device.


News

Google’s Creating a News Action guide outlines the steps you should take to make your article, blog post or news story eligible for inclusion in the Google Assistant Actions Directory.

Note that adding the right structured data is only one of the requirements: Google also specifies that you should submit your site to Google News, and mark up your content for AMP (Accelerated Mobile Pages) in order for it to be eligible.
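For illustration, a minimal NewsArticle markup block might look like the following sketch; all of the names, dates and URLs are hypothetical placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example headline",
  "datePublished": "2018-09-01T08:00:00+00:00",
  "dateModified": "2018-09-01T09:20:00+00:00",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "publisher": {
    "@type": "Organization",
    "name": "Example News",
    "logo": { "@type": "ImageObject", "url": "https://example.com/logo.png" }
  },
  "image": ["https://example.com/article-image.jpg"],
  "mainEntityOfPage": "https://example.com/example-article"
}
</script>
```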

Once these conditions are satisfied, you can use any one of three schema types to mark up your news article.


News videos are also eligible to be included in the Actions Directory, and can be marked up using the VideoObject Schema.
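A sketch of VideoObject markup for a news video might look like this; every value shown is a hypothetical placeholder:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example news video",
  "description": "A short report on the example story.",
  "thumbnailUrl": "https://example.com/thumb.jpg",
  "uploadDate": "2018-09-01",
  "duration": "PT1M30S",
  "contentUrl": "https://example.com/video.mp4"
}
</script>
```

Note that `duration` uses ISO 8601 duration format (here, one minute thirty seconds).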


Recipes

You can find the main page on the Recipe schema, with all of the relevant properties, on Schema.org. Google also has a Creating a Recipe Action tutorial on Google Developers, which highlights the properties that it recommends using to mark up your content.

In particular, Google notes that you need to include the “recipeIngredient” and “recipeInstructions” properties to enable your recipe for guidance (i.e. step-by-step instructions) with the Google Assistant on Google Home devices.
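Putting those two properties together, a minimal Recipe markup block could be sketched as follows (the recipe content itself is invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Example pancakes",
  "recipeIngredient": ["200g flour", "2 eggs", "300ml milk"],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Whisk the flour, eggs and milk into a smooth batter." },
    { "@type": "HowToStep", "text": "Fry spoonfuls of batter in a hot pan until golden on both sides." }
  ]
}
</script>
```

Structuring `recipeInstructions` as discrete steps, rather than one block of text, is what allows an assistant to walk the user through the recipe one step at a time.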


Podcasts

Rather than using Schema.org markup, creating a Podcast Action requires you to expose a valid RSS feed to Googlebot that fits the requirements outlined in Google’s Creating a Podcast Action documentation page.

The documentation also outlines the RSS tags you should use at a podcast level and at an episode level, which will additionally make the podcast available for display as a rich result and in Google Podcasts.
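As a sketch of the podcast-level and episode-level split, a minimal RSS 2.0 feed might be structured like this; the show, episode and URLs are hypothetical, and Google’s documentation is the authority on exactly which tags are required:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <!-- Podcast-level tags -->
    <title>Example Podcast</title>
    <link>https://example.com/podcast</link>
    <description>A hypothetical show used for illustration.</description>
    <itunes:image href="https://example.com/cover.jpg"/>
    <!-- Episode-level tags -->
    <item>
      <title>Episode 1: Getting started</title>
      <description>An example episode.</description>
      <pubDate>Sat, 01 Sep 2018 08:00:00 GMT</pubDate>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="12345678"/>
      <guid>https://example.com/ep1</guid>
    </item>
  </channel>
</rss>
```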

How-To, Q&A and FAQ schema

In August, Search Engine Land revealed that Google has been testing multiple new types of featured snippet for the SERP: FAQs (frequently asked questions), Q&A (questions and answers) and How-Tos.

Previews of the new featured snippets were first shown at Google Dance Singapore, before Google confirmed that these snippets were officially being tested, and would be available in beta form to select publishers. Google also confirmed that these snippets are powered by structured data: HowTo, QAPage and FAQPage schema, respectively.

What is the significance of this for voice search? At the moment, there hasn’t been a direct link drawn between the new snippets and voice answers, but given that some 80% of Google Home answers are currently drawn from featured snippets and Quick Answer boxes, I’m willing to bet that when How-To, Q&A and FAQ snippets go live, they will also feed into voice search.

These three types of schema are all already operational and available to use, so you can get ahead by implementing them on your webpages now.
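As an example of one of the three, FAQPage markup pairs each Question entity with an acceptedAnswer; the question and answer text below are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Schema.org markup?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A vocabulary of structured data that helps search engines understand page content."
    }
  }]
}
</script>
```

Additional questions are simply appended to the `mainEntity` array, one Question object per FAQ entry.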

Now read our State of Voice Search in 2018 series to learn more about the opportunities presented by voice and how to optimise for it.
