Aaron Bradley will be speaking at the “Schema, Open Graph, and Semantic Markup” session at SearchFest 2013, taking place on February 22, 2013 at the Governor Hotel in Portland, Oregon.

1) Please give us your background and let us know what you do for a living.

Academically, I have a BA in English literature, with a specialization in post-structuralist literary theory. After obtaining that extremely useful (cough) degree, I worked for a decade or so as a technical librarian, then for another decade as a web designer.

(Those three biographical details – that is, sizeable stints working on semiotics, document classification and web development – basically summarize why I’m now talking to you about search and the semantic web.)

In 2005 I turned my attention to organic search and have never looked back. Since then I’ve worked as an in-house SEO for enterprise online gaming, ecommerce and content sites. As with many SEOs, my areas of interest and responsibility have gradually morphed into other, related realms, with analytics, conversion optimization, email marketing, social media and advertising increasingly occupying my time.

At present, I head up Internet marketing efforts at InfoMine, Inc., which serves the international mining industry (“mining” as in “rocks,” not “data”). Their portfolio includes dedicated information, news and education sites in multiple languages, so the work is varied, interesting and challenging.

While it touches on my professional activities only indirectly, I am also engrossed by the drama of digital disruption in the news and publishing businesses. My interest in Italian cuisine is equally obsessive but less perverse.

2) What would you tell somebody who won’t implement structured data because “Google will figure it out”?

Strictly on logical grounds, I find it perplexing that any SEO should take this position. Optimizing for search entails – among other things – making on-page changes directed at improving a site’s visibility in search results. The addition of semantic markup is just another optimization activity.

To use a relevant analogy, Google can “figure out” the subject matter of a page if the page title tag is suboptimal, or even blank. Yet, of course, few search marketers would push back against making changes to a title tag on this account. Structured data is simply another mechanism that an optimization specialist can, and should, use to help sites perform better in search.

Can Google “figure out” information without having it provided specifically in machine-readable format (because, at a fundamental level, semantic markup is about adding a data layer for machine consumption that is separate from the presentation layer provided for humans)? Sure. Kind of. Sort of. Maybe.

But by providing structured data to Google, one decreases the ambiguity that may be present in unstructured data: you’re making Google guess less. Google might figure out that in a recipe the phrase “the whole thing takes about an hour to cook” means “preparation time is equal to one hour” but it might not. Modifying the code to tell Google explicitly that “preparation time is equal to one hour” vastly improves the chances that Google indexes the recipe’s preparation time and assigns the correct value to that time. This, in turn, of course improves the chances the recipe will appear in queries that include or are filtered for preparation time.
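To make that concrete, here is a sketch of what such markup might look like using schema.org’s Recipe type in microdata (the recipe name and surrounding text are hypothetical; the key point is that the `prepTime` property carries an ISO 8601 duration, where `PT1H` means one hour):

```html
<!-- Hypothetical recipe fragment: the visible text stays conversational,
     while the datetime attribute gives machines an unambiguous value -->
<div itemscope itemtype="http://schema.org/Recipe">
  <h1 itemprop="name">Risotto alla Milanese</h1>
  <p>The whole thing takes about
    <time itemprop="prepTime" datetime="PT1H">an hour</time> to cook.</p>
</div>
```

Note that nothing about the human-readable sentence has to change; the data layer rides along in attributes.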

“Helping Google figure things out” is particularly important when it comes to named entities: people, places, organizations and the like. If a web document makes a reference to “London” is this London, England or London, Ontario? You can trust Google to “figure it out” or – without being required to change the text – you can eliminate that ambiguity for the search engine, and be confident that the page has a better chance of showing up in relevant “London” queries, without marring results (and your engagement metrics) by turning up in the queries for the other London.
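One way to remove that ambiguity, sketched here in microdata, is to attach a `sameAs` property pointing at an unambiguous identifier for the entity – a Wikipedia URL is a common choice – again without touching the visible text:

```html
<!-- Disambiguating "London" for machines: the invisible link element
     tells the search engine exactly which London is meant -->
<span itemscope itemtype="http://schema.org/Place">
  <span itemprop="name">London</span>
  <link itemprop="sameAs" href="http://en.wikipedia.org/wiki/London,_Ontario" />
</span>
```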

3) How do you see semantic markup evolving in the next 1-2 years?

Strictly on the markup side, I don’t think we’ll see any radical changes to the vocabularies and markup protocols in broad use today.

schema.org will almost certainly be extended further, but I don’t expect any radical changes to that or the Open Graph protocol in the next couple of years (if anything major does happen with schema.org I think it will revolve around better mechanisms to meaningfully link other vocabularies, rather than big growth in the core vocabulary).

On the syntax side, it will be interesting to see if the balance tilts decisively in favor of RDFa or microdata. RDFa is certainly the more robust markup protocol and clearly better loved by semantic web developers, but less technical webmasters continue to favor microdata (and it is anything but absent in the enterprise). So long as Google continues to promote it, I think we’ll probably see microdata become more ubiquitous.
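For comparison, here is the same trivial assertion – a person’s name, using the schema.org vocabulary – expressed in each of the two syntaxes (a minimal sketch; the name is illustrative):

```html
<!-- Microdata -->
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Aaron Bradley</span>
</div>

<!-- RDFa (RDFa Lite 1.1) -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Aaron Bradley</span>
</div>
```

The information conveyed is identical; the choice between `itemscope`/`itemprop` and `vocab`/`typeof`/`property` is largely one of tooling and taste.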

Depending on the success of the Data Highlighter program for event markup, we might see Google extend the Highlighter to other data types. And it’s possible that Bing might follow suit with a code-free data structuring mechanism of its own. In general, I think the search engines will continue to work on tools and mechanisms that make it easier for website owners to provide structured data information to them.

I’m hopeful we’ll see more – and more useful – semantic markup tools being developed in support of the search engines’ structured data initiatives. Tool development has certainly lagged behind vocabulary and syntax evolution: for example, there’s still no WordPress plugin that allows marketers to mark up schema.org types inline.

I think any really big changes that we’ll see in the next one to two years won’t come in the form of changes to semantic markup itself, but in the uses that the search engines make of it.

Expect further enhancements to Google’s Knowledge Graph and Bing’s Snapshots. The most exciting prospect here (of which we’ve already seen some signs) is site-level data starting to inform the Knowledge Graph, and even links being generated from Knowledge Graph results to the sites that contribute to it. That is, Wikipedia and Freebase-derived information may increasingly be augmented with other qualified structured data sources.

And the $64,000 question (give or take a few zeros) still hovering in the air is “whither Facebook?” Graph Search is all well and good, but at this point its utility still seems limited to the walled garden that is Facebook, and Bing results have been only nominally enhanced in the process. But there’s a veritable gold mine of structured data available via the Graph API if Facebook and its partners can figure out how to profitably leverage it.
