Aaron Bradley will be speaking on the “Structured data for SEO: Which markup matters?” panel at SearchFest 2014 which will take place on February 28th, 2014 in Portland, Oregon. For more information and to purchase tickets, please click here.

1) Please give us your background and tell us what you do for a living.
After getting a degree in English literature I spent a big chunk of time as a librarian, followed by a big chunk of time as a web designer. Each of these things has greatly informed my approach to organic search marketing, the field in which I’ve more-or-less specialized for the past ten years, and in which I currently make my living as a consultant.

While organic search is my focus, I can’t imagine not also being proficient in analytics, social media development and conversion optimization, so I spend a lot of time with those search-related disciplines too.

For some time in the search marketing sphere my area of specialization has been the intersection of semantic web and search technologies – “semantic SEO”. Outside the search marketing realm I’ve a keen interest in many aspects of news and journalism in the digital age – the massive disruption of traditional news production and consumption, the evolution of business models for digital news, the future of different types of news media, and so on.

2) How does proper technical markup lead to more website traffic?
At the end of the day, a site with proper technical markup will drive more search traffic (and more traffic from other data consumers, like social networks) than a site with poor markup, because the search engine has a greater degree of confidence in the information it has about that site – and potentially knows more about a site with good markup than about one lacking it.

That greater confidence in, and better understanding of, a site’s content leads, of course, to that site appearing with greater frequency in a greater number of query results (or, more to the point, appearing with greater frequency in the most relevant query results).

The better the markup behind a particular resource, the less a data consumer is forced to guess. Is this block of text an address? Google has to guess less if you say, using schema.org, “this is an address.” Is this the canonical URL for this page? Google has to guess less when you use rel=”canonical” – and so on and so on.
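
To make that concrete, here’s a rough sketch of what such markup might look like – a postal address expressed with schema.org microdata, and a canonical link element (the address, URL and values here are purely illustrative):

<!-- Explicitly declare this block to be an address (schema.org microdata) -->
<div itemscope itemtype="http://schema.org/PostalAddress">
  <span itemprop="streetAddress">123 Example Street</span>,
  <span itemprop="addressLocality">Portland</span>,
  <span itemprop="addressRegion">OR</span> <span itemprop="postalCode">97201</span>
</div>

<!-- Explicitly declare the preferred URL for this page -->
<link rel="canonical" href="http://www.example.com/contact/">

With markup like this in place, Google no longer has to infer from context that the string is an address, or which of several URL variants is the preferred one.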

For most social networks, proper markup is the price of admission for certain types of visibility, and they won’t even try to guess if that markup is absent or corrupted – you’re not going to see the propagation of Twitter Cards or Pinterest Rich Pins without proper markup.
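
By way of illustration, a basic Twitter summary card will only be rendered if meta tags along these lines are present in the page’s head – a minimal sketch with placeholder values (Pinterest Rich Pins likewise depend on the appropriate Open Graph or schema.org markup being present):

<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@example">
<meta name="twitter:title" content="Page title">
<meta name="twitter:description" content="A one- or two-sentence description of the page.">
<meta name="twitter:image" content="http://www.example.com/image.jpg">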

3) What is the Knowledge Graph and why should we care?
The Knowledge Graph is a collection of disambiguated entities returned in response to disambiguated queries.

Or, if you like, Google’s bunch of facts.

Facts with a number of important characteristics – facts that Google knows where to look up, facts that Google knows how to look up, and facts Google knows how to identify in search queries – even when they’re being expressed in different ways.

What I’ve described may in fact be a combination of the Knowledge Graph and Hummingbird, because they’re closely allied technologies. One can probably think of the collection, clean-up, maintenance and interlinking of facts (those disambiguated entities that are the stuff of the fact repository) as the Knowledge Graph, while the understanding and processing of those facts – the disambiguation of queries – is the work of Hummingbird.

However they’re sliced and diced, search marketers should care about the Knowledge Graph and Hummingbird because they represent the way Google now goes about its business: looking for and storing facts so it can recognize those facts in queries and deliver them up efficiently to the searcher (with the thing being delivered increasingly being the fact itself, rather than just a URL reference to it).

More importantly, this is the direction in which all of the major data wranglers are moving – Bing with its Satori-fueled Snapshots, for example, or Facebook with its Open Graph-fueled Graph Search.
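
To give a sense of what feeds these graphs, the Open Graph markup that fuels Facebook’s Graph Search is declared by publishers in their own pages – something along the lines of this hypothetical snippet (property values invented for illustration):

<meta property="og:type" content="article">
<meta property="og:title" content="Structured data for SEO: which markup matters?">
<meta property="og:url" content="http://www.example.com/structured-data-seo/">
<meta property="og:image" content="http://www.example.com/structured-data.jpg">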

Google, Bing and Facebook are doing this because this approach makes it possible to deliver more relevant information more directly to users – information provided both in response to a query, and, increasingly, information passively or proactively provided as a result of observed behavior (that information being everything from ecommerce recommendations to Google Now alerts).

An understanding of these functions is invaluable for the contemporary search marketer, because it allows them to better address the search engines’ new hunger for the sort of things – those facts, those entities – that are the fodder for Knowledge Graph-like repositories and Hummingbird-like operations, even as their appetite for words alone ebbs.
