Search Engine 2.0

Pat Kitano had a blog post back in March about the search engine marketing value of real estate blogging, and in particular about the Long Tail concept in SEO. He also explained that real estate professionals who get into blogging early have a good chance of locking up the search engines, that is, the first-page results for a particular search term. It got me thinking about where I predict search engines are heading.

I agree with Pat that, in the short term, there is leverage in getting into the game early. You can lock up Google for the time being, but I doubt we can assume that once you get into the "club" you stay in for a very long time, even if you keep producing quality, relevant content.

First: Search engines are constantly changing algorithms, not static and constant systems. The main purpose of a search engine is to deliver relevant AND quality content. Allowing the search engine to be locked up by a few is against this philosophy (for both short and long tail terms). Once real estate content (blogging, listing data, market data) becomes mainstream public information, and such trends become obvious, search engines will have to adjust to keep delivering their ultimate value: unbiased, quality, relevant content.

The blog overload of search engines is not a new phenomenon. When blogging went mainstream in the early 2000s, there was a period when search engines got "confused" and algorithms had to be tuned so that page ranks were reduced on blogs, which were favored too heavily by the old algorithms without delivering enough value to the consumer. It seems that now, as real estate blogging goes mainstream, a similar thing is happening with long tail search terms related to real estate. Adjustment will certainly happen. See Marcus Blurk's post about some absurd long tail search engine results.

Second: While no outsider knows the exact algorithms behind Google's search engine, the general concept of page ranking (based on how much related and relevant content links in, out and across a site) is well known and widely elaborated on. But this algorithm is certainly changing in today's social networking world. Page ranking, while still very effective, was invented during web 1.0, when producing content was more like newspaper publishing: a limited set of authoritative editors created content and decided who to refer to via hyperlinks. The relevancy of a page to a particular search term could be judged very effectively through semantic analysis of the pages linking to each other. And since producing content required expertise and often a serious investment of time and money, the algorithm could safely assume the authority of the author and put most of the emphasis on links. The very terminology of PAGE ranking refers to the web 1.0 world, where internet content is assumed to be a web page (or blog article) that can be parsed easily for content, and where relationships are expressed via hyperlinks.
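As an aside, and only to make the link-based idea concrete, here is a minimal sketch in Python of the publicly described PageRank iteration. The blog names in the example are made up, and a real search engine adds countless refinements, spam defenses and extra signals that are not shown here.

```python
# Minimal sketch of the publicly described PageRank idea: pages "vote" for
# each other via hyperlinks. Google's production algorithm is far more
# complex and not public; this is for illustration only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # simplified: pages with no outgoing links leak their share
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] = new_rank.get(target, 0.0) + share
        rank = new_rank
    return rank

# Three hypothetical real estate blogs linking to each other.
links = {
    "agent-blog-a": ["agent-blog-b", "market-report"],
    "agent-blog-b": ["agent-blog-a"],
    "market-report": ["agent-blog-a"],
}
print(pagerank(links))
```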

Today, in the web 2.0 world, content is produced bi-directionally, and it is not only textual content but also rankings, recommendations, tags, social network entities, bookmarks and so on. Relationships are more than just hyperlinks: they are social relationships, geographic relationships and more. I predict that the value of rankings by the public via non-traditional means will increase. The recent acquisition of StumbleUpon by eBay, and Google's jump into the game the next day, underline this trend. Relevancy is shifting from the authoritativeness of hyperlinks created by a few content producers to the authoritativeness of the public, once enough public opinion is aggregated via social bookmarks, social networks, recommendation engines and user behavior tracking systems. Again, in the web 1.0 world, the only authoritative sources we could assume were the content authors themselves; the public was a passive participant. In the web 2.0 world, the importance of the public as the authoritative source of relevancy AND quality increases significantly. That, in my opinion, is search engine 2.0.

Relevancy relates to quality, but it does not imply it. The search engines' ultimate goal is to deliver quality results. Until now, the best available method was to check content for relevancy, and page ranking was the ultimate solution for determining relevancy (and hoping for quality). But now that more data is available through the information generated by web 2.0, search engines will ultimately develop judgments based on more accurate methods.

Take an example: when a home buyer searches for a real estate term in a particular market, the consumer hopes to get not only related content, but content produced by a "good quality" author, that is, a good salesperson with the qualities of one: intelligence, empathy and aggressiveness. The basis of the future relationship is a client-agent relationship. The consumer wants to find an agent who can best represent his or her interests. Using a search engine to find related content, locate and trust the author, and project that trust in authority into trust in professionalism is the means, not the goal. The consumer's ultimate goal is to have the best real estate business process provided by the agent. How can a search engine tell who is a good salesperson just by analyzing the relevancy of the content produced by the real estate agent? Certainly not with page ranking alone. The fact that a lot of other real estate professionals link to this particular one may imply intelligence, but not empathy and aggressiveness, which are also required for a successful sales process. On the other hand, if a search engine can relate content relevancy with consumer-generated information (how many consumers bookmarked a particular blog with a particular set of related tags) and social networking relationships (how many "friends", "admirers" or "connections" a particular agent has on different networks), we get closer to determining the overall quality of the content provided.

I suspect that Yahoo's purchase of MyBlogLog relates to this new concept. While on the surface MyBlogLog delivers a social networking service for blog and website owners and visitors, in the background it is capable of generating additional classification information by tracking what sites a particular user visits. Yahoo now has another authoritative source for determining quality: the behavior of the general public. Returning to a site over and over again implies relevancy, much like classic page ranking by links, but it extends the qualification from the authors to the browsing public. We will see many other social behaviors tracked for relevancy purposes in the near future.
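Nobody outside the search engines knows how such signals would be weighted, so the following is purely a hypothetical sketch, in Python, of blending a classic link-based relevancy score with consumer-generated signals such as bookmarks, matching tags, social connections and repeat visits. Every weight and field name here is invented for illustration.

```python
# Hypothetical sketch only: blending classic link-based relevancy with
# consumer-generated signals. The weights and signal names are invented;
# no search engine publishes such a formula.
import math

def quality_score(link_relevancy, bookmarks, matching_tags, connections, repeat_visits):
    """
    link_relevancy : classic page-rank style relevancy score (0..1)
    bookmarks      : how many consumers bookmarked the page
    matching_tags  : how many of those bookmarks carry tags matching the query
    connections    : the agent's "friends"/"connections" across social networks
    repeat_visits  : visitors who return to the site again and again
    """
    # Logarithms keep a handful of very popular pages from drowning out the rest.
    social = (0.4 * math.log1p(bookmarks)
              + 0.3 * math.log1p(matching_tags)
              + 0.2 * math.log1p(connections)
              + 0.1 * math.log1p(repeat_visits))
    # Blend: relevancy still matters most, but public signals adjust the ranking.
    return 0.6 * link_relevancy + 0.4 * (social / (1.0 + social))

# Two hypothetical agent blogs with identical link relevancy:
print(quality_score(0.8, bookmarks=120, matching_tags=45, connections=300, repeat_visits=900))
print(quality_score(0.8, bookmarks=3, matching_tags=0, connections=10, repeat_visits=20))
```

With identical link relevancy, the blog with the stronger public signals comes out ahead, which is exactly the shift described above.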

Third: Modern web applications use more and more client-side technologies (AJAX, Flash and so on) instead of pure HTML content, and client-rendered content is traditionally not indexable by search engines. A lot of relevant information generated today is simply left out of the search engines because of this. We at RealBird have had to drop some of our user interface concepts just to make sure our clients get maximum search engine exposure, even when it affects our overall user experience and innovation. This is changing though. For example, GetClicky indicated that their log tracking system will support tracking AJAX interactions, which as far as I know no other system provides yet, and I believe search engines are working on acquiring content from sources they have not queried before.

Take for example all the geographic content provided on Ajax maps (Google, MS Virtual Earth, Yahoo, MapQuest). While it may be extremely relevant to a particular search term, it is simply ignored by search engines today, because it is not rendered as normal HTML markup but created in the browser's memory on the fly. Another example: if you put a blogroll on your blog or website using server-side technologies (i.e. the links to the third-party blogs are rendered as regular HTML), Google most likely scores them as outgoing links. On the other hand, if you use Google's own Ajax-based feed reader API, the very same links are rendered on the fly, and while users may see and use those links just like in the server-side scenario, they are completely invisible to search engine robots. The content, the relevancy and the quality are the same, but current search engines punish you for using modern technologies to deliver a better user experience. This will obviously change. It has to.
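To illustrate why, here is a small Python sketch of what a crawler that does not execute JavaScript actually sees. The URLs and file names are made up: a server-rendered blogroll exposes its links directly in the markup, while an AJAX-rendered one is just an empty container plus a script reference, so the very same links never reach the robot.

```python
# Sketch of why client-rendered links are invisible to a simple crawler:
# a robot that only parses the HTML the server sends, without executing
# JavaScript, never sees links that are injected on the fly.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

# Server-rendered blogroll: the links are right there in the markup.
server_rendered = """
<ul id="blogroll">
  <li><a href="http://example-agent-blog-1.com">Agent Blog 1</a></li>
  <li><a href="http://example-agent-blog-2.com">Agent Blog 2</a></li>
</ul>
"""

# AJAX-rendered blogroll: the markup the crawler fetches is just an empty
# container and a script; the same links only exist after a browser runs it.
ajax_rendered = """
<div id="blogroll"></div>
<script src="feed-widget.js"></script>
"""

for label, html in [("server-rendered", server_rendered), ("ajax-rendered", ajax_rendered)]:
    parser = LinkExtractor()
    parser.feed(html)
    print(label, "->", parser.links)
# server-rendered -> ['http://example-agent-blog-1.com', 'http://example-agent-blog-2.com']
# ajax-rendered -> []
```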

Having said all that, I agree with Pat that this is an opportunity for the real estate community to get into the game and turn this short-term opportunity into business by blogging it away. But I also sense some major changes coming to town: just as Google changed the search engine game by shifting the emphasis from purely analyzing the content itself to the connections between quality content, the new search engines will put more and more emphasis on relevancy established by the consuming public versus relevancy established only by the authors of the content.

So blog it away, but also start using social networks such as InmanWiki, ActiveRain, LinkedIn, MyBlogLog and so on. Your online relevancy will certainly be more than just the content you provide: it is also your behavior and participation online.

— Zoltan Szendro

RealBird

 

