Mar 01 2005

Searching and Blogging

Published by at 12:48 am under Technology

This is an old article and the information contained within it may be out of date, not reflect my current views and/or contain broken links. If you feel this article is still valid and requires updating, you can use the contact form to let me know. However, I make no guarantee that it will get updated.

I’ve been thinking about current search engine technology and the fast-paced world of blogging and wikis that has become a major part of the internet.

I’m not sure if it’s still the case, but a few years ago, when I was first really getting into web development, I found out that Google only passed by the backwaters of the internet (of which my site forms part) once a month. At that time my site was mostly static. I would add new bits and change bits, but generally the content stayed the same, so what Google said was on my site was almost certainly on my site.


Enter the blog-o-sphere. The blog allowed anyone to post anything to their website with no more difficulty than sending an email. And suddenly, BOOM… the amount of information on the internet exploded.

So far I have been posting (on average) about three posts per week. If, as I think is still the case, Google (and other search engines) are only crawling my site once a month, then by the time my site is catalogued and someone gets a hit by searching Google, the information is either obsolete or has been superseded. How likely are people to click through to other sections of my site to try and find the more up-to-date information?

The classic examples of this happening on my site are my Mojavi Builder application and my F*ck Dabs.com area. I see people linking to the old pages rather than looking through my site to find the more up-to-date information.

Is this a problem with the way search engines work, or is it a more fundamental problem with the internet itself? I suspect it might be a mixture of both. The effort required to search through, categorise and track linking sites is a massive undertaking and will undoubtedly take vast amounts of time and processing power. Also, in the new blogging and wiki worlds of rapidly fluctuating, dynamic information, the old static hyperlink just doesn’t cut it any more. It relies on the person who created the link to make sure they are linking to the most relevant, up-to-date information. When the targeted information changes, the links pointing to it don’t change, thereby potentially losing their semantic meaning or even losing their target completely.
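To make the link-rot half of this concrete, here is a rough sketch of how a site owner might spot links whose targets have broken or drifted: periodically re-fetch each target and compare it against a fingerprint taken when the link was made. This is only an illustration in Python; the URL and the stored fingerprint below are made-up examples, not anything from my actual site.

    # Minimal sketch of a link checker: re-fetch each target and flag
    # links that have gone missing or whose content has changed since
    # the link was created. URL and stored fingerprint are illustrative only.
    import hashlib
    import urllib.request

    # What we knew about each link when it was created: the target URL
    # and a fingerprint of the content it pointed at back then.
    known_links = {
        "http://example.com/mojavi-builder": "d41d8cd98f00b204e9800998ecf8427e",
    }

    for url, old_fingerprint in known_links.items():
        try:
            with urllib.request.urlopen(url) as response:
                body = response.read()
        except Exception as error:
            print(f"BROKEN  {url} ({error})")
            continue

        fingerprint = hashlib.md5(body).hexdigest()
        if fingerprint != old_fingerprint:
            # The page still exists but its content has moved on, so the
            # link may no longer mean what it did when it was made.
            print(f"CHANGED {url}")
        else:
            print(f"OK      {url}")

Even something this crude shows the asymmetry: the link only gets fixed if the person who made it bothers to run a check like this, while the target page can change whenever it likes.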

What the internet requires is some form of linking engine that can be tailored to the user’s requirements, track fluctuations in the information on the internet, and track the semantic meaning of that information, so that the user is only directed to information which is relevant to them.
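I don’t know what such a linking engine would actually look like, but a toy version of the idea might resolve a link at read time: instead of pointing at a fixed URL, the link carries the words it was about, and the engine sends the reader to whichever current page best matches those words. The sketch below (the page URLs, the page text and the crude word-overlap scoring are all invented purely for illustration) shows the shape of it.

    # Toy "linking engine": a link carries the words it was about, and we
    # resolve it to whichever current page overlaps most with those words.
    # Pages and text are invented examples, not real content.

    def tokenise(text):
        return set(text.lower().split())

    def resolve(link_words, pages):
        """Return the URL of the page sharing the most words with the link."""
        link_tokens = tokenise(link_words)
        best_url, best_score = None, 0
        for url, text in pages.items():
            score = len(link_tokens & tokenise(text))
            if score > best_score:
                best_url, best_score = url, score
        return best_url

    # Current snapshot of the site's pages (illustrative content only).
    pages = {
        "/archive/mojavi-builder-0.1": "early notes on the mojavi builder prototype",
        "/projects/mojavi-builder": "current mojavi builder download and documentation",
    }

    print(resolve("mojavi builder current documentation", pages))
    # -> /projects/mojavi-builder, the page most relevant right now

A real engine would obviously need a far better notion of semantic similarity, and some way of keeping up with the whole web’s churn, but even this toy shows the difference from a static href that keeps pointing at whatever happened to be there when it was written.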

This probably sounds like sci-fi technology from the 31st century, but I think it will be required for the internet to make its next leap forward. I believe Google is having a stab at auto-linking with ISBNs and VINs on websites, but this is only the tip of the iceberg… there’s a long way to go before we see the next true information evolution.

3 responses so far