If you think of a website as a collection of page types, each made up of sections such as content, navigation, and footer, it becomes apparent that a page's value lies in the content that is unique to that page. Sections like footers and site-wide navigation are repeated on every page and add no specific value to any one of them.
So, it would be helpful to be able to instruct search engine robots to not index specific areas of the web page. Here’s a wireframe of what I’m thinking.
How? Well, I haven’t found a real solution. Here’s an idea though.
We could extend XHTML with a schema that allows attributes to be added to elements like DIVs, ULs, OLs, Ps, and so on.
The attributes could be along these lines:
<div robot-follow="yes" robot-index="no">Stuff you don't want indexed here</div>
Of course, then the makers of the bots would need to program to heed these attributes.
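If bot makers did adopt something like this, honoring the attributes would be straightforward on their end: the crawler's HTML parser just skips any subtree marked as non-indexable. Here's a minimal sketch in Python using the standard library's `html.parser` (the attribute name `robot-index` is the hypothetical one from the example above, not anything real crawlers understand):

```python
from html.parser import HTMLParser

class SelectiveIndexer(HTMLParser):
    """Collects page text for indexing, skipping any element
    (and all of its descendants) marked robot-index="no"."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0      # > 0 while inside a non-indexed subtree
        self.indexed_text = []   # text the bot would actually index

    def handle_starttag(self, tag, attrs):
        if self.skip_depth:
            # Already inside a skipped subtree; track nesting depth
            self.skip_depth += 1
        elif dict(attrs).get("robot-index") == "no":
            # Entering a subtree the page author asked us not to index
            self.skip_depth = 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.indexed_text.append(data.strip())

# A toy page: boilerplate footer marked non-indexable, unique content not
page = '''<body>
  <div robot-follow="yes" robot-index="no">Footer links here</div>
  <p>Unique article content here</p>
</body>'''

parser = SelectiveIndexer()
parser.feed(page)
print(parser.indexed_text)  # only the unique content survives
```

The footer text never reaches the index, while the unique paragraph does, which is exactly the effect the wireframe is after.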
So, yeah, all in all, a fairly impractical idea, since none of it is implemented. If it were, though, I'd use it on many websites.