Historical inaccuracy

Here’s a fun little lawsuit to think about, even if it is ultimately doomed (unless there’s a really dumb judge out there who doesn’t understand some Web indexing fundamentals). First, one company sues another for trademark infringement. The second company then hires a law firm to dig up historical data on use of the trademarked phrase, so the law firm visits the Wayback Machine, a standard move in such cases, to pull up old web pages and see how far back the use extends.
Here’s the catch: The first company complains that it had a robots.txt file in place to prevent the Wayback Machine from archiving certain pages, but that 92 times the Wayback Machine ignored those instructions and let the law firm retrieve pages, already on the Web, that it supposedly had no right to see.
So the first company is now suing the law firm, the second company (again), and the non-profit Internet Archive for violating the DMCA, on the theory that they circumvented “technological measures” to gain access to copyrighted material.
The law firm says it wouldn’t even know how to bypass such a block, and in any case the robots.txt convention is purely voluntary: robots don’t have to read the file, let alone obey it. In effect, once you post a page — it’s out there for anyone to see.
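That voluntariness is easy to see in practice. Here’s a minimal sketch using Python’s standard-library robots.txt parser; the domain, paths, and crawler name are made up for illustration. The point is that a crawler must *choose* to ask the parser before fetching — nothing technically stops one that doesn’t.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt blocking one directory for all crawlers.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A *polite* crawler consults the parser before each fetch.
# An impolite one simply skips this step; robots.txt can't stop it.
print(parser.can_fetch("SomeCrawler", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("SomeCrawler", "https://example.com/public/page.html"))   # True
```

In other words, robots.txt is a request, not a lock — which is why calling it a “technological measure” in the DMCA sense is such a stretch.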
P.S. Man, I sure used to love Photoshop filters.

This entry was posted in The Wonderful WWW.