Crawling the web is deceptively simple: the basic algorithm is (a) fetch a page, (b) parse it to extract all linked URLs, and (c) for all the URLs not seen before, repeat (a)–(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 7% per week) move this plan from a trivial programming exercise to a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test in step (c) must be done well over ten thousand times per second against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.

A crucial way to speed up the test is to cache, that is, to store in main memory a (dynamic) subset of the "seen" URLs. The main goal of this paper is to carefully investigate several URL caching techniques for web crawling. We consider both practical algorithms (random replacement, static cache, LRU, and CLOCK) and theoretical limits (clairvoyant caching and an infinite cache). We performed about 1,800 simulations using these algorithms with various cache sizes, using actual log data extracted from a massive 33-day web crawl that issued over one billion HTTP requests. Our main conclusion is that caching is very effective: in our setup, a cache of roughly 50,000 entries can achieve a hit rate of almost 80%. Interestingly, this cache size falls at a critical point: a substantially smaller cache is much less effective, while a substantially larger cache brings little additional benefit. We conjecture that such critical points are inherent to our problem and venture an explanation for this phenomenon.
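To make steps (a)–(c) concrete, the following is a minimal sketch of the basic crawl loop, assuming hypothetical helpers `fetch_page` and `extract_links` (these names are illustrative and not part of this paper); it keeps the entire "seen" set in memory, which is exactly what becomes infeasible at web scale.

```python
from collections import deque

def crawl(seed_urls, fetch_page, extract_links):
    """Basic crawl loop: fetch, parse, enqueue unseen URLs."""
    seen = set(seed_urls)        # membership test of step (c): URLs already encountered
    frontier = deque(seed_urls)  # URLs waiting to be fetched
    while frontier:
        url = frontier.popleft()
        page = fetch_page(url)                 # step (a): fetch the page
        for link in extract_links(page):       # step (b): parse out linked URLs
            if link not in seen:               # step (c): only enqueue unseen URLs
                seen.add(link)
                frontier.append(link)
```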
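As a rough illustration of the caching idea (a sketch under our own assumptions, not the paper's implementation), the fragment below places an LRU cache in front of the full "seen" set. The backing store is modeled as an in-memory set for simplicity; in a real crawler it would be a disk-resident or distributed structure, and the class name, cache size, and method names are all hypothetical.

```python
from collections import OrderedDict

class SeenURLTest:
    """LRU cache of recently tested URLs in front of the full 'seen' store."""

    def __init__(self, cache_size=50_000):
        self.cache = OrderedDict()   # URL -> None, ordered by recency of use
        self.cache_size = cache_size
        self.backing_store = set()   # stand-in for the full (disk/distributed) seen set

    def seen_before(self, url):
        if url in self.cache:                # cache hit: cheap, in-memory answer
            self.cache.move_to_end(url)
            return True
        hit = url in self.backing_store      # cache miss: expensive lookup
        self.backing_store.add(url)
        self.cache[url] = None               # admit the URL into the cache
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # evict the least recently used URL
        return hit
```

Random replacement, a static cache, or CLOCK would change only the admission and eviction policy; the point of the cache is that the expensive lookup against the full set is skipped whenever the URL is found in memory.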