>>CUTTS: Jacob in Denmark asked a fun question.
He asks, “How many bots/spiders does Google currently have crawling the Web?” Well, it’s
important to realize that it’s not actually robots or spiders out there,
you know, where you open the door and you’re like, “Oh, it’s Googlebot.” Instead it’s banks of machines
at the Googleplex, or not really at the Googleplex, at Google’s data centers, that open up an HTTP
connection, request a page, and then get it back. So any bank of machines, even 50
machines, could easily be requesting a bunch of different content. So we try to refresh
a large fraction of the Web every few days, so it turns out you really don’t need a ton
of machines. Even a relatively small number of machines operating in parallel and fetching
pages in parallel can really crawl, and find new pages on the
Web, very quickly. So, I don’t think we give out the actual numbers, but, you know,
probably more than 25, less than a thousand is the sort of range you can think
about; it doesn’t take that many machines. A lot more of the challenge is, how do you
index those pages really well? How do you know which pages are reputable? And then it
takes a lot more machines to search those pages and return matches very, very quickly.
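The crawl pattern described above, machines opening HTTP connections and fetching many pages in parallel, can be sketched in a few lines. This is a toy illustration, not Google’s actual setup: the local test server, URLs, and worker count are all hypothetical stand-ins.

```python
# Toy sketch of parallel crawling: a small bank of workers, each opening an
# HTTP connection, requesting a page, and reading the response back.
# The local server and URLs are hypothetical, purely for illustration.
import threading
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PageHandler(BaseHTTPRequestHandler):
    """Stand-in web server so the sketch runs without touching the real Web."""
    def do_GET(self):
        body = ("page at %s" % self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo output quiet

# Start the stand-in server on a free local port.
server = HTTPServer(("127.0.0.1", 0), PageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

def fetch(url):
    # One HTTP request/response round trip, as a crawling machine would do.
    with urlopen(url, timeout=5) as resp:
        return url, resp.read()

urls = ["%s/page-%d" % (base, i) for i in range(8)]

# Even a small pool of parallel workers covers many URLs quickly.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fetch, urls))

server.shutdown()
print(len(results), "pages fetched")
```

The point of the sketch matches the transcript: the number of fetching workers can stay small because each one runs many request/response round trips, and the fetches overlap in time rather than running one after another.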