Abstract:
Web crawlers are software programs that automatically traverse the hyperlink structure of the World Wide Web to locate and retrieve information. Beyond search-engine crawlers, we observed many other crawlers that may gather business intelligence or confidential information, or even launch attacks based on the information gathered, all while camouflaging their identity. It is therefore important for website owners to know who has crawled their sites and what those crawlers have done. In this study we analyzed crawler patterns in web server logs, developed a methodology to identify crawlers, and classified them into three categories. To evaluate our methodology we used seven test crawler scenarios. We found that approximately 53.25% of web crawler sessions came from “known” crawlers, while 34.16% exhibited suspicious behavior.