This Web Crawler efficiently navigates and retrieves data from the web. Starting from a user-specified seed URL, it collects the set of all URLs reachable from that page. Key features include asynchronous processing, a configurable crawl depth, and a URL caching system; a minimal sketch of how these options fit together follows the feature list below.
- Seed URL Processing: Begins crawling from a user-specified seed URL.
- Asynchronous Processing: Fetches multiple pages concurrently, reducing overall crawl time.
- Customization Options:
  - Depth Specification: Allows users to define how many link levels deep the crawl goes.
  - Caching System: Stores discovered URLs so they can be retrieved later without re-crawling.
  - Crawl Mode: Option to reuse cached data or initiate a fresh crawl.
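The sketch below illustrates how the features above could combine, assuming a Python implementation built on `asyncio` and `aiohttp`. All names (`crawl`, `fetch`, `CACHE_FILE`, `max_depth`, `use_cache`) and the regex-based link extraction are illustrative assumptions, not the project's actual API.

```python
# Minimal crawler sketch (assumed implementation, not the project's real code).
import asyncio
import json
import re
from pathlib import Path
from urllib.parse import urljoin

import aiohttp

CACHE_FILE = Path("url_cache.json")  # assumed location of the URL cache
HREF_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)


async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    """Download one page, returning an empty string on failure."""
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            if resp.status == 200 and "text/html" in resp.headers.get("Content-Type", ""):
                return await resp.text()
    except (aiohttp.ClientError, asyncio.TimeoutError):
        pass
    return ""


async def crawl(seed: str, max_depth: int = 2, use_cache: bool = True) -> set[str]:
    """Breadth-first crawl from `seed`, returning every URL discovered."""
    # Crawl mode: reuse cached results instead of starting a new crawl.
    if use_cache and CACHE_FILE.exists():
        return set(json.loads(CACHE_FILE.read_text()))

    seen: set[str] = {seed}
    frontier = [seed]
    async with aiohttp.ClientSession() as session:
        for _ in range(max_depth):  # depth specification
            # Asynchronous processing: fetch the whole frontier concurrently.
            pages = await asyncio.gather(*(fetch(session, u) for u in frontier))
            next_frontier = []
            for base, html in zip(frontier, pages):
                for href in HREF_RE.findall(html):
                    url = urljoin(base, href)
                    if url.startswith("http") and url not in seen:
                        seen.add(url)
                        next_frontier.append(url)
            frontier = next_frontier

    # Caching system: persist the discovered URLs for later runs.
    CACHE_FILE.write_text(json.dumps(sorted(seen)))
    return seen


if __name__ == "__main__":
    urls = asyncio.run(crawl("https://example.com", max_depth=2, use_cache=False))
    print(f"Discovered {len(urls)} URLs")
```

This is a level-by-level breadth-first traversal: each pass fetches the current frontier concurrently, so the depth limit bounds both the number of passes and how far from the seed the crawler wanders.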
# Setup and Installation