# Brief

Write a simple web crawler in Go. The crawler should be limited to a single domain: given a starting URL, it should crawl pages within that domain but not follow external links, such as links to Facebook or Twitter accounts.

Given a URL, your program should output a site map showing each page's URL, title, static assets, internal links, and external links.

The number of pages crawled should be configurable. We suggest crawling Wikipedia and limiting the crawl to 100 pages.

# Approach

  1. Scrape the starting page.
  2. As soon as an internal link is found, start scraping it concurrently.
  3. Continue extracting internal links, external links, and static assets (tags: `img`, `audio`, `script`, `video`, `embed`, `source`).
  4. Once all data for a page has been scraped, print that page's result.

The implementation is aggressively concurrent and should be relatively fast, but it needs further testing to improve stability.

To optimise it further:

  - Add a worker pool to bound the number of concurrent requests.
  - Reuse HTTP connections (keep-alive) instead of opening a new one per request.
  - Benchmark the data-extraction step.
  - Consider goquery for parsing; regular expressions are not recommended for HTML.

# How to run

```sh
$ make runwiki
```