The XSCRAPERS package provides an OOP interface to some common webscraping techniques.
A basic use case is loading pages into BeautifulSoup elements. The package loads the URLs concurrently using multiple threads, which saves a considerable amount of time.
```python
import xscrapers.webscraper as ws

URLS = [
    "https://www.google.com/",
    "https://www.amazon.com/",
    "https://www.youtube.com/",
]
PARSER = "html.parser"
web_scraper = ws.Webscraper(PARSER, verbose=True)
web_scraper.load(URLS)
web_scraper.parse()
```
Note that the scraped data is stored in the `data` attribute of the webscraper, and the parsed URLs in the `url` attribute.
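Under the hood, concurrent loading boils down to fetching pages in a thread pool instead of one after another. The following is a stdlib-only sketch of that idea, not the package's actual code; the `fetch` function is a placeholder for a real HTTP GET:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for an HTTP request; a real fetch would use
    # urllib.request or a similar HTTP client here.
    return f"<html><title>{url}</title></html>"

urls = ["https://www.google.com/", "https://www.amazon.com/"]

# map() dispatches the fetches to worker threads but still returns
# the results in the same order as `urls`.
with ThreadPoolExecutor(max_workers=8) as pool:
    pages = list(pool.map(fetch, urls))
```

Because network requests are I/O-bound, threads overlap the waiting time, which is where the speed-up over a sequential loop comes from.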
The `Webdriver` class requires the geckodriver. See this link for a good explanation; in short, the steps are:
- Download the geckodriver from the Mozilla GitHub release page, replacing the `X` placeholders with the version you want to download:

  ```shell
  wget https://github.com/mozilla/geckodriver/releases/download/vX.XX.X/geckodriver-vX.XX.X-linux64.tar.gz
  ```
- Extract the archive:

  ```shell
  tar -xvzf geckodriver*
  ```
- Make it executable:

  ```shell
  chmod +x geckodriver
  ```
- Finally, the driver can be added to the `PATH` environment variable, moved to the `/usr/local/bin` folder, or passed as a full path to the `Webdriver` class via the `exe_path` argument:

  ```shell
  export PATH=$PATH:/path-to-extracted-file/
  sudo mv geckodriver /usr/local/bin/
  ```
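Whichever option you choose, the driver binary must be resolvable at runtime. The sketch below illustrates the lookup logic in stdlib Python: prefer an explicit path (like the `exe_path` argument mentioned above), otherwise search `PATH`. The function name `resolve_driver` is hypothetical, not part of the package:

```python
import os
import shutil

def resolve_driver(exe_path=None):
    # Prefer an explicit path, mirroring the `exe_path` argument
    # described above; fall back to searching the PATH variable.
    if exe_path and os.path.isfile(exe_path):
        return exe_path
    return shutil.which("geckodriver")

path = resolve_driver()  # None unless geckodriver is on the PATH
```

This is why the `sudo mv geckodriver /usr/local/bin/` step works: `/usr/local/bin` is normally already on `PATH`, so no explicit path is needed afterwards.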