A repository for learning to crawl dynamic websites: printing pages to PDF, monitoring websites for available appointments, and checking product stock.
This is a collection of code I wrote as small personal projects and wanted to share. Each file handles a specific task I wanted to learn how to automate:
- Reserving an appointment on a website. The sort of appointments that are filled within seconds!
- Downloading PDFs from a dynamic website (so. many. clicks.)
- Checking product stock on a website
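As a rough illustration of the stock-checking idea, here is a minimal Selenium sketch. The URL, CSS selector, and function names are placeholders for this example, not taken from this repository's code:

```python
def is_in_stock(status_text: str) -> bool:
    """Interpret a product page's availability label (pure helper, no browser needed)."""
    return "out of stock" not in status_text.strip().lower()

def check_stock(url: str, selector: str, timeout: int = 10) -> bool:
    """Load a product page and read its availability label.

    Selenium imports are kept local so the pure helper above can be used
    without a browser installed. Assumes a chromedriver on PATH.
    """
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # Wait for the availability label to appear (dynamic pages render late)
        label = WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, selector))
        )
        return is_in_stock(label.text)
    finally:
        driver.quit()
```

Usage would be something like `check_stock("https://example.com/product", ".availability")`, polled on a timer.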
P.S.: Good knowledge of Selenium, HTML, and websites in general is needed.
Please note that this repository is provided for educational purposes only. The illegal use of web crawling and web monitoring is strongly discouraged. If you need to crawl a site that isn't your own, you are highly encouraged to reach out to the website owners and ask for their express permission.
To run this project, follow the steps below:
- Create a Python virtual environment
- Clone the repository into the directory containing the virtual environment folder
- Install the requirements from the requirements.txt file
- Edit the code to your own needs
- Import the relevant class from the three available classes. For example:

      from check_appointment import CheckAppointment

- Use the available commands
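The setup steps above might look like this in a shell. The repository URL and directory name are placeholders — substitute your own clone URL:

```shell
python3 -m venv venv              # create a Python virtual environment
source venv/bin/activate          # activate it (Windows: venv\Scripts\activate)
git clone <repository-url>        # placeholder: clone the repository next to the venv folder
cd <repository-directory>
pip install -r requirements.txt   # install the dependencies
python -c "from check_appointment import CheckAppointment"  # sanity-check the import
```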
This is an initial commit to get the project rolling. It will most probably not work out of the box and should be tailored to your specific needs. Planned improvements:
- Easier User Interaction (maybe through GUI)
- Customizable commands
The following table summarizes the most used commands:

Command | Code | Description |
---|---|---|
To be filled | To be filled | To be filled |