This is an application for scraping tweets from Twitter.
This application is made possible by vladkens's twscrape. Please also check it out: https://github.com/vladkens/twscrape
Python 3.11 has to be installed. The Python dependencies are:
- twscrape
- loguru
- reportlab
- asyncio (part of the Python 3.11 standard library)
To install the dependencies use: python3 -m pip install -r requirements.txt
or install them one by one manually:
python3 -m pip install twscrape
python3 -m pip install loguru
python3 -m pip install reportlab
Then you can run the application with: python3 main.py
The application acts as a normal user on Twitter, so you have to provide accounts for it to use.
![atesdijital.com Accounts page](https://private-user-images.githubusercontent.com/85938355/320584171-6f0ef485-d03e-4c1a-9f4f-e03975bf9cc8.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzg5Mzg5NDAsIm5iZiI6MTczODkzODY0MCwicGF0aCI6Ii84NTkzODM1NS8zMjA1ODQxNzEtNmYwZWY0ODUtZDAzZS00YzFhLTlmNGYtZTAzOTc1YmY5Y2M4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjA3VDE0MzA0MFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTkwMzMzN2ZjOGUwN2UxOGUwMzgxYmEzNmU1NDBkNTY3YzlmMzM4YTAyZDdkNTM2ODg1ZTQ3OTIwMmVjMmI0ZWImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.JGw7-9_nsHwAjv-XllaaMm3NJnJmFy0FMbfX4CoSG1g)
- username: username of the twitter account
- password: password of the twitter account
- email: email of the twitter account
- email_password: password of the email account
Warning
Not all email providers are supported (e.g. @yandex.com). Only yahoo, icloud, hotmail, and outlook are supported for now. For more information see vladkens/twscrape#67
After adding accounts, use the "login all accounts" button to log in. If accounts are not active, head to the Tweet page and check the application output to see what is wrong.
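For reference, adding an account and logging in correspond roughly to the following twscrape calls. This is only a minimal sketch with placeholder credentials, not necessarily how main.py wires it up internally:

```python
import asyncio
from twscrape import API

async def main():
    api = API()  # twscrape stores accounts in accounts.db by default

    # Register an account, mirroring the username/password/email/email_password
    # fields of the Accounts page (placeholder values).
    await api.pool.add_account(
        "my_twitter_user", "my_twitter_pass",
        "my_mail@hotmail.com", "my_mail_pass",
    )

    # Equivalent of the "login all accounts" button.
    await api.pool.login_all()

asyncio.run(main())
```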
![atesdijital.com Tweet Scraper's Tweet Page](https://private-user-images.githubusercontent.com/85938355/320589778-7b2f719f-848b-46fb-9132-48205f0ae0d8.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzg5Mzg5NDAsIm5iZiI6MTczODkzODY0MCwicGF0aCI6Ii84NTkzODM1NS8zMjA1ODk3NzgtN2IyZjcxOWYtODQ4Yi00NmZiLTkxMzItNDgyMDVmMGFlMGQ4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjA3VDE0MzA0MFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTVlMmI4MzllZDY5OTYzM2NkNTIyMzhhNGViOWFlYThkN2MxNTY4MjBlYjM4OTY1ZGI5Mzc5ZjE4ODQ5ZjgyMWUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.KCQFq6Rj8dF6yhgCHbe3BEybmAzXhSnGZZqELLWzdWs)
The Tweet page has the output field of the program. Any errors or info messages can be read here.
Input a username, then provide the dates to scrape tweets for. Note that dates must be in "Year-month-day" format, like "2024-04-03".
Note
Scraped tweets are stored in a database named after the specified username. E.g. if you scrape the tweets of "elonmusk", they will be stored in a SQLite3 database called "elonmusk.db".
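As a rough illustration, scraping a user's tweets within a date range and saving them to "<username>.db" can be expressed with twscrape's search API as sketched below. The search query uses standard Twitter search syntax; the table and column names are assumptions for illustration and may differ from what main.py actually creates:

```python
import asyncio
import sqlite3
from twscrape import API, gather

async def scrape(username: str, since: str, until: str):
    api = API()

    # Twitter search syntax: tweets from one user within a date range.
    query = f"from:{username} since:{since} until:{until}"
    tweets = await gather(api.search(query))

    # Store results in "<username>.db" (hypothetical single-table schema).
    db = sqlite3.connect(f"{username}.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS tweets (id INTEGER PRIMARY KEY, date TEXT, content TEXT)"
    )
    db.executemany(
        "INSERT OR IGNORE INTO tweets (id, date, content) VALUES (?, ?, ?)",
        [(t.id, t.date.isoformat(), t.rawContent) for t in tweets],
    )
    db.commit()
    db.close()

asyncio.run(scrape("elonmusk", "2024-04-01", "2024-04-03"))
```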
![atesdijital.com Tweet Scraper's PDF Page](https://private-user-images.githubusercontent.com/85938355/320590288-5c021827-8416-4a49-b9b0-d0a8a9350bd4.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzg5Mzg5NDAsIm5iZiI6MTczODkzODY0MCwicGF0aCI6Ii84NTkzODM1NS8zMjA1OTAyODgtNWMwMjE4MjctODQxNi00YTQ5LWI5YjAtZDBhOGE5MzUwYmQ0LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjA3VDE0MzA0MFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTM1MTBkZjQ2NmYyYjliMjVkNmEwYTllNmY2YzU5YzM5YTViYzUyNjc4NzM5OGE1MjAxOWIzMDdhYTFiY2U3MzImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.Urs59Xk8333EWuY5qgJ2YwNDb6U4Ii6hK8j1R0nzffs)
There is an example database that contains ~500 tweets. Use "TansuYegen" as the username to test the PDF page. Input the username, then click import to fetch all of that user's tweets stored in the database. Select the tweets you want, then click the "turn to pdf" button. The PDF will be stored in the same directory as the "main.py" file.
Note
You don't need an active account or an internet connection for this. The PDF page only works with tweets that are already stored in the database.
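As a rough sketch of the PDF step, reading tweets back from the per-user SQLite database and rendering them with reportlab could look like the following. The table and column names are the same hypothetical ones as in the scraping sketch above, not necessarily what main.py uses:

```python
import sqlite3
from xml.sax.saxutils import escape

from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

def tweets_to_pdf(username: str, out_path: str):
    # Read tweets back from "<username>.db" (hypothetical schema, see above).
    db = sqlite3.connect(f"{username}.db")
    rows = db.execute("SELECT date, content FROM tweets ORDER BY date").fetchall()
    db.close()

    # Build a simple one-paragraph-per-tweet PDF document.
    doc = SimpleDocTemplate(out_path, pagesize=A4)
    styles = getSampleStyleSheet()
    story = []
    for date, content in rows:
        story.append(Paragraph(f"<b>{escape(date)}</b>: {escape(content)}", styles["Normal"]))
        story.append(Spacer(1, 12))
    doc.build(story)

tweets_to_pdf("TansuYegen", "TansuYegen.pdf")
```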