A script using the Twitter API to do the following:
- Scrape a single account
- Scrape a list of accounts
- Get the follower count for a list of accounts
- Check a list of handles to see if any are suspended, private, or incorrect.
- Scrape tweets based on a keyword search.
All tweets and follower counts are saved to a formatted Excel (.xlsx) file. URLs are converted to plain strings if they exceed Excel's hyperlink limit (65,530 per worksheet).
Please note that this only scrapes up to the 500 most recent tweets (the final count may be lower depending on the account or keyword you are scraping). This limit can be changed in the script itself.
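To make that concrete, here is a minimal sketch of how one account might be scraped and exported to .xlsx. It assumes Tweepy and openpyxl, a hypothetical scrape_account helper, placeholder credentials, and an assumed column layout; the actual script's libraries and function names may differ.

```python
# Minimal sketch only -- assumes tweepy and openpyxl; not the script's exact implementation.
import tweepy
from openpyxl import Workbook

MAX_TWEETS = 500  # the default cap described above; adjustable in the real script

# Placeholder credentials (see the .env section below for where the real values come from).
auth = tweepy.OAuth1UserHandler(
    "YourConsumerKeyHere", "YourSecretKeyHere",
    "YourAccessKeyHere", "YourAccessSecretHere",
)
api = tweepy.API(auth, wait_on_rate_limit=True)

def scrape_account(handle, path="tweets.xlsx"):
    """Fetch up to MAX_TWEETS recent tweets for one handle and write them to an .xlsx file."""
    wb = Workbook()
    ws = wb.active
    ws.append(["Handle", "Date", "Tweet", "URL"])  # assumed column layout
    for tweet in tweepy.Cursor(api.user_timeline,
                               screen_name=handle,
                               tweet_mode="extended").items(MAX_TWEETS):
        url = f"https://twitter.com/{handle}/status/{tweet.id}"
        ws.append([handle, str(tweet.created_at), tweet.full_text, url])
    wb.save(path)

# Example usage: scrape_account("Twitter")
```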
Use the package manager pip to install the required libraries.
pip install -r requirements.txt
or, using pipenv:
pipenv install
If you do not have a Twitter account, you must first sign up for one.
- Apply for API access
- Create your application.
- Get your authentication details.
- In the folder containing this README file (the main folder for this project), open the .env file.
- Enter your consumer key, consumer secret, access key, and access secret as shown below, then save the file.
consumer_key = "YourConsumerKeyHere"
consumer_secret = "YourSecretKeyHere"
access_key = "YourAccessKeyHere"
access_secret = "YourAccessSecretHere"
More detailed instructions on how to obtain an API key can be found here.
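As a sketch of how the script might read these values at runtime, the snippet below loads them with python-dotenv and authenticates with Tweepy. python-dotenv and the verify_credentials check are assumptions, not the script's confirmed behaviour.

```python
# Sketch only -- assumes python-dotenv and tweepy; the script may load the .env file differently.
import os

import tweepy
from dotenv import load_dotenv

load_dotenv()  # reads key/value pairs from the .env file into environment variables

auth = tweepy.OAuth1UserHandler(
    os.getenv("consumer_key"),
    os.getenv("consumer_secret"),
    os.getenv("access_key"),
    os.getenv("access_secret"),
)
api = tweepy.API(auth)
print(api.verify_credentials().screen_name)  # quick check that the credentials work
```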
When scraping multiple handles, this script pauses periodically to avoid exceeding the Twitter API rate limit.
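A hedged sketch of one way such pausing can be implemented is shown below; the batch size and 15-minute pause are assumptions, not the script's exact values (Tweepy can also wait automatically when constructed with wait_on_rate_limit=True).

```python
# Sketch (assumption): pause after each batch of handles to stay under the rate limit.
import time

BATCH_SIZE = 15          # hypothetical number of handles scraped per batch
PAUSE_SECONDS = 15 * 60  # Twitter rate-limit windows are 15 minutes long

def scrape_all(handles, scrape_one):
    """Call scrape_one(handle) for each handle, sleeping between batches."""
    for i, handle in enumerate(handles, start=1):
        scrape_one(handle)
        if i % BATCH_SIZE == 0:
            time.sleep(PAUSE_SECONDS)
```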
To change the list of handles to be scraped, open the handles.txt file under the Account Lists directory. Do NOT include the '@' sign, and list one handle per line (a short sketch of how such a file might be read follows the example below).
For example:
Twitter
Jack
BBCNews
Apple
Windows
msexcel
Android
Reddit
Discord
github
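As an illustration, the snippet below shows one way the script might read that file; the file path comes from this README, while the load_handles name and the stripping of blank lines and stray '@' signs are assumptions.

```python
# Sketch only -- reads one handle per line from the handles file described above.
from pathlib import Path

def load_handles(path="Account Lists/handles.txt"):
    """Return one handle per non-empty line, with whitespace and any stray '@' removed."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [line.strip().lstrip("@") for line in lines if line.strip()]

print(load_handles())
```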