This web scraper is built with the Ruby programming language. It is a capstone project in the Microverse curriculum, developed to showcase the author's ability to build a project with Ruby around any given business logic.
Project output screenshot.
This project is a web scraper that collects information from the Medium website. It scrapes articles written by Microverse students as well as any other articles related to Microverse in general. The scraper takes the information retrieved from the website and uses the Nokogiri gem to store everything as an object. Each article's details are then extracted from that object and displayed in a more readable and organised way. The articles are listed in descending order by number of claps (the popularity count you see when you run the code).
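To make that flow concrete, here is a minimal sketch of the scrape-and-sort idea described above. The tag URL, CSS selectors, and class/method names are assumptions for illustration only, not the project's actual code:

```ruby
require 'httparty'
require 'nokogiri'

# Illustrative sketch only: the URL and CSS selectors below are assumptions
# and will not match Medium's real markup.
class Scraper
  MEDIUM_URL = 'https://medium.com/tag/microverse'.freeze

  # Fetch the raw HTML and let Nokogiri parse it into a queryable document object.
  def parsed_page
    Nokogiri::HTML(HTTParty.get(MEDIUM_URL).body)
  end

  # Pull out each article's title, author, and clap count as a hash.
  def articles
    parsed_page.css('article').map do |node|
      {
        title:  node.css('h2').text.strip,
        author: node.css('.author').text.strip,
        claps:  node.css('.claps').text.to_i
      }
    end
  end

  # Most clapped (most popular) articles first, i.e. descending order.
  def articles_by_popularity
    articles.sort_by { |article| -article[:claps] }
  end
end
```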
- Ruby
- RSpec (testing)
- HTTParty
- Nokogiri
- Rubocop
- Colorize
- Pry (for debugging)
To start contributing to this project, run:
git clone https://github.com/ClaytonSiby/Web_Scraper.git
Then create a feature branch and open a pull request against the development (`develop`) branch.
After cloning the project, `cd` into the `web_scraper` directory where the code is stored, then navigate into the `bin` folder with `cd bin` on the terminal. Next, run `bundle install` to install the necessary dependencies, and finally run `ruby main.rb` to see the scraper in action.
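For orientation, a `bin/main.rb` entry point could look roughly like the sketch below. It assumes the hypothetical `Scraper` class from the sketch earlier in this README and uses the Colorize gem from the tech stack; the require path is a guess at the project layout:

```ruby
#!/usr/bin/env ruby
require 'colorize'
require_relative '../lib/scraper' # path is an assumption about the project layout

# Print each article on its own line, most-clapped first.
Scraper.new.articles_by_popularity.each_with_index do |article, index|
  puts "#{index + 1}. #{article[:title]}".green
  puts "   by #{article[:author]} (#{article[:claps]} claps)".yellow
end
```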
Open your terminal and run `rspec`. The suite runs the tests for two files (`scraper.rb` and `structure_data.rb`).
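As a rough sketch of what one of those specs might check (the file paths and expectations here are assumptions, not the project's actual tests):

```ruby
# spec/scraper_spec.rb (illustrative only)
require_relative '../lib/scraper'

describe Scraper do
  subject(:scraper) { Scraper.new }

  describe '#articles_by_popularity' do
    it 'returns articles sorted by claps in descending order' do
      claps = scraper.articles_by_popularity.map { |article| article[:claps] }
      expect(claps).to eq(claps.sort.reverse)
    end
  end
end
```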
- Visual Studio Code
👤 Clayton Siby
- GitHub: @ClaytonSiby
- Twitter: @ClaytonSiby
- LinkedIn: linkedin
- [email protected]
Contributions, issues and feature requests are welcome!
Feel free to check the issues page.
Give a ⭐️ if you like this project!
- Microverse.org
- StackOverflow
- tutorialspoint.com
This project is MIT licensed.