Perform the Extract, Transform and Load (ETL) process to create a data pipeline on Kickstarter datasets using Python, Pandas, Jupyter Notebook and PostgreSQL.
Updated Apr 19, 2023 - Jupyter Notebook
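The pipeline the description outlines — extract a Kickstarter dataset, transform it with Pandas, and load it into a SQL database — can be sketched as follows. This is a minimal, hypothetical sketch: the column names are invented, and `sqlite3` stands in for PostgreSQL so the snippet runs without a server (for PostgreSQL you would pass a SQLAlchemy engine to `to_sql` instead of a raw connection).

```python
# Minimal ETL sketch with pandas. Column names are hypothetical, and
# sqlite3 stands in for PostgreSQL so the example runs anywhere.
import sqlite3
import pandas as pd

# Extract: normally pd.read_csv("kickstarter.csv"); inline data stands in.
raw = pd.DataFrame({
    "name": ["Widget", "Gadget"],
    "goal": [1000, 5000],
    "pledged": [1500, 2500],
})

# Transform: drop incomplete rows and derive a success flag.
clean = raw.dropna().assign(successful=lambda df: df["pledged"] >= df["goal"])

# Load: write the cleaned table into the database.
conn = sqlite3.connect(":memory:")
clean.to_sql("campaigns", conn, if_exists="replace", index=False)

# Verify the load by reading the table back.
loaded = pd.read_sql("SELECT name, successful FROM campaigns", conn)
```

`to_sql` accepts either a DBAPI connection (SQLite only) or a SQLAlchemy engine; swapping in an engine built from a `postgresql://` URL is the only change needed to target PostgreSQL.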
Explore-Transform-Load: Worked collaboratively to explore Happiness Index data from CSV files and web scraping. Transformed and cleaned the data in a Jupyter notebook, created a schema to load it into a SQL database, and wrote multiple queries to illustrate the database's functionality.
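The "create schema and query" steps that description mentions might look like the sketch below. The table name, columns, and data are illustrative assumptions, not taken from the repository, and the standard-library `sqlite3` module stands in for the SQL database.

```python
# Hypothetical schema-and-query sketch; sqlite3 stands in for the SQL
# database, and the happiness table/columns are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema: one table keyed by country with a numeric happiness score.
conn.execute("""
    CREATE TABLE happiness (
        country TEXT PRIMARY KEY,
        score   REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO happiness (country, score) VALUES (?, ?)",
    [("Finland", 7.8), ("Denmark", 7.6), ("Iceland", 7.5)],
)
conn.commit()

# One example query: countries above a score threshold, highest first.
rows = conn.execute(
    "SELECT country FROM happiness WHERE score > 7.55 ORDER BY score DESC"
).fetchall()
```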
Crowd-Quest: ETL Journey for Crowdfunding Data is a repository showcasing the ETL (Extract, Transform, Load) process. It involves extracting data from Excel files, transforming it into CSV format, designing an ERD and database schema, and loading the data into PostgreSQL. Tools used: Jupyter Notebook, VSCode, PostgreSQL, Quick DBD, Excel.
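The Excel-to-CSV transform step in that workflow can be sketched with Pandas. In practice `pd.read_excel("crowdfunding.xlsx")` (which needs `openpyxl`) would supply the DataFrame; a small inline frame with invented column names stands in here so the snippet is self-contained.

```python
# Sketch of an Excel-to-CSV transform; the inline frame stands in for
# pd.read_excel(...), and the column names are illustrative assumptions.
import io
import pandas as pd

# Stand-in for: campaigns = pd.read_excel("crowdfunding.xlsx")
campaigns = pd.DataFrame({
    "id": [1, 2],
    "category/subcategory": ["film/drama", "games/tabletop"],
})

# Transform: split the combined column into two, a common cleanup
# before designing the schema and loading into PostgreSQL.
campaigns[["category", "subcategory"]] = (
    campaigns["category/subcategory"].str.split("/", expand=True)
)
campaigns = campaigns.drop(columns=["category/subcategory"])

# Write the CSV that the database import step would consume.
buffer = io.StringIO()
campaigns.to_csv(buffer, index=False)
csv_text = buffer.getvalue()
```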