
Extract, Transform, and Load with Wikipedia and Kaggle Movie Data

Project Overview

The script in the Jupyter Notebook ETL_create_database.ipynb combines and cleans data from Wikipedia and Kaggle for export into an SQL database. The script was prepared as part of a mock exercise in which movie data from different sources were used to build a dataset for a media streaming hackathon. The exercise is designed to implement the Extract, Transform, and Load (ETL) process.

Instructions for Use

Both Wikipedia and Kaggle datasets are available in this repository for use. The Kaggle dataset pulls from the MovieLens dataset of over 20 million reviews and contains a metadata file with details about the movies from The Movie Database (TMDb).

First, clone the repository to a local drive. Prior to opening and executing the Jupyter Notebook, create a new database in pgAdmin 4 called "movie_data". This is the database that the cleaned movie data will be written to.
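(Optional) If you would rather create the database from code than click through pgAdmin 4, a minimal sketch using psycopg2 is shown below; the superuser name, host, and port are assumptions based on a default local PostgreSQL install, not part of this repository.

import psycopg2

# Connect to the default "postgres" maintenance database first,
# since "movie_data" does not exist yet.
conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="YOUR SERVER PASSWORD",
    host="localhost",
    port=5432,
)
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE movie_data;")
conn.close()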

After creating the SQL database, create a new file called "config.py" containing the following line:

db_password = 'YOUR SERVER PASSWORD'

Replace 'YOUR SERVER PASSWORD' with the user password used to access the PostgreSQL server housing the newly created "movie_data" database.
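The notebook can then read this credential without hard-coding the password. A minimal sketch of the expected import is shown below; the exact cell in ETL_create_database.ipynb may differ.

# Pull the server password from config.py so it never appears in the notebook.
from config import db_password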

Open the Jupyter Notebook. Scroll to the bottom of cell 3 and confirm that the SQL connection string is correct. It should be formatted as follows:

# Store the connection string to the local server. Format as:
# connection_string = f"{username}:{password}@localhost:5432/{database_name}"
# For example, my connection string is:
db_string = f"postgresql://postgres:{db_password}@localhost:5432/movie_data"

After confirming the connection string is correct, execute the entire notebook. The last cell will print statements updating progress as the CSVs are exported to the SQL database. Once the export is complete, open pgAdmin 4 and confirm that the data has appeared.
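For reference, a chunked export with progress printing typically looks something like the sketch below; the file path, table name, and chunk size are illustrative assumptions, not the notebook's exact code.

import time
import pandas as pd
from sqlalchemy import create_engine
from config import db_password

db_string = f"postgresql://postgres:{db_password}@localhost:5432/movie_data"
engine = create_engine(db_string)

rows_imported = 0
start_time = time.time()

# Read the large ratings CSV in chunks so it fits in memory, appending
# each chunk to the "ratings" table and reporting progress as it goes.
for chunk in pd.read_csv("Data/ratings.csv", chunksize=1_000_000):
    print(f"importing rows {rows_imported} to {rows_imported + len(chunk)}...", end="")
    chunk.to_sql(name="ratings", con=engine, if_exists="append", index=False)
    rows_imported += len(chunk)
    print(f" Done. {time.time() - start_time:.1f} total seconds elapsed.")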

Resources
