{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Cleaning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook goes through a necessary step of any data science project - data cleaning. Data cleaning is a time consuming and unenjoyable task, yet it's a very important one. Keep in mind, \"garbage in, garbage out\". Feeding dirty data into a model will give us results that are meaningless.\n", "\n", "Specifically, we'll be walking through:\n", "\n", "1. **Getting the data - **in this case, we'll be scraping data from a website\n", "2. **Cleaning the data - **we will walk through popular text pre-processing techniques\n", "3. **Organizing the data - **we will organize the cleaned data into a way that is easy to input into other algorithms\n", "\n", "The output of this notebook will be clean, organized data in two standard text formats:\n", "\n", "1. **Corpus** - a collection of text\n", "2. **Document-Term Matrix** - word counts in matrix format" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem Statement" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a reminder, our goal is to look at transcripts of various comedians and note their similarities and differences. Specifically, I'd like to know if Ali Wong's comedy style is different than other comedians, since she's the comedian that got me interested in stand up comedy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Getting The Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Luckily, there are wonderful people online that keep track of stand up routine transcripts. [Scraps From The Loft](http://scrapsfromtheloft.com) makes them available for non-profit and educational purposes.\n", "\n", "To decide which comedians to look into, I went on IMDB and looked specifically at comedy specials that were released in the past 5 years. To narrow it down further, I looked only at those with greater than a 7.5/10 rating and more than 2000 votes. If a comedian had multiple specials that fit those requirements, I would pick the most highly rated one. I ended up with a dozen comedy specials." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Web scraping, pickle imports\n", "import requests\n", "from bs4 import BeautifulSoup\n", "import pickle\n", "\n", "# Scrapes transcript data from scrapsfromtheloft.com\n", "def url_to_transcript(url):\n", " '''Returns transcript data specifically from scrapsfromtheloft.com.'''\n", " page = requests.get(url).text\n", " soup = BeautifulSoup(page, \"lxml\")\n", " text = [p.text for p in soup.find(class_=\"post-content\").find_all('p')]\n", " print(url)\n", " return text\n", "\n", "# URLs of transcripts in scope\n", "urls = ['http://scrapsfromtheloft.com/2017/05/06/louis-ck-oh-my-god-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/04/11/dave-chappelle-age-spin-2017-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2018/03/15/ricky-gervais-humanity-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/08/07/bo-burnham-2013-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/05/24/bill-burr-im-sorry-feel-way-2014-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/04/21/jim-jefferies-bare-2014-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/08/02/john-mulaney-comeback-kid-2015-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/10/21/hasan-minhaj-homecoming-king-2017-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/09/19/ali-wong-baby-cobra-2016-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/08/03/anthony-jeselnik-thoughts-prayers-2015-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2018/03/03/mike-birbiglia-my-girlfriends-boyfriend-2013-full-transcript/',\n", " 'http://scrapsfromtheloft.com/2017/08/19/joe-rogan-triggered-2016-full-transcript/']\n", "\n", "# Comedian names\n", "comedians = ['louis', 'dave', 'ricky', 'bo', 'bill', 'jim', 'john', 'hasan', 'ali', 'anthony', 'mike', 'joe']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# # Actually request transcripts (takes a few minutes to run)\n", "# transcripts = [url_to_transcript(u) for u in urls]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# # Pickle files for later use\n", "\n", "# # Make a new directory to hold the text files\n", "# !mkdir transcripts\n", "\n", "# for i, c in enumerate(comedians):\n", "# with open(\"transcripts/\" + c + \".txt\", \"wb\") as file:\n", "# pickle.dump(transcripts[i], file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Load pickled files\n", "data = {}\n", "for i, c in enumerate(comedians):\n", " with open(\"transcripts/\" + c + \".txt\", \"rb\") as file:\n", " data[c] = pickle.load(file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Double check to make sure data has been loaded properly\n", "data.keys()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# More checks\n", "data['louis'][:2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cleaning The Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When dealing with numerical data, data cleaning often involves removing null values and duplicate data, dealing with outliers, etc. 
With text data, there are some common data cleaning techniques, which are also known as text pre-processing techniques.\n", "\n", "This cleaning process can go on forever, though - there's always an exception to every cleaning step. So, we're going to follow the MVP (minimum viable product) approach - start simple and iterate. Here are a bunch of things you can do to clean your data. We're going to execute just the common cleaning steps here, and the rest can be done at a later point to improve our results.\n", "\n", "**Common data cleaning steps on all text:**\n", "* Make text all lower case\n", "* Remove punctuation\n", "* Remove numerical values\n", "* Remove common nonsensical text (e.g. \n newline characters)\n", "* Tokenize text\n", "* Remove stop words\n", "\n", "**More data cleaning steps after tokenization:**\n", "* Stemming / lemmatization\n", "* Part-of-speech tagging\n", "* Create bi-grams or tri-grams\n", "* Deal with typos\n", "* And more..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's take a look at our data again\n", "next(iter(data.keys()))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Notice that our dictionary is currently in key: comedian, value: list of text format\n", "next(iter(data.values()))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# We are going to change this to key: comedian, value: string format\n", "def combine_text(list_of_text):\n", " '''Takes a list of text and combines them into one large chunk of text.'''\n", " combined_text = ' '.join(list_of_text)\n", " return combined_text" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Combine it!\n", "data_combined = {key: [combine_text(value)] for (key, value) in data.items()}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We can either keep it in dictionary format or put it into a pandas dataframe\n", "import pandas as pd\n", "pd.set_option('max_colwidth', 150)\n", "\n", "data_df = pd.DataFrame.from_dict(data_combined).transpose()\n", "data_df.columns = ['transcript']\n", "data_df = data_df.sort_index()\n", "data_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's take a look at the transcript for Ali Wong\n", "data_df.transcript.loc['ali']" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Apply a first round of text cleaning techniques\n", "import re\n", "import string\n", "\n", "def clean_text_round1(text):\n", " '''Make text lowercase, remove text in square brackets, remove punctuation and remove words containing numbers.'''\n", " text = text.lower()\n", " text = re.sub(r'\[.*?\]', '', text)\n", " text = re.sub('[%s]' % re.escape(string.punctuation), '', text)\n", " text = re.sub(r'\w*\d\w*', '', text)\n", " return text\n", "\n", "round1 = lambda x: clean_text_round1(x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's take a look at the updated text\n", "data_clean = pd.DataFrame(data_df.transcript.apply(round1))\n", "data_clean" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Apply a second round of cleaning\n", "def clean_text_round2(text):\n", " '''Get rid of some
additional punctuation and nonsensical text that was missed the first time around.'''\n", " text = re.sub('[‘’“”…]', '', text)\n", " text = re.sub('\n', '', text)\n", " return text\n", "\n", "round2 = lambda x: clean_text_round2(x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's take a look at the updated text\n", "data_clean = pd.DataFrame(data_clean.transcript.apply(round2))\n", "data_clean" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NOTE:** This data cleaning (a.k.a. text pre-processing) step could go on for a while, but we are going to stop for now. After going through some analysis techniques, if you see that the results don't make sense or could be improved, you can come back and make more edits, such as:\n", "* Mark 'cheering' and 'cheer' as the same word (stemming / lemmatization)\n", "* Combine 'thank you' into one term (bi-grams)\n", "* And a lot more..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Organizing The Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I mentioned earlier that the output of this notebook will be clean, organized data in two standard text formats:\n", "1. **Corpus** - a collection of text\n", "2. **Document-Term Matrix** - word counts in matrix format" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Corpus" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We already created a corpus in an earlier step. A corpus is simply a collection of texts, and ours is stored neatly in a pandas dataframe." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's take a look at our dataframe\n", "data_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's add the comedians' full names as well\n", "# Note: data_df is sorted alphabetically by its index, so the full names below are listed in that same order\n", "full_names = ['Ali Wong', 'Anthony Jeselnik', 'Bill Burr', 'Bo Burnham', 'Dave Chappelle', 'Hasan Minhaj',\n", " 'Jim Jefferies', 'Joe Rogan', 'John Mulaney', 'Louis C.K.', 'Mike Birbiglia', 'Ricky Gervais']\n", "\n", "data_df['full_name'] = full_names\n", "data_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Let's pickle it for later use\n", "data_df.to_pickle(\"corpus.pkl\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Document-Term Matrix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For many of the techniques we'll be using in future notebooks, the text must be tokenized, meaning broken down into smaller pieces. The most common tokenization technique is to break text down into words. We can do this using scikit-learn's CountVectorizer, where every row will represent a different document and every column will represent a different word.\n", "\n", "In addition, with CountVectorizer, we can remove stop words. Stop words are common words that add little meaning to the text, such as 'a', 'the', etc."
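 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before applying this to the real transcripts, here is a minimal, self-contained sketch of what CountVectorizer produces, using two made-up sentences (invented purely for illustration). It also previews the ngram_range parameter mentioned in the additional exercises at the end of this notebook, which counts two-word phrases (bi-grams) as well as single words." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A toy illustration of CountVectorizer - these two sentences are made up and are not part of our transcript data\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "import pandas as pd\n", "\n", "toy_corpus = ['Thank you, thank you very much!', 'That show was very, very funny.']\n", "\n", "# Lowercase, tokenize into words, drop English stop words, then count what's left\n", "toy_cv = CountVectorizer(stop_words='english')\n", "toy_counts = toy_cv.fit_transform(toy_corpus)\n", "\n", "# get_feature_names_out() requires scikit-learn 1.0+; on older versions use get_feature_names() instead\n", "print(pd.DataFrame(toy_counts.toarray(), columns=toy_cv.get_feature_names_out()))\n", "\n", "# ngram_range=(1, 2) counts two-word phrases (bi-grams) as well as single words\n", "# (note that stop words are removed before the n-grams are built)\n", "toy_cv_bigrams = CountVectorizer(stop_words='english', ngram_range=(1, 2))\n", "toy_counts_bigrams = toy_cv_bigrams.fit_transform(toy_corpus)\n", "pd.DataFrame(toy_counts_bigrams.toarray(), columns=toy_cv_bigrams.get_feature_names_out())"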
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We are going to create a document-term matrix using CountVectorizer, and exclude common English stop words\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "cv = CountVectorizer(stop_words='english')\n", "data_cv = cv.fit_transform(data_clean.transcript)\n", "data_dtm = pd.DataFrame(data_cv.toarray(), columns=cv.get_feature_names())\n", "data_dtm.index = data_clean.index\n", "data_dtm" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Let's pickle it for later use\n", "data_dtm.to_pickle(\"dtm.pkl\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Let's also pickle the cleaned data (before we put it in document-term matrix format) and the CountVectorizer object\n", "data_clean.to_pickle('data_clean.pkl')\n", "pickle.dump(cv, open(\"cv.pkl\", \"wb\"))" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Additional Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Can you add an additional regular expression to the clean_text_round2 function to further clean the text?\n", "2. Play around with CountVectorizer's parameters. What is ngram_range? What is min_df and max_df?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" }, "toc": { "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "toc_cell": false, "toc_position": {}, "toc_section_display": "block", "toc_window_display": false }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }