- Name: Natural Language Processing
- Code: 981
- Term: Fall 2019, Wednesday 3:00pm – 6:15pm
- Location: 15, Engineering Faculty, University of Guilan
- Credits: 3.0
- Lecturer: Javad Pourmostafa
- TA: Parsa Abbasi
- NB: Drop me a line to get the slides!
-
Main Approaches (Rule-Based, Probabilistic Models, Traditional ML Algorithms, and Neural Networks), Confusion Matrix, Semantic Slot Filling, NLP Pyramid, Text Classification Scenario, Gradient Descent, Tokenization, Normalization, Stemmer, Lemmatizer, BoW, N-Grams, TF-IDF, Binary Logistic Regression, Hashing Features for Word Representation, Neural Vectorization, 1-D Convolutional Layer
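As a quick illustration of the text-classification scenario above, here is a minimal sketch that chains TF-IDF vectorization with binary logistic regression in scikit-learn; the toy sentences and labels are invented for the example and are not course data.

```python
# Minimal sketch: TF-IDF features + binary logistic regression (toy data, hypothetical labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["I loved this movie", "great acting and plot",
        "terrible, boring film", "I hated every minute"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative (hypothetical)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))   # unigrams + bigrams
X = vectorizer.fit_transform(docs)                 # sparse document-term matrix

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["what a great film"])))  # expected: [1]
```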
-
Chain Rule, Probability of a sequence of words, Markov assumption, Bi-gram, Maximum Likelihood Estimation (MLE), Generative Model, Evaluating Language Models (Extrinsic, Intrinsic), Perplexity, Smoothing (discounting), Laplace Smoothing (Add-one, Add-k), Stupid/Katz Backoff, Kneser-Ney Smoothing
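A minimal sketch of these ideas: bigram probabilities estimated by MLE with add-one (Laplace) smoothing, plus the perplexity of a test sentence. The tiny corpus is invented for illustration.

```python
# Bigram language model with add-one (Laplace) smoothing and perplexity (toy corpus).
import math
from collections import Counter

corpus = [["<s>", "i", "am", "sam", "</s>"],
          ["<s>", "sam", "i", "am", "</s>"]]

unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter((sent[i], sent[i + 1]) for sent in corpus for i in range(len(sent) - 1))
V = len(unigrams)  # vocabulary size

def p_laplace(w_prev, w):
    # P(w | w_prev) = (count(w_prev, w) + 1) / (count(w_prev) + V)
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

def perplexity(sentence):
    log_p = sum(math.log2(p_laplace(sentence[i], sentence[i + 1]))
                for i in range(len(sentence) - 1))
    return 2 ** (-log_p / (len(sentence) - 1))

print(perplexity(["<s>", "i", "am", "sam", "</s>"]))
```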
-
Week 3: Hidden Markov Model for Sequence Labeling (POS)
Sequence Labeling, Markov Model Scenario, Markov Chain Model, Emission and Transition Probabilities, HMM for POS Tagging, Text Generation with an HMM, Training an HMM, Viterbi Algorithm, Using Dynamic Programming for Backtracing
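The Viterbi decoder fits in a few lines; below is a minimal sketch with invented transition and emission tables, not the notation or corpus used in class.

```python
# Minimal Viterbi decoder for an HMM POS tagger (toy probabilities, all invented).
states = ["NOUN", "VERB"]
start_p = {"NOUN": 0.6, "VERB": 0.4}                   # initial state probabilities
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},         # transition probabilities
           "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"fish": 0.6, "swim": 0.4},          # emission probabilities
          "VERB": {"fish": 0.3, "swim": 0.7}}

def viterbi(observations):
    # trellis[t][s] = (best probability of ending in state s at time t, backpointer)
    trellis = [{s: (start_p[s] * emit_p[s][observations[0]], None) for s in states}]
    for t in range(1, len(observations)):
        column = {}
        for s in states:
            prob, prev = max(
                (trellis[t - 1][p][0] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states)
            column[s] = (prob, prev)
        trellis.append(column)
    # Backtrace with the stored backpointers (dynamic programming).
    best = max(states, key=lambda s: trellis[-1][s][0])
    path = [best]
    for t in range(len(observations) - 1, 0, -1):
        path.append(trellis[t][path[-1]][1])
    return list(reversed(path))

print(viterbi(["fish", "swim", "fish"]))
```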
-
Curse of Dimensionality, Distributed Representation, Neuron, Activation Functions, The Perceptron, The XOR problem, Feed-Forward Neural Networks, Training Neural Networks, Loss Function, Cross-Entropy Loss, Dropout, A Neural Probabilistic Language Model, Recurrent Neural Language Models, Gated Recurrent Neural Networks, LSTM
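A single perceptron cannot represent XOR, but a feed-forward network with one hidden layer can. Below is a minimal sketch with hand-picked (not trained) weights, where the hidden units compute OR and AND.

```python
# XOR with a tiny feed-forward network using hand-picked (not trained) weights.
import numpy as np

def step(z):
    return (z > 0).astype(int)  # hard-threshold activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: h1 computes OR(x1, x2), h2 computes AND(x1, x2).
W_h = np.array([[1.0, 1.0],
                [1.0, 1.0]])       # shape: (2 inputs, 2 hidden units)
b_h = np.array([-0.5, -1.5])       # OR threshold, AND threshold

# Output unit: XOR = OR AND (NOT AND)
w_o = np.array([1.0, -1.0])
b_o = -0.5

h = step(X @ W_h + b_h)
y = step(h @ w_o + b_o)
print(y)  # -> [0 1 1 0]
```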
-
Word Similarities, Embeddings, Term-Document Matrix, Document-Term Matrix, Term-Context Matrix, Visualizing Document Vectors, Word Window, Reminders from Linear Algebra, Computing Cosine Similarity, Pointwise Mutual Information (PMI), Dense Embedding Sources, Word2vec, Skip-gram Algorithm, Skip-gram with Negative Sampling
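A minimal sketch of two of these ideas: cosine similarity between word vectors, and positive PMI (PPMI) computed from a toy term-context count matrix. All counts are invented.

```python
# Cosine similarity and positive PMI (PPMI) from a toy term-context count matrix.
import numpy as np

words = ["apple", "orange", "car"]
contexts = ["eat", "juice", "drive"]
counts = np.array([[8.0, 4.0, 0.0],     # invented co-occurrence counts
                   [6.0, 5.0, 0.0],
                   [0.0, 0.0, 9.0]])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(counts[0], counts[1]))  # apple vs orange: close to 1
print(cosine(counts[0], counts[2]))  # apple vs car: 0

# PPMI(w, c) = max(0, log2( P(w, c) / (P(w) P(c)) ))
total = counts.sum()
p_wc = counts / total
p_w = p_wc.sum(axis=1, keepdims=True)
p_c = p_wc.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore"):
    pmi = np.log2(p_wc / (p_w * p_c))
ppmi = np.maximum(pmi, 0)
print(np.round(ppmi, 2))
```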
-
Probabilistic Latent Semantic Analysis (PLSA), The Problem of Probability Density Estimation, MLE, Expectation-Maximization (EM) Algorithm, Using MLE in PLSA, Using EM in PLSA, E-Step in PLSA, M-Step in PLSA
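The E-step and M-step of PLSA fit in a short loop. Below is a minimal sketch over a toy document-term count matrix, with randomly initialized P(z|d) and P(w|z); all numbers and the number of topics are invented.

```python
# Minimal PLSA via EM on a toy document-term count matrix (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
n = np.array([[5.0, 2.0, 0.0, 0.0],      # n[d, w]: word counts per document
              [4.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 5.0]])
D, W = n.shape
K = 2                                    # number of latent topics

p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)  # P(z|d)
p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)  # P(w|z)

for _ in range(50):
    # E-step: responsibilities P(z | d, w) proportional to P(z|d) * P(w|z)
    joint = p_z_d[:, :, None] * p_w_z[None, :, :]          # shape (D, K, W)
    p_z_dw = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate P(w|z) and P(z|d) from expected counts
    expected = n[:, None, :] * p_z_dw                       # shape (D, K, W)
    p_w_z = expected.sum(axis=0)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = expected.sum(axis=2)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)

print(np.round(p_w_z, 2))   # topic-word distributions after EM
```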
-
Confusion Matrix, Box-and-Whisker Plot, Using supervised models such as Logistic Regression, Decision Trees, and so on.
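A minimal sketch of evaluating a classifier with a confusion matrix in scikit-learn; the dataset and train/test split here are placeholders, not the ones used in the project.

```python
# Confusion matrix for a logistic-regression classifier (placeholder dataset).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

print(confusion_matrix(y_test, y_pred))          # rows: true labels, columns: predicted
print(classification_report(y_test, y_pred))     # precision / recall / F1 per class
```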
-
Feedforward Neural Network, Using MSE as the loss function, Updating weights via backpropagation, Gradient Descent Algorithm.
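A minimal sketch of these steps: a one-hidden-layer network trained with MSE loss, manual backpropagation, and plain gradient descent on XOR data. The architecture, learning rate, and iteration count are arbitrary choices, not the project's settings.

```python
# One-hidden-layer network: MSE loss, manual backpropagation, plain gradient descent.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    loss = np.mean((y_hat - y) ** 2)              # MSE

    # Backward pass (chain rule)
    d_yhat = 2 * (y_hat - y) / len(X)             # dL/dy_hat
    d_z2 = d_yhat * y_hat * (1 - y_hat)           # through the output sigmoid
    d_W2, d_b2 = h.T @ d_z2, d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * h * (1 - h)                      # through the hidden sigmoid
    d_W1, d_b1 = X.T @ d_z1, d_z1.sum(axis=0)

    # Gradient-descent update
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(np.round(y_hat.ravel(), 2))   # should approach [0, 1, 1, 0]
```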
-
Bag of Words, Finding Unique Words, Creating a Document-Word Matrix, TF-IDF Computation, and finally comparing our naive TF-IDF model with scikit-learn's TfidfVectorizer
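A minimal sketch of that comparison: a hand-rolled TF-IDF using the smoothed IDF formula that TfidfVectorizer applies by default, checked against scikit-learn on invented toy documents.

```python
# Naive TF-IDF vs scikit-learn's TfidfVectorizer (toy documents, default settings).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the cat sat on the mat", "the dog barked"]

vec = TfidfVectorizer()                     # defaults: smooth_idf=True, norm='l2'
X_sklearn = vec.fit_transform(docs).toarray()
vocab = vec.get_feature_names_out()

# Naive version with the same formula: tf * (ln((1 + n) / (1 + df)) + 1), then L2 row norm.
tokenized = [d.split() for d in docs]
tf = np.array([[doc.count(w) for w in vocab] for doc in tokenized], dtype=float)
df = (tf > 0).sum(axis=0)
idf = np.log((1 + len(docs)) / (1 + df)) + 1
X_naive = tf * idf
X_naive /= np.linalg.norm(X_naive, axis=1, keepdims=True)

print(np.allclose(X_sklearn, X_naive))      # True if the two formulations agree
```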
-
Finding word similarities, Setting hyperparameters, Generating training data, Fitting an unsupervised model, Inference on test samples
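A minimal sketch of the same workflow using gensim's Word2Vec (skip-gram with negative sampling) rather than a from-scratch model; the corpus and hyperparameter values are placeholders, and the parameter names follow the gensim 4.x API.

```python
# Word-similarity workflow with gensim's Word2Vec (placeholder corpus, gensim >= 4 API).
from gensim.models import Word2Vec

# 1) Training data: a list of tokenized sentences (toy corpus).
sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "rug"],
             ["cats", "and", "dogs", "are", "pets"]]

# 2) Hyperparameters: skip-gram (sg=1), negative sampling, small vectors for the toy data.
model = Word2Vec(sentences, vector_size=50, window=2, sg=1,
                 negative=5, min_count=1, epochs=200, seed=0)

# 3) Inference: nearest neighbours by cosine similarity in the embedding space.
print(model.wv.most_similar("cat", topn=3))
print(model.wv.similarity("cat", "dog"))
```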
This project is licensed under the MIT License - see the LICENSE.md file for details