
Commit

add specification link
Cheng-Lin-Li committed Sep 29, 2017
1 parent 1509319 commit bd643e2
Showing 3 changed files with 10 additions and 10 deletions.
2 changes: 1 addition & 1 deletion FastMap/README.md
@@ -11,7 +11,7 @@ The objects listed in fastmap-data.txt are actually the words in fastmap-wordlis

## Technical Specification and Report

-Click ** [Here](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/FastMap/TechnicalSpecification-%5BPCA_FastMap%5D-%5B1.0%5D-%5B20160929%5D.pdf) ** to read the detailed specification and report.
+Click [** Here **](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/FastMap/TechnicalSpecification-%5BPCA_FastMap%5D-%5B1.0%5D-%5B20160929%5D.pdf) to read the detailed specification and report.


#### Usage: python FastMap.py
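As a rough illustration of the projection step FastMap is known for, here is a minimal sketch of the textbook first-coordinate formula. It is not taken from the repository's FastMap.py; `dist`, `a`, `b`, and `o` are placeholder names for a distance function, the two pivot objects, and the object being projected.

```python
# Minimal sketch of FastMap's first-coordinate projection (cosine law).
# Not the repository's implementation; `dist` is any distance function,
# `a` and `b` are the chosen pivot objects, `o` is the object to project.
def fastmap_coordinate(dist, a, b, o):
    d_ab = dist(a, b)
    if d_ab == 0:
        return 0.0
    return (dist(a, o) ** 2 + d_ab ** 2 - dist(b, o) ** 2) / (2 * d_ab)
```

Subsequent coordinates come from repeating this step on residual distances, which is where the dimensionality reduction comes from.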
14 changes: 7 additions & 7 deletions README.md
@@ -11,15 +11,15 @@ Thanks,

|Algorithm|Description|Link|
|------|------|--------|
-|Decision Tree|By measuring information gain via calculating the entropy of previous observations, the decision tree algorithm may help us predict decisions or results|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/DecisionTree)|
-|Fast Map|An approach for dimensionality reduction|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/FastMap)|
-|Gaussian Mixture Models (GMMs)|GMMs are among the most statistically mature methods for data clustering (and density estimation)|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/GMM)|
+|Decision Tree|By measuring information gain via calculating the entropy of previous observations, the decision tree algorithm may help us predict decisions or results|[Specification](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/DecisionTree/TechnicalSpecification-%5BDecisionTree%5D-%5B1.1%5D-%5B20160929%5D.pdf) and [Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/DecisionTree)|
+|Fast Map|An approach for dimensionality reduction|[Specification](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/FastMap/TechnicalSpecification-%5BPCA_FastMap%5D-%5B1.0%5D-%5B20160929%5D.pdf) and [Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/FastMap)|
+|Gaussian Mixture Models (GMMs)|GMMs are among the most statistically mature methods for data clustering (and density estimation)|[Specification](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/GMM/INF552-TechnicalSpecification-%5Bk-means_EM-GMM%5D-%5B1.2%5D-%5B20170515%5D.pdf) and [Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/GMM)|
|Hierarchical clustering (HAC)|HAC seeks to build a hierarchy of clusters from the bottom up or the top down. This is a bottom-up implementation.|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/HAC)|
-|Hidden Markov model (HMM) and Viterbi|HMM is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (i.e. hidden) states. The Viterbi algorithm is used to compute the most probable path (as well as its probability). It requires knowledge of the parameters of the HMM and a particular output sequence, and it finds the state sequence that is most likely to have generated that output sequence. It works by finding a maximum over all possible state sequences. In sequence analysis, this method can be used, for example, to predict coding vs. non-coding sequences.|[Viterbi Algorithm Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/HMM)|
+|Hidden Markov model (HMM) and Viterbi|HMM is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (i.e. hidden) states. The Viterbi algorithm is used to compute the most probable path (as well as its probability). It requires knowledge of the parameters of the HMM and a particular output sequence, and it finds the state sequence that is most likely to have generated that output sequence. It works by finding a maximum over all possible state sequences. In sequence analysis, this method can be used, for example, to predict coding vs. non-coding sequences.|[Specification](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/HMM/INF552-TechnicalSpecification-%5BHMM%5D-%5B1.0%5D-%5B20161203%5D.pdf) and [Viterbi Algorithm Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/HMM)|
|K-Means|One of the most famous and easiest-to-understand clustering algorithms|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/K-Means)|
-|Neural Network|The foundational algorithm of deep learning|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/NeuralNetwork)|
-|PCA|An algorithm for dimension reduction|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/PCA)|
-|Neural Network and Long Short Term Memory (LSTM)|A project that implements a Neural Network and a Long Short-Term Memory (LSTM) network to predict stock prices in Python 3 with TensorFlow|[Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/TensorFlow)|
+|Neural Network|The foundational algorithm of deep learning|[Specification](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/NeuralNetwork/INF552-TechnicalSpecification-%5BNeuralNetwork%5D-%5B1.0%5D-%5B20161104%5D.pdf) and [Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/NeuralNetwork)|
+|PCA|An algorithm for dimension reduction|[Specification](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/PCA/INF552-TechnicalSpecification-PCA_FastMap-%5B1.0%5D-%5B20161011%5D.pdf) and [Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/PCA)|
+|Neural Network and Long Short Term Memory (LSTM)|A project that implements a Neural Network and a Long Short-Term Memory (LSTM) network to predict stock prices in Python 3 with TensorFlow|[Project Report](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/TensorFlow/ProjectReport.pdf) and [Source Code](https://github.com/Cheng-Lin-Li/MachineLearning/tree/master/TensorFlow)|



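The HMM/Viterbi row above describes finding the state sequence that maximizes probability over all possible state paths. The following is a minimal sketch of that recursion, not the repository's HMM code; `states`, `start_p`, `trans_p`, and `emit_p` are assumed dictionary-based model parameters.

```python
# Minimal Viterbi sketch (not the repository's HMM implementation).
# obs: observation sequence; states: iterable of state names;
# start_p[s], trans_p[p][s], emit_p[s][o]: assumed dict-based HMM parameters.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of the best path ending in state s at time t, predecessor state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                ((V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p) for p in states),
                key=lambda x: x[0],
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state to recover the most probable path.
    prob, last = max(((V[-1][s][0], s) for s in states), key=lambda x: x[0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return prob, list(reversed(path))
```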
4 changes: 2 additions & 2 deletions TensorFlow/README.md
@@ -3,8 +3,8 @@
## Project:
One of the most challenging problems in the financial industry is predicting stock prices or indices. On the other hand, machine learning and big data techniques for vision recognition have matured considerably over the last decade. This research adopts Multi-Layer Perceptron (MLP) and Long Short-Term Memory (LSTM) neural networks to compete with the Dynamic-radius Species-conserving Genetic Algorithm (DSGA) for short-term stock price prediction. The results indicate that MLP may have better potential than DSGA for short-term stock price prediction and that LSTMs may require more training data to surpass DSGA.

-## Technical Specification and Report
-Click [**Here**](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/TensorFlow/ProjectReport.pdf) to read the report.
+## The Project Report
+Click [** Here **](https://github.com/Cheng-Lin-Li/MachineLearning/blob/master/TensorFlow/ProjectReport.pdf) to read the report.

### Usage: python StockPriceForecasting.py (or StockPriceForecasting-LSTM.py)

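As a hedged sketch of the kind of LSTM regressor the project summary above refers to (not the repository's StockPriceForecasting-LSTM.py; the window length, layer sizes, and synthetic series are illustrative assumptions, and the tf.keras API is used rather than whatever the original scripts use):

```python
# Minimal sketch of an LSTM next-step regressor (tf.keras), not the
# repository's StockPriceForecasting-LSTM.py. Window length, layer sizes,
# and the synthetic series are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 30  # hypothetical look-back length in time steps

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted next value
])
model.compile(optimizer="adam", loss="mse")

# Toy data: a noisy sine wave standing in for a price series.
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```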
