https://www.kaggle.com/c/inria-bci-challenge
#### Final result: 27th out of 260
Below are the major steps we planned to take, along with the results recorded as we completed them.
- Figure out good features
Compare the different datasets with GBM, using the same model on every dataset: GBM(500, 0.05, 1)
Compare the different datasets with Multinom, using the same model on every dataset: Multinom(100, 10)
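The two baseline models above can be sketched as follows. This is a hedged interpretation: the original was likely built with R packages (gbm/nnet via caret), so the mapping of GBM(500, 0.05, 1) to (trees, shrinkage, interaction depth) and of Multinom(100, 10) to (max iterations, weight decay) is an assumption, shown here with scikit-learn equivalents.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# GBM(500, 0.05, 1): assumed to mean 500 trees, shrinkage 0.05,
# interaction depth 1 (decision stumps).
gbm = GradientBoostingClassifier(
    n_estimators=500,
    learning_rate=0.05,
    max_depth=1,
)

# Multinom(100, 10): assumed to mean maxit=100 and decay=10;
# in scikit-learn the L2 decay roughly maps to C = 1/decay.
multinom = LogisticRegression(max_iter=100, C=1.0 / 10)
```

Running the same fixed configuration on every feature set keeps the comparison about the features rather than about hyperparameter tuning.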
- Find the best classifier
Each dataset has a separate file where all CV results are preserved
Cz; 1300ms; +meta
Cz; 1300ms; PCA; +meta
Cz; 1300ms
8ch; 700ms
8ch; 1300ms
8ch; 1300ms; PCA
Cz; 1300ms; FFT
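A cross-validation sweep over feature sets like the ones listed above could look like the sketch below. The dataset loading and the choice of 5-fold AUC scoring are assumptions for illustration; `evaluate` is a hypothetical helper, not the project's actual code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score


def evaluate(datasets):
    """Score each feature set with the same fixed GBM.

    datasets: dict mapping a dataset name (e.g. "Cz; 1300ms")
    to a (X, y) pair. Returns name -> mean 5-fold CV AUC.
    """
    model = GradientBoostingClassifier(
        n_estimators=500, learning_rate=0.05, max_depth=1
    )
    return {
        name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        for name, (X, y) in datasets.items()
    }
```

Saving the returned scores per dataset mirrors the per-dataset CV result files mentioned above.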
- Add META data on top just in case
Kaggle allows submitting two models for final judging. We plan to submit two: one without metadata (subject id, session id, feedback time) and one with it. The theory is that if the dataset were big enough, the metadata should not provide any useful information. But here the test set contains only 10 subjects, so it is better to be on the safe side.
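Building the "+meta" variant amounts to appending the metadata columns to the feature matrix. A minimal sketch, assuming pandas frames and column names `subject_id`, `session_id`, and `feedback_time` (the names are assumptions based on the description above):

```python
import pandas as pd


def add_meta(features: pd.DataFrame, meta: pd.DataFrame) -> pd.DataFrame:
    """Concatenate EEG features with per-trial metadata columns.

    features: one row per trial of EEG-derived features.
    meta: per-trial metadata aligned row-for-row with `features`.
    """
    meta_cols = meta[["subject_id", "session_id", "feedback_time"]]
    return pd.concat(
        [features.reset_index(drop=True), meta_cols.reset_index(drop=True)],
        axis=1,
    )
```

Keeping this as a separate step makes it easy to train the with-meta and without-meta submissions from the same feature pipeline.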