
LR-SVM-principles-and-optimization

This is my undergraduate thesis, which contains part of the code and experimental results.

Simulation of LR and SVM

I found that if LR is trained without a penalty term on linearly separable data, it never converges: the loss function decreases continually but never reaches 0. This is because the sigmoid never outputs exactly 0 or 1 for finite parameters, so the loss can only approach 0 as the parameters grow without bound.

Also interesting: the L2 norm of the LR parameters keeps increasing during training.

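The effect is easy to reproduce. Below is a minimal sketch (toy data made up for illustration, not the thesis code): plain gradient descent on unregularized LR over a tiny linearly separable set keeps shrinking the loss while $\|\theta\|_2$ grows.

```python
import numpy as np

# Toy linearly separable data (made up for illustration), plus a bias column.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
X = np.hstack([X, np.ones((4, 1))])
y = np.array([1.0, 1.0, 0.0, 0.0])

theta = np.zeros(3)
lr = 0.5

for step in range(1, 20001):
    z = X @ theta
    p = 1.0 / (1.0 + np.exp(-z))
    # Numerically stable cross-entropy, log(1 + e^z) - y*z, with no penalty term.
    loss = np.mean(np.logaddexp(0.0, z) - y * z)
    theta -= lr * (X.T @ (p - y)) / len(y)
    if step % 5000 == 0:
        print(f"step {step:6d}  loss {loss:.6f}  ||theta||_2 = {np.linalg.norm(theta):.3f}")
```

The printed loss keeps falling without ever hitting 0, while the norm grows without bound; adding an L2 penalty makes the optimum finite, so training converges.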

When I plot these parameters, it shows that training keeps reducing the loss function (the SSE is still decreasing), but this is actually wasted work: the separating boundary has already been found, and further updates mostly just rescale the parameters.

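One way to see why the extra training is useless is to track the normalized direction $\theta / \|\theta\|_2$ in the same toy setup (again a sketch, not the thesis code): the direction, which is all that determines the decision boundary, settles early, while only the norm keeps growing.

```python
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
X = np.hstack([X, np.ones((4, 1))])
y = np.array([1.0, 1.0, 0.0, 0.0])

theta = np.zeros(3)
for step in range(1, 50001):
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    theta -= 0.5 * (X.T @ (p - y)) / len(y)
    if step in (100, 1000, 10000, 50000):
        # The boundary depends only on theta's direction, not its scale.
        print(f"step {step:6d}  direction {np.round(theta / np.linalg.norm(theta), 3)}"
              f"  ||theta||_2 = {np.linalg.norm(theta):.2f}")
```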

The SVM results show none of the behavior above. Since the hinge loss can actually decrease to 0, in the fully linearly separable case the SVM quickly finds the separating boundary.

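For contrast, here is the same toy set under a pure hinge loss with subgradient descent (again a sketch, not the thesis code, and with the usual margin penalty omitted to match the comparison): once every point has margin at least 1, the loss is exactly 0 and the updates stop.

```python
import numpy as np

# Same toy data, labels in {-1, +1} for the hinge loss.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
X = np.hstack([X, np.ones((4, 1))])
y = np.array([1.0, 1.0, -1.0, -1.0])

theta = np.zeros(3)
lr = 0.1

for step in range(1, 2001):
    margins = y * (X @ theta)
    loss = np.mean(np.maximum(0.0, 1.0 - margins))
    if loss == 0.0:
        print(f"hinge loss reached exactly 0 at step {step}, theta = {theta}")
        break
    # Subgradient step: only points violating the margin contribute.
    violating = margins < 1.0
    theta += lr * (X[violating].T @ y[violating]) / len(y)
```

Unlike the logistic loss, the hinge loss is exactly 0 on any finite $\theta$ that separates the data with margin 1, so there is no incentive to keep inflating the parameters.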

I also provide paper.ipynb, which has already been run. (The $\theta$ values in the figures differ slightly from the $\theta$ values in paper.ipynb because the notebook was re-run.)
