Natural language inference on SNLI, with POS tag probing of the trained model.

SNLI

* Achieved a training accuracy of 76.3% and a validation accuracy of 75.57% on the NLI task.
* Next, I carried out POS tag probing.
* We probe a model representation from the previous task, i.e. a specific frozen layer of the model, and build the probe on top of that layer. We take the 4th layer from the end and add it to a keras.Sequential model.
* We then add an RNN layer with 2 units and return_sequences=True, so that per-token outputs are passed on to the next layer (a minimal sketch of this setup is given after this list).
* We see a significant increase in validation accuracy, because the Bi-LSTM model's representations capture POS tag information. On the probe confounder problem: it may seem that, since an RNN is more complex than an MLP, the probe could simply be memorizing the outputs of the probed layer under supervision. In fact, the probe is small, with only ~800 learned parameters. The literature prefers shallow MLP probes, which will be part of future work; it also uses 'selectivity' as a metric of probe quality, where selectivity = linguistic task accuracy - control task accuracy.
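The sketch below illustrates the probing setup described above. It is a minimal reconstruction, not the exact code from this repo: the trained SNLI model is assumed to be available as `base_model`, its 4th-from-last layer is assumed to output a per-token sequence of hidden states, `num_pos_tags` stands for the size of the POS tag set, and the probe RNN is taken to be a vanilla `SimpleRNN` with 2 units.

```python
from tensorflow import keras

def build_pos_probe(base_model: keras.Model, num_pos_tags: int) -> keras.Model:
    # Expose the frozen representation: everything up to the 4th layer from the end.
    # (`base_model` is an assumed handle to the trained SNLI model.)
    feature_extractor = keras.Model(
        inputs=base_model.input,
        outputs=base_model.layers[-4].output,
    )
    feature_extractor.trainable = False  # the probed layer stays frozen

    probe = keras.Sequential([
        feature_extractor,
        # Small RNN probe: 2 units, return_sequences=True so a prediction
        # is produced for every token position.
        keras.layers.SimpleRNN(2, return_sequences=True),
        # Per-token POS tag classifier.
        keras.layers.Dense(num_pos_tags, activation="softmax"),
    ])
    probe.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return probe
```

Selectivity could then be estimated by training the same probe a second time on a control task (e.g. randomly reassigned tags) and subtracting its accuracy from the POS tagging accuracy.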
