i-machine-think/awesome-compositionality
# awesome-compositionality

A list of resources dedicated to compositionality.

Contributions are most welcome! Please check the contribution guidelines for an example.

## Table of Contents

- Architectural Bias
- Datasets
- Interpretability
- Miscellaneous
- Psycholinguistics
- Cognitive Sciences

## Architectural Bias

- David Ha, Andrew Dai, Quoc V. Le. *HyperNetworks*, arXiv:1609.09106, ICLR 2017.
  - The main idea is to use a hypernetwork to generate the weights of another network. A hyper RNN produces an "embedding vector" that is used to dynamically scale the weights of the main RNN at every timestep. The approach is evaluated on several sequence tasks: character-level language modeling (PTB, Wiki), machine translation, and handwriting prediction, where it marginally outperforms the baselines.
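The per-timestep weight scaling described above can be sketched roughly as follows. This is a minimal toy illustration, not the paper's implementation: the dimensions, the projection matrix `P`, and the stand-in embedding `z` are all made-up assumptions; the key idea shown is that a vector produced from a hypernetwork's embedding rescales the rows of the main RNN's static weight matrix at each step.

```python
import math
import random

random.seed(0)

def matvec(W, x):
    # Plain-Python matrix-vector product.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Hypothetical toy sizes (not from the paper).
HIDDEN, EMBED = 4, 2

# Static hidden-to-hidden weights of the main RNN.
W = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
# Hypothetical projection mapping the hypernetwork embedding z to a
# per-row scale vector d(z).
P = [[random.uniform(-1, 1) for _ in range(EMBED)] for _ in range(HIDDEN)]

def main_rnn_step(h, z):
    # d(z): dynamic scaling vector derived from the embedding.
    d = matvec(P, z)
    # Scale each row of the static weights before the recurrent update.
    W_eff = [[d[i] * W[i][j] for j in range(HIDDEN)] for i in range(HIDDEN)]
    return [math.tanh(a) for a in matvec(W_eff, h)]

h = [0.1] * HIDDEN
for t in range(3):
    z = [math.sin(t), math.cos(t)]  # stand-in for the hyper RNN's per-step embedding
    h = main_rnn_step(h, z)

print(len(h))  # the hidden state keeps its dimensionality across scaled updates
```

In the paper the embedding comes from a second, smaller RNN running alongside the main one; here a fixed function of the timestep stands in for it to keep the sketch self-contained.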

## Datasets

## Interpretability

## Miscellaneous

## Psycholinguistics

- Christiansen, Morten H., and Nick Chater. *The Now-or-Never bottleneck: A fundamental constraint on language*. Behavioral and Brain Sciences 39 (2016).
  - Combines psycholinguistic evidence to argue that human language processing is tightly constrained by limited time and memory, implying a process in which the brain eagerly chunks incoming input and recodes it into increasingly abstract representations before the raw signal is lost.

## Cognitive Sciences