- Tagline: Learning a generative model for music data using a small amount of examples.
- Date: December 2017
- Category: Fundamental Research
- Author(s): Hugo Larochelle, Chelsea Finn, Sachin Ravi
- Brainstorming for datasets phase: currently collecting ideas for dataset collection for lyrics and MIDI data. See the Issues page for details.
- Collecting actual data for lyrics and MIDI.
- Decide on and implement a pre-processing scheme for the data (specifically for MIDI); one possible approach is sketched after this list.
- Release training script and model API code.
- Experiment with new models on both datasets.
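For the MIDI pre-processing item above, here is a minimal sketch of one possible scheme: flattening each file into a chronological stream of note events. It assumes the third-party pretty_midi package, and the (pitch, onset, duration) encoding is purely illustrative, not a decided design.

```python
import pretty_midi  # third-party package, assumed available

def midi_to_events(path):
    """Flatten a MIDI file into a time-sorted list of (pitch, onset, duration)."""
    pm = pretty_midi.PrettyMIDI(path)
    events = []
    for inst in pm.instruments:
        if inst.is_drum:
            continue  # percussion carries no meaningful pitch for this encoding
        for note in inst.notes:
            events.append((note.pitch, note.start, note.end - note.start))
    # Sort by onset time so the model sees a single chronological sequence.
    return sorted(events, key=lambda e: e[1])
```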
See the Introduction section of the proposal.
See the Experiments section of the proposal.
See the Datasets subsection of the proposal.
See the Related Work section of the proposal.
- Please begin by reading papers from the Reading List to familiarize yourself with work in this area.
Both the lyrics and freemidi data can be downloaded here. Place the raw-data
directory at the root of the repository and unzip both .zip files inside their
data subdirectories.
For example, for the lyrics data, make sure the following files and directories exist at the given path:
$ ls Few-Shot-Music-Generation/raw-data/lyrics/
>> lyrics_data test.csv train.csv val.csv
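To double-check the layout programmatically, a quick hypothetical check (not part of the repo) might look like:

```python
import os

# Hypothetical sanity check: verify the unpacked lyrics data is where
# the training script expects it.
LYRICS_DIR = os.path.join("Few-Shot-Music-Generation", "raw-data", "lyrics")
for name in ("lyrics_data", "train.csv", "val.csv", "test.csv"):
    path = os.path.join(LYRICS_DIR, name)
    if not os.path.exists(path):
        raise FileNotFoundError("missing %s -- did you unzip the data?" % path)
print("lyrics data is in place")
```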
Sample run (check the different yaml files for different ways to run):
$ CONFIG=lyrics.yaml
$ MODEL=lstm_baseline.yaml
$ TASK=5shot.yaml
$ python -um train.train --data=config/${CONFIG} --model=config/${MODEL} --task=config/${TASK} --checkpt_dir=/tmp/fewshot/lstm_baseline
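Each flag points at a YAML config. As a rough illustration of how such configs could be combined, here is a minimal sketch assuming PyYAML and that each file parses to a flat dictionary; the actual parsing inside train.train may differ:

```python
import yaml  # PyYAML, assumed installed

def load_config(path):
    with open(path) as f:
        return yaml.safe_load(f)

data_cfg = load_config("config/lyrics.yaml")
model_cfg = load_config("config/lstm_baseline.yaml")
task_cfg = load_config("config/5shot.yaml")

# Merge into one settings dict; later files win on duplicate keys.
config = dict(data_cfg)
config.update(model_cfg)
config.update(task_cfg)
```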
To view the TensorBoard logs (only works for the lstm_baseline.yaml model):
$ tensorboard --logdir=/tmp/fewshot
If you have any trouble running the code, please create an issue describing your problem.
Please log all your results in this spreadsheet.