# make a new folder and turn it into a dat store
mkdir foo
cd foo
dat init
# put a JSON object into dat
echo '{"hello": "world"}' | dat --json
# stream a CSV into dat
cat some_csv.csv | dat --csv
cat some_csv.csv | dat --csv -d $'\r\n' # custom line delimiter, --delimiter= works too
# specify a primary key to use
echo $'a,b,c\n1,2,3' | dat --csv --primary=a
echo $'{"foo":"bar"}' | dat --json --primary=foo
# retrieve a single row by key (for debugging)
dat crud get foo
dat crud get hello
# stream the most recent of all rows
dat cat
# view raw data in the store
dat dump
# compact data (removes unnecessary metadata)
dat compact
# start a dat server
dat serve
# delete the dat folder (removes all data + history)
rm -rf .dat
These are subject to change. See lib/commands.js for the source code.
dat init turns the current folder into a new empty dat store

dat init --remote https://localhost:6461/_archive

passing --remote instead initializes the new dat store by copying a remote dat server
you can pipe newline-separated JSON data into dat on stdin and it will be stored. otherwise, running dat with no arguments just shows the usage instructions
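For example, a couple of newline-separated JSON rows (the data here is made up purely for illustration) can be stored in a single pipe using the --json flag shown above:

# store two newline-separated JSON rows in one pipe
printf '{"name":"alice"}\n{"name":"bob"}\n' | dat --json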
dat pull pulls new changes/data from a remote dat server
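A minimal sketch of how this might be used, assuming the store was created with dat init --remote so it already knows which server to pull from:

dat pull # fetch any rows added on the remote since the store was last synced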
dat serve starts an http + tcp server on port 6461 that serves the data in the current dat store
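As a sketch, one machine could publish its store and a second folder could clone it, reusing the _archive URL from the dat init --remote example above (whether that exact path is what dat serve exposes is an assumption here):

dat serve &                                        # serve the current store on port 6461
mkdir ../foo-copy && cd ../foo-copy                # hypothetical folder for the clone
dat init --remote https://localhost:6461/_archive  # copy the served data into it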
dat compact removes duplicate copies of documents
dat cat writes newline-separated JSON objects to stdout, one object per key, only the newest version of each key, sorted by key
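For instance, a quick way to eyeball that output:

dat cat | head -n 3 # print the newest version of the first three keys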
dat dump dumps the entire dat store to stdout as JSON
dat crud is used for debugging. usage: dat crud <action> <key> <value>, e.g. dat crud get foo or dat crud put foo bar