A tool that makes interacting with a database easier to script and automate.

This project is a major work in progress. This README is meant to guide development, and nothing is expected to work (yet).

The initial design and implementation support only Microsoft SQL Server. Other databases will be added as time permits; the idea is not to make anything specific to a particular database unless needed.
Connect to a database using the `-x` flag followed by a connection string. To make the connection persistent, use `-d` to keep the connection open in the background for use by subsequent commands. Without `-d`, the connection closes immediately after running any other operations (such as queries or commands; see below for details).
Example connecting to a SQL Server database:

```console
$ db -d -x "Server=localhost;User=SA;Password=P@ssw0rd;"
default connection successful
```
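For a one-off operation, `-d` can simply be omitted; the sketch below assumes the connection string can be combined with a query (`-q`, described below) in a single invocation, and that the connection closes as soon as the operation finishes:

```console
$ db -x "Server=localhost;User=SA;Password=P@ssw0rd;" -q "SELECT 1"
1
```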
Optionally, the connection can be named with the `-n` flag:

```console
$ db -d -n dev1 -x "Server=localhost;User=SA;Password=P@ssw0rd;"
dev1 connection successful
```
The active database connection can be queried using the `-q` flag followed by the query:

```console
$ db -q "SELECT id, name FROM users"
id | name
----|--------------
1 | John Johnson
2 | Paul Paulson
```
Standard input can be used instead by passing `-q -`:

```console
$ echo "SELECT 1" | db -q - -o users.json
1
```
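Longer queries can be redirected from a file in the same way (a sketch; `monthly-report.sql` and `report.csv` are hypothetical file names):

```console
$ db -q - < monthly-report.sql -o report.csv
```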
A new connection can be created by using `-x` (see above). To use a connection other than `default`, use the `-n` argument to specify the existing connection:

```console
$ db -n conn1 -q "SELECT 1"
1
```
Output can be saved to a file with the `-o` flag. The output format is inferred from the output file name, defaulting to CSV for unknown extensions. To change the output format, `-f <format>` can be specified. Currently supported output formats:

- `csv` (default)
- `json`
- `markdown`
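As an illustration of selecting a format explicitly, the earlier users query rendered as JSON might look roughly like the following; the exact shape of the JSON output (an array of row objects) is an assumption, not a confirmed format:

```console
$ db -q "SELECT id, name FROM users" -f json
[
  { "id": 1, "name": "John Johnson" },
  { "id": 2, "name": "Paul Paulson" }
]
```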
Like queries, commands can be used with the `-c` argument:

```console
$ db -c "INSERT INTO users (id, name, email) VALUES (1, 'jake', '[email protected]')"
1 row affected
```
Command input can be read from a file or standard input, just like queries, by using `-c -`:

```console
$ db -c -
10 rows affected
```
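A complete invocation might redirect a script file into standard input, for example (a sketch; `seed-users.sql` is a hypothetical file containing ten statements):

```console
$ db -c - < seed-users.sql
10 rows affected
```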
Parameterized commands can also be performed using the `$<variable>` syntax and passing arguments by name with the `-p <variable>=<value>` syntax:

```console
$ db -c "INSERT INTO users (id, name, email) VALUES ($id, $name, $email)" \
    -p id=1 -p "name=Phillip Porter" -p "[email protected]"
1 row affected
```
Instead of passing them one by one, parameters can be read from a file using `-p <filename>`, where supported file formats are `toml`, `json`, `csv`, and `markdown` tables.
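For example, a CSV parameter file for the insert above might look like the following; mapping column headers to parameter names is an assumption, and the rows are made-up sample data:

```console
$ cat users.csv
id,name,email
1,Phillip Porter,[email protected]
2,Mary Marigold,[email protected]
```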
If more than one row of data is to be used, the `-s` option enables streaming mode, where each row of the parameter file is used and the command runs multiple times. If batch size is important, it can be specified with the `-b <max-batch-size>` parameter.
$ db -c "INSERT INTO users (id, name, email) VALUES ($id, $name, $email)" \
-s -p ./users.csv
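If batching matters for a large parameter file, the maximum batch size might be capped like this (a sketch; `500` is an illustrative value, and the exact batching behaviour is an assumption):

```console
$ db -c "INSERT INTO users (id, name, email) VALUES ($id, $name, $email)" \
    -s -p ./users.csv -b 500
```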
If generating a SQL script is needed, the `-o` flag can be used with a filename ending in `.sql`, or by specifying `-f sql`:

```console
$ db -c "INSERT INTO users (id, name, email) VALUES ($id, $name, $email)" \
    -s -p ./users.csv -o insert-users.sql
```
Example writing the generated SQL to standard out (and not executing it):

```console
$ db -c "INSERT INTO users (id, name, email) VALUES ($id, $name, $email)" \
    -s -p ./users.csv -o -
INSERT INTO users (id, name, email) VALUES (1, 'Phillip Porter', '[email protected]')
```
To build the project, nothing special should be required:

```console
cargo build -r
```
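Assuming the resulting binary is named `db` (as in the examples above), it can then be run straight from cargo's release directory; a sketch:

```console
$ ./target/release/db -d -x "Server=localhost;User=SA;Password=P@ssw0rd;"
default connection successful
```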
Inspired by the likes of jq.
The contents of this repository are dual-licensed under the MIT OR Apache 2.0 License. That means you can choose either the MIT license or the Apache-2.0 license when you re-use this code. See LICENSE-MIT or LICENSE-APACHE for more information on each specific license.