Simple key/value storage focused on high data compression.
Note: the library is unstable; compatibility with older versions is not guaranteed.
Keys are stored in memory, while values are stored on disk.
Data is stored in a log-based structure. Records are grouped into blocks to increase the compression level.
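The benefit of grouping records into blocks can be sketched with the standard library's DEFLATE compressor (a stand-in here; zkv's compressors are configurable, see below). Compressing many small records as one block avoids per-record compressor overhead and lets repeated patterns across records be exploited:

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
)

// compress returns data compressed with DEFLATE (a stand-in compressor).
func compress(data []byte) []byte {
	var buf bytes.Buffer
	w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
	w.Write(data)
	w.Close()
	return buf.Bytes()
}

// compressedSizes returns the total size of 100 small records
// compressed one by one versus grouped into a single block.
func compressedSizes() (individual, grouped int) {
	record := []byte(`{"key":"user:42","value":"active"}`)
	var block bytes.Buffer
	for i := 0; i < 100; i++ {
		individual += len(compress(record)) // each record compressed alone
		block.Write(record)                 // same record appended to one block
	}
	grouped = len(compress(block.Bytes()))
	return
}

func main() {
	individual, grouped := compressedSizes()
	fmt.Printf("individual: %d bytes, grouped block: %d bytes\n", individual, grouped)
}
```

The grouped block comes out far smaller than the sum of individually compressed records, which is exactly why flushing tiny blocks (see Flush below) hurts the compression ratio.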
Features:
- High compression ratio;
Disadvantages:
- Keys are stored in memory;
- Medium write speed;
- Slow reads;
- Deleting or replacing data does not reclaim free space;
- Every write request blocks the whole storage.
File structure:
header [3]byte // []byte("zkv")
version [1]byte // major version
compressor id [1]byte
[]blocks
block length [8]byte // compressed block length
data length [8]byte // decompressed block length
[]record
action [1]byte // 1 - add/overwrite record, 2 - remove record
key []byte // binary-encoded
value []byte // binary-encoded, only for records with action == actionAdd
Open or create a new storage:
import "github.com/nxshock/zkv"
db, err := zkv.Open("path_to_file.zkv")
defer db.Close() // don't forget to close the storage
Open storage with custom config:
config := &zkv.Config{
	BlockDataSize: 64 * 1024,          // set custom block size
	Compressor:    zkv.ZstdCompressor, // choose from [NoneCompressor, XzCompressor, ZstdCompressor]
	// or create a custom compressor that matches the zkv.Compressor interface
	ReadOnly: false} // set to true if the storage must be read-only
db, err := zkv.OpenWithConfig("path_to_file.zkv", config)
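A custom compressor could look like the sketch below. It assumes zkv.Compressor is a Compress/Decompress pair over byte slices; check the library's interface definition before wiring this in. GzipCompressor is a hypothetical name, not part of zkv:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// GzipCompressor is a hypothetical custom compressor. The method set
// assumes the zkv.Compressor interface exposes Compress and Decompress
// over byte slices; verify against the library before use.
type GzipCompressor struct{}

func (GzipCompressor) Compress(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func (GzipCompressor) Decompress(data []byte) ([]byte, error) {
	r, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

func main() {
	c := GzipCompressor{}
	compressed, _ := c.Compress([]byte("hello zkv"))
	original, _ := c.Decompress(compressed)
	fmt.Println(string(original)) // prints: hello zkv
}
```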
List of available compressors:
zkv.ZstdCompressor (default) - medium compression ratio, fast compression, medium decompression speed;
zkv.XzCompressor - high compression ratio, slow speed;
zkv.NoneCompressor - no compression, high speed.
Write data:
err := db.Set(key, value) // key and value can be any type
Read data:
var value ValueType
err := db.Get(key, &value)
Delete data:
err := db.Delete(key) // returns a nil error if the key does not exist
Flush data to disk (for example, to avoid losing buffered data):
err := db.Flush()
Frequent calls reduce the compression ratio because the flushed data is not grouped into blocks. If you want to write data to disk on every record write, open the storage with Config.BlockDataSize = 1.
Get number of stored records:
count := db.Count()
Iterate over keys and values in written order:
f := func(keyBytes, valueBytes []byte) bool {
var key KeyType
var value ValueType
err := zkv.Decode(keyBytes, &key)
err = zkv.Decode(valueBytes, &value)
// now you can work with key and value
return true // return true to continue iterating, false to stop
}
err := db.Iterate(f)
This provides the maximum possible read speed.
Shrink the storage size by removing overwritten and deleted records from the file:
err := db.Shrink(newFilePath)
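The idea behind shrinking can be sketched independently of the library: replay the log keeping only the last action per key, then write out the surviving records. This mirrors the concept, not zkv's actual implementation:

```go
package main

import "fmt"

// record is one log entry: action 1 adds/overwrites, action 2 removes.
type record struct {
	action byte
	key    string
	value  string
}

// compact replays the log and returns only the live records,
// in first-seen key order.
func compact(log []record) []record {
	live := map[string]string{}
	order := []string{} // keys in first-seen order
	for _, r := range log {
		if _, seen := live[r.key]; !seen && r.action == 1 {
			order = append(order, r.key)
		}
		switch r.action {
		case 1:
			live[r.key] = r.value // add or overwrite
		case 2:
			delete(live, r.key) // remove
		}
	}
	out := []record{}
	for _, k := range order {
		if v, ok := live[k]; ok {
			out = append(out, record{1, k, v})
		}
	}
	return out
}

func main() {
	log := []record{
		{1, "a", "1"},
		{1, "b", "2"},
		{1, "a", "3"}, // overwrites "a"
		{2, "b", ""},  // deletes "b"
	}
	for _, r := range compact(log) {
		fmt.Println(r.key, r.value) // prints: a 3
	}
}
```

Only the final state of "a" survives; the stale "a" record and the deleted "b" records are dropped, which is why Shrink reclaims the space that Delete and Set overwrites cannot.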