A light and efficient implementation of the parameter server framework. It provides clean yet powerful APIs. For example, a worker node can communicate with the server nodes by calling:
- Push(keys, values): push a list of (key, value) pairs to the server nodes
- Pull(keys): pull the values from the server nodes for a list of keys
- Wait: wait until a push or pull has finished
A simple example:
#include "ps/ps.h"

std::vector<uint64_t> key = {1, 3, 5};   // keys must be unique and in increasing order
std::vector<float> val = {1, 1, 1};      // one value per key
std::vector<float> recv_val;
ps::KVWorker<float> w;
w.Wait(w.Push(key, val));                // Push returns a timestamp; Wait blocks on it
w.Wait(w.Pull(key, &recv_val));          // pull the values for key into recv_val
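Push and Pull are in fact asynchronous: each call returns an integer timestamp immediately, and Wait blocks until the operation with that timestamp has completed. A minimal sketch of overlapping several pushes, reusing w, key, and val from the example above:

std::vector<int> ts;                     // timestamps of in-flight operations
for (int i = 0; i < 4; ++i) {
  ts.push_back(w.Push(key, val));        // returns without blocking
}
// ... overlap computation with communication here ...
for (int t : ts) w.Wait(t);              // block until every push has finished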
More features:
- Flexible and high-performance communication: zero-copy push/pull, support for dynamic-length values, and user-defined filters for communication compression
- Server-side programming: support for user-defined handles on server nodes (see the sketch below)
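For illustration, a minimal server sketch with a user-defined handle that sums pushed values and answers pulls from an in-memory store, mirroring what the library's built-in ps::KVServerDefaultHandle does. It assumes a recent ps-lite, where Start and Finalize take a customer id; older releases omit that argument:

#include <unordered_map>
#include "ps/ps.h"

// a user-defined handle: sum pushed values into a store, answer pulls from it
struct SumHandle {
  void operator()(const ps::KVMeta& req_meta, const ps::KVPairs<float>& req_data,
                  ps::KVServer<float>* server) {
    size_t n = req_data.keys.size();
    ps::KVPairs<float> res;
    if (req_meta.push) {
      for (size_t i = 0; i < n; ++i) store[req_data.keys[i]] += req_data.vals[i];
    } else {  // pull: reply with the stored value for every requested key
      res.keys = req_data.keys;
      res.vals.resize(n);
      for (size_t i = 0; i < n; ++i) res.vals[i] = store[req_data.keys[i]];
    }
    server->Response(req_meta, res);  // every request must be answered
  }
  std::unordered_map<ps::Key, float> store;
};

int main(int argc, char* argv[]) {
  ps::Start(0);  // connect this process to the scheduler
  if (ps::IsServer()) {
    auto* server = new ps::KVServer<float>(0);
    server->set_request_handle(SumHandle());
    ps::RegisterExitCallback([server]() { delete server; });
  }
  ps::Finalize(0, true);  // barrier across all nodes, then shut down
  return 0;
}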
ps-lite requires a C++11 compiler such as g++ >= 4.8. On Ubuntu >= 13.10, we can install it by
sudo apt-get update && sudo apt-get install -y build-essential git
For other platforms, install gcc >= 4.8 following the instructions for your platform. Then clone and build:
git clone https://github.com/dmlc/ps-lite
cd ps-lite && make -j4
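ps-lite nodes discover each other through DMLC_* environment variables. A hypothetical single-machine launch of one scheduler, one server, and one worker, where ./my_app stands in for any binary built against ps-lite:

export DMLC_NUM_SERVER=1            # size of the server group
export DMLC_NUM_WORKER=1            # size of the worker group
export DMLC_PS_ROOT_URI=127.0.0.1   # scheduler address
export DMLC_PS_ROOT_PORT=8000       # scheduler port
DMLC_ROLE=scheduler ./my_app &
DMLC_ROLE=server    ./my_app &
DMLC_ROLE=worker    ./my_app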
ps-lite provides asynchronous communication for other projects:
- Distributed deep neural networks: MXNet, CXXNET, Minerva, and BytePS
- Distributed high-dimensional inference, such as sparse logistic regression and factorization machines: DiFacto and Wormhole
Research papers:
- Mu Li, David G. Andersen, Jun Woo Park, Alex Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene Shekita, Bor-Yiing Su. Scaling Distributed Machine Learning with the Parameter Server. In Operating Systems Design and Implementation (OSDI), 2014.
- Mu Li, David G. Andersen, Alex Smola, and Kai Yu. Communication Efficient Distributed Machine Learning with the Parameter Server. In Neural Information Processing Systems (NIPS), 2014.