This project is officially retired and no longer maintained.
Logplex is a distributed syslog log router, able to merge and redistribute multiple incoming streams of syslog logs to individual subscribers.
A typical logplex installation is a cluster of distributed Erlang nodes connected in a mesh, with one or more redis instances (which can be sharded). The cluster may or may not sit behind a load balancer or proxy; ideally, any node can be contacted at any time.
Applications sitting on their own node or server send their log messages either to a local syslog or through log-shuttle, which then forwards them to one instance of a logplex router.
On the other end of the spectrum, consumers may subscribe to a logplex instance, which merges the incoming streams of log messages and forwards them to the subscriber. Alternatively, a consumer may register a given endpoint (say, a database behind the proper API), and logplex nodes will push messages to that endpoint as they come in.
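Registering such an endpoint is done through the HTTP API. As a purely hypothetical sketch (the drain-creation route and payload are not shown in this README, so treat the path below as an assumption and check logplex_api for the authoritative routes; the credentials are the local ones created in the example further down):
$ curl -d '{"url": "https://example.com/logs"}' \
    http://local:password@localhost:8001/v2/channels/1/drains    # hypothetical route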
For more details, see the stream management documentation in doc/.
As of Logplex v93, Logplex requires Erlang 18. Logplex is currently tested against OTP-18.1.3.
Prior versions of Logplex are designed to run on R16B03 and 17.x.
$ ./rebar3 as public compile
run
$ INSTANCE_NAME=`hostname` \
LOGPLEX_CONFIG_REDIS_URL="redis://localhost:6379" \
LOGPLEX_REDGRID_REDIS_URL="redis://localhost:6379" \
LOCAL_IP="127.0.0.1" \
LOGPLEX_COOKIE=123 \
LOGPLEX_AUTH_KEY=123 \
erl -name logplex@`hostname` -pa ebin -env ERL_LIBS deps -s logplex_app -setcookie ${LOGPLEX_COOKIE} -config sys
Given an empty local redis (v2.6ish):
$ ./rebar3 as public,test compile
$ INSTANCE_NAME=`hostname` \
LOGPLEX_CONFIG_REDIS_URL="redis://localhost:6379" \
LOGPLEX_SHARD_URLS="redis://localhost:6379" \
LOGPLEX_REDGRID_REDIS_URL="redis://localhost:6379" \
LOCAL_IP="127.0.0.1" \
LOGPLEX_COOKIE=123 \
ERL_LIBS=`pwd`/deps/:$ERL_LIBS \
ct_run -spec logplex.spec -pa ebin
Runs the common test suite for logplex.
Requires a working install of Docker and Docker Compose. Follow the installation steps outlined at docs.docker.com.
docker-compose build # Run once
docker-compose run compile # Run every time source files change
docker-compose up logplex # Run logplex post-compilation
To connect to the above logplex Erlang shell:
docker exec -it logplex_logplex_1 bash -c "TERM=xterm bin/connect"
docker-compose run test
create creds
1> logplex_cred:store(logplex_cred:grant('full_api', logplex_cred:grant('any_channel', logplex_cred:rename(<<"Local-Test">>, logplex_cred:new(<<"local">>, <<"password">>))))).
ok
hit healthcheck
$ curl http://local:password@localhost:8001/healthcheck
{"status":"normal"}
create a channel
$ curl -d '{"tokens": ["app"]}' https://local:password@localhost:8001/channels
{"channel_id":1,"tokens":{"app":"t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42"}}
post a log msg
$ curl -v \
-H "Content-Type: application/logplex-1" \
-H "Logplex-Msg-Count: 1" \
-d "116 <134>1 2012-12-10T03:00:48.123456Z erlang t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42 console.1 - - Logsplat test message 1" \
http://local:password@localhost:8601/logs
create a log session
$ curl -d '{"channel_id": "1"}' https://local:password@localhost:8001/v2/sessions
{"url":"/sessions/9d53bf70-7964-4429-a589-aaa4df86fead"}
fetch logs for session
$ curl http://local:password@localhost:8001/sessions/9d53bf70-7964-4429-a589-aaa4df86fead
2012-12-10T03:00:48Z+00:00 app[console.1]: test message 1
| Application | Root supervisor | Process | Children | Grandchildren |
| --- | --- | --- | --- | --- |
| logplex_app | logplex_sup | logplex_db | | |
| | | config_redis (redo) | | |
| | | logplex_drain_sup | logplex_http_drain | |
| | | | logplex_tcpsyslog_drain | |
| | | | logplex_tlssyslog_drain | |
| | | nsync | | |
| | | redgrid | | |
| | | logplex_realtime | redo | |
| | | logplex_stats | | |
| | | logplex_tail | | |
| | | logplex_redis_writer_sup (logplex_worker_sup) | logplex_redis_writer | |
| | | logplex_shard | redo | |
| | | logplex_api | | |
| | | logplex_syslog_sup | tcp_proxy_sup | tcp_proxy |
| | | logplex_logs_rest | | |
logplex_db starts and supervises a number of ETS tables (a quick inspection sketch follows the list):
* channels
* tokens
* drains
* creds
* sessions
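From a shell attached to a running node (for example via bin/connect above), these tables can be inspected with the standard ets module. A minimal sketch, assuming the tables are registered as named ETS tables under the atoms listed above:
%% Minimal sketch: inspect the named ETS tables owned by logplex_db.
ets:info(channels, size).    %% number of channel records currently stored
ets:tab2list(drains).        %% dump drain records (fine locally; avoid on busy nodes)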
config_redis is a redo redis client process connected to the logplex config redis.
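As a small sketch, commands can be issued against the config redis through this client, assuming it is locally registered as config_redis (the name used in the supervision table above):
%% Sketch: round-trip a command through the redo client for the config redis.
redo:cmd(config_redis, [<<"PING">>]).    %% normally returns <<"PONG">>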
logplex_drain_sup is an empty one_for_one supervisor. It supervises the HTTP, TCP syslog and TLS syslog drain processes.
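A quick way to see which drain processes are currently running is to ask the supervisor directly; a sketch, assuming the supervisor is locally registered under the name shown in the table above:
%% Sketch: inspect the drain children running under logplex_drain_sup.
supervisor:count_children(logplex_drain_sup).
supervisor:which_children(logplex_drain_sup).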
An nsync process connected to the logplex config redis. Callback module is nsync_callback.
Nsync is an Erlang redis replication client. It allows the logplex node to act as a redis slave and sync the logplex config redis data into memory.
A redgrid process that registers the node in a central redis server to facilitate discovery by other nodes.
logplex_realtime captures realtime metrics about the running logplex node. These metrics are exported using folsom_cowboy and are available for consumption via HTTP.
Memory Usage information is available:
> curl -s http://localhost:5565/_memory | jq '.'
{
"total": 27555464,
"processes": 10818248,
"processes_used": 10818136,
"system": 16737216,
"atom": 388601,
"atom_used": 371948,
"binary": 789144,
"code": 9968116,
"ets": 789128
}
As are general VM statistics:
> curl -s http://localhost:5565/_statistics | jq '.'
{
"context_switches": 40237,
"garbage_collection": {
"number_of_gcs": 7676,
"words_reclaimed": 20085443
},
"io": {
"input": 9683207,
"output": 2427112
},
"reductions": {
"total_reductions": 6584440,
"reductions_since_last_call": 6584440
},
"run_queue": 0,
"runtime": {
"total_run_time": 1140,
"time_since_last_call": 1140
},
"wall_clock": {
"total_wall_clock_time": 207960,
"wall_clock_time_since_last_call": 207748
}
}
Several custom logplex metrics are also exported via a special /_metrics endpoint:
> curl -s http://localhost:5565/_metrics | jq '.'
[
"drain.delivered",
"drain.dropped",
"message.processed",
"message.received"
]
These can then be queried individually:
> curl -s http://localhost:5565/_metrics/message.received | jq '.'
{
"type": "gauge",
"value": 1396
}
logplex_stats owns the logplex_stats ETS table and prints channel, drain and system stats every 60 seconds.
logplex_tail maintains the logplex_tail ETS table that is used to register tail sessions.
Starts a logplex_worker_sup process, registered as logplex_redis_writer_sup, that supervises logplex_redis_writer processes.
logplex_shard owns the logplex_shard_info ETS table. It starts a separate read and write redo client for each redis shard found in the logplex_shard_urls var (the LOGPLEX_SHARD_URLS setting above).
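A sketch for peeking at the current shard configuration from an attached shell, assuming logplex_shard_info is a named ETS table on the node:
%% Sketch: dump the shard map owned by logplex_shard.
ets:tab2list(logplex_shard_info).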
logplex_api blocks waiting for nsync to finish replicating data into memory before starting a mochiweb acceptor that handles API requests for managing channels/tokens/drains/sessions.
logplex_syslog_sup supervises a tcp_proxy_sup process, which in turn supervises a tcp_proxy process that accepts syslog messages over TCP.
logplex_logs_rest starts a cowboy_tcp_transport process and serves as the callback module for processing HTTP log input.
Logplex can send realtime metrics to Redis via pubsub and to a drain channel as logs. The following metrics are currently logged in this fashion (a sketch for observing them follows the list):
* `message_received`
* `message_processed`
* `drain_delivered`
* `drain_dropped`
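To watch these messages being published during local development, you can subscribe to the local redis with redis-cli; a rough sketch, using a wildcard pattern since the exact pub/sub channel names are not documented here:
$ redis-cli -h localhost -p 6379 psubscribe '*'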
To log these metrics to an internal drain channel, you'll need to set the INTERNAL_METRICS_CHANNEL_ID environment variable to a drain token that has already been created.
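For example, when starting logplex with the run command shown earlier, the variable can simply be added to the environment; the token below is only a placeholder for one you have already created:
$ INTERNAL_METRICS_CHANNEL_ID="t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42" \
INSTANCE_NAME=`hostname` \
... # remaining environment variables and erl invocation as in the run section above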