Building an OpenResty events server

DropTh.at is a chat service demo built with OpenResty. It secures your messages using a combination of traditional HTTPS and client-side crypto. Chat rooms are randomly generated by the client and can be joined by sharing the room's full URL. The encryption keys are stored in the #fragment-identifier and are never sent to the server, which means the URL is the room password. The core server and client demo are open source.

Reality check: it still relies on a trustworthy host and on SSL to deliver the initial HTML+JS. It is just a demo. See below.

Context: the JavaScript EventSource API:

I'm using the EventSource API for delivering messages. It is supported by all the browsers that I currently care about (and can be shimmed into the Microsoft ones). The protocol is just a chunked HTTP stream with content-type "text/event-stream" and formatted event messages. It has very low overhead and it plays nicely with the pub-sub pattern.

Here are the headers and the first event in a stream (as rendered by curl):

< HTTP/1.1 200 OK
< Content-Type: text/event-stream
< Server: dropth.at
< Connection: keep-alive
< Transfer-Encoding: chunked
<
event:count
data:count=2

And here's how you could capture it:

var events = new EventSource('<event-source-url>');
events.addEventListener('count', function(c){ console.log("Got", c.data); });

Got count=2

Nginx, Lua, and OpenResty:

The server is built with OpenResty; that's Nginx, Lua, and a few extra modules. Leafo has a nice intro to building with OpenResty if you're not familiar. I highly suggest you check it out.

I'll assume you understand the basics of Nginx and Lua from this point. Here are my core pub/sub locations in the nginx config.

worker_processes 1;
http {
    init_by_lua 'chat = require "chat"';

    server {
        listen 80 so_keepalive=20s:3s:6;
        set_by_lua $dontcare 'chat.check_init()';

        location ~ "^/sub/(?P<channel>[a-zA-Z\d_-]+)$" {
            lua_socket_log_errors off;
            lua_check_client_abort on;
            content_by_lua 'chat.event_source_location(ngx.var.channel)';
        }

        location ~ "^/pub/(?P<channel>[a-zA-Z\d_-]+)$" {
            client_max_body_size 32k;
            content_by_lua '
                ngx.req.read_body()
                local message = ngx.escape_uri(ngx.var.request_body)
                chat.publish_event(ngx.var.channel, nil, nil, message)
            ';
        }
    }
}
  • I've loaded the chat module as a thread global (this happens once, while the config is loaded).
  • A light background thread is started by calling chat.check_init().
  • The /sub/<channel-id> location calls the chat.event_source_location() function.
  • Likewise, the pub location really just calls the chat.publish_event() function.

Now that the special parts of the Nginx config file are out of the way, we'll jump into chat.lua. The first thing we need is a way to format the event stream.

local http_header = "HTTP/1.1 200 OK\r\n"..
                    "Content-Type: text/event-stream\r\n"..
                    "Server: dropth.at\r\n"..
                    "Connection: keep-alive\r\n"..
                    "Transfer-Encoding: chunked\r\n\r\n"

local function format_http_chunk(message)
    -- http chunk format is hex-encoded length, newline, data, newline
    local len = string.format("%x\r\n", string.len(message))
    return len..message.."\r\n"
end

local function format_event(id, event, message)
    local buff = "data:"..message.."\r\n\r\n"
    if event then
        buff = "event:"..event.."\r\n"..buff
    end
    if id then
        buff = "id:"..id.."\r\n"..buff
    end
    return format_http_chunk(buff)
end
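
For a concrete picture of what this produces, here's a quick check (assuming the two functions above are in scope); the hex prefix is the chunk length, and the trailing blank line terminates the event:

-- a "count" event with no id, run through the formatters above
local chunk = format_event(nil, "count", "count=2")
-- chunk == "1d\r\nevent:count\r\ndata:count=2\r\n\r\n\r\n"
--            ^^ 0x1d = 29 bytes of event payload, then the payload, then a closing CRLF
print(chunk)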

Well, that's pretty simple. Next we'll handle the incoming subscriber connections.

function event_source_location(channel_id)
    local sock, err = ngx.req.socket(true) -- hijack the request socket
    sock:send(http_header)

    local function cleanup()
        remove_socket(channel_id, sock)
        ngx.exit(499)
    end
    local ok, err = ngx.on_abort(cleanup)  -- handle disconnect

    add_socket(channel_id, sock)           -- add socket to data structure
    while true do
        ngx.sleep(19.31)
        send_blank(sock)     -- periodic tick message :\r\n
    end
end

local function add_socket(channel_id, socket)
    local channel = get_or_create_channel(channel_id)
    channel.sockets[socket] = ngx.now()
    update_channel(channel_id)
end

local function update_channel(channel_id)
    local channel = channels[channel_id]
    if not channel.announce_queued then
        push_update_queue(channel_id)
        channel.announce_queued = true
    end
end

For reference, ngx.sleep is nonblocking. In this case, the Lua "light thread" handling this request wakes up every 19-ish seconds just to send a simple no-op message to keep the connection alive.
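
The helpers referenced above (the channels table, get_or_create_channel, remove_socket, and send_blank) aren't shown in the snippets; here is a minimal sketch of what they might look like, inferred from how they're called (the real chat.lua may differ):

-- channels maps channel_id -> { sockets = {...}, last_announce = ..., announce_queued = ... }
local channels = {}

local function get_or_create_channel(channel_id)
    local channel = channels[channel_id]
    if not channel then
        channel = { sockets = {}, last_announce = 0, announce_queued = false }
        channels[channel_id] = channel
    end
    return channel
end

local function remove_socket(channel_id, socket)
    local channel = channels[channel_id]
    if channel then
        channel.sockets[socket] = nil
        update_channel(channel_id)   -- queue an updated count announcement
    end
end

local function send_blank(socket)
    -- a lone comment line (":\r\n") is ignored by EventSource clients,
    -- so it makes a cheap keep-alive tick
    socket:send(format_http_chunk(":\r\n"))
end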

When a web client posts a message, our Nginx config hands off to here:

function publish_event(channel_id, event_id, event, message)
    local channel = channels[channel_id]
    if channel then
        local chunk = format_event(event_id, event, message) --chunk & event-stream format
        for sock,start_time in pairs(channel.sockets) do
            local bytes, err = sock:send(chunk)
            if bytes ~= string.len(chunk) then
                -- handle error here.
                ngx.log(ngx.ERR, "failed to write event", err)
            end
        end
    end
end

The server sends count events when the number of subscribers for a channel changes. I found out the hard way, while load testing with 20k clients on a virtual machine, that these need to be rate limited. Here's how the notification thread works:

function check_init()
    ngx.timer.at(0, notify_thread) -- ngx.timer.at creates a new light-thread
    check_init = function() end -- we only need that to happen once
end

local function notify_thread(premature)
    while true do
        local channel_id = pop_update_queue() -- just has channel_ids
        if channel_id then
            local now = ngx.now();
            local channel = channels[channel_id]
            if channel and now < 0.373 + channel.last_announce then
                ngx.sleep(0.373) -- oversleeps a tiny bit,
                                 -- but queue order means it's ok.
            end
            channel = channels[channel_id]
            if channel then
                channel.announce_queued = false
                publish_channel_count(channel_id)  -- just counts clients & sends event
            end
        else
            ngx.sleep(.127)
        end
    end
end
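
The queue helpers and publish_channel_count aren't shown either; a plausible sketch, based on how they're used above, looks like this (the names match the calls, the bodies are my guess):

-- simple FIFO of channel_ids awaiting a count announcement
local update_queue = {}
local queue_head, queue_tail = 1, 0

local function push_update_queue(channel_id)
    queue_tail = queue_tail + 1
    update_queue[queue_tail] = channel_id
end

local function pop_update_queue()
    if queue_head > queue_tail then return nil end
    local channel_id = update_queue[queue_head]
    update_queue[queue_head] = nil
    queue_head = queue_head + 1
    return channel_id
end

local function publish_channel_count(channel_id)
    local channel = channels[channel_id]
    if not channel then return end
    local count = 0
    for _ in pairs(channel.sockets) do count = count + 1 end
    channel.last_announce = ngx.now()
    publish_event(channel_id, nil, "count", "count="..count)
end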

Pretty cool, huh? Try it out. Here's a Dockerfile.

I test this on a local VM.

$ curl localhost:8080/sub/0000000000000000000000
event:count
data:count=1

event:count
data:count=4663

event:count
data:count=15001

event:count
data:count=13977

event:count
data:count=13465

event:count
data:count=1

In the other tab:

cda@ubuntu:~$ sudo bash -c 'ulimit -n 16384 && ab -n 15000 -c 15000 localhost:8080/sub/0000000000000000000000'
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, https://www.zeustech.net/
Licensed to The Apache Software Foundation, https://www.apache.org/

Benchmarking localhost (be patient)
^C

Server Software:        dropth.at
Server Hostname:        localhost
Server Port:            8080

Document Path:          /sub/0000000000000000000000
Document Length:        0 bytes

Concurrency Level:      15000
Time taken for tests:   6.447 seconds
Complete requests:      0
Failed requests:        0
Write errors:           0
Total transferred:      2607156 bytes
HTML transferred:       762156 bytes
cda@ubuntu:~$ 

On Javascript Cryptography

Preface

JavaScript cryptography isn't inherently doomed (Matasano); it's just useful for different types of problems. (JS crypto can be an important part of this complete breakfast!)

Intro

My usage of encryption in the DropTh.at demo is very simple. The interesting crypto is handled by SJCL, the keys are stored in the fragment identifier of the URL (the part that doesn't get sent with the request; handle that URL like a password), and the room channel ids are derived from a one-way SHA hash of the room keys. Messages and images are encrypted using the same default settings in SJCL. It handles building random IVs, ensuring availability of sufficient entropy, and encrypting with AES-CCM mode. Remember that none of this is secure unless you can safely get the JavaScript and HTML (over SSL) in the first place. There's lots of room for improvement here.
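
To make that concrete, here's a rough sketch of the flow with SJCL (this is my illustration, not the exact dropth.at client code; the id length and hex encoding are assumptions):

// generate a random room key; it lives only in the URL's #fragment
var key = sjcl.codec.base64.fromBits(sjcl.random.randomWords(8));

// the channel id sent to the server is a one-way hash of the key
var channelId = sjcl.codec.hex.fromBits(sjcl.hash.sha256.hash(key)).slice(0, 22);

// messages use SJCL's defaults: PBKDF2 key derivation, fresh random IV, AES-CCM
var ciphertext = sjcl.encrypt(key, "hello room");
var plaintext  = sjcl.decrypt(key, ciphertext);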

Here are some of the things that JS crypto can add to traditional HTTPS.

Client Crypto is an implicit contract:

When web services let clients encrypt their data without giving up their keys, it gives the clients an expectation of privacy and forms an implicit contract like this:

  • We won't casually read your stuff on our servers.
  • We won't target ads based on your content.
  • We won't leak information by de-duplicating your data.
  • We won't delete your private media because of a bogus DMCA takedown against another user with the same files.
  • Our backups will never contain your plaintext.
  • Our cross-data-center links will never expose your unencrypted data.
  • We won't report your private data to the police. (Okay this time, but how about leaked NSA docs?)

Normal cloud companies could get away with any of those things, but a client-encrypted service would lose all credibility at the first violation.

As legal protection for the host from client data:

Let's take Mega as an example (I haven't verified this completely): they use client encryption and never see the media files that are uploaded to them. Since they're insulated from the data, it will be hard to argue that they are complicit in copyright violations.

Untrusted Public CDNs:

We like public CDNs (bootstrapcdn, cdnjs, googlecdn, etc.), but we shouldn't have to trust them. I'm sure they've all got great security, but an attack on a big public CDN could have wide-reaching implications. You can use JavaScript to verify assets (both shared and private) before using/running them, as sketched below. Proof of concept: https://dropth.at/cors-cdn-demo
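
The idea, roughly (my sketch, not the cors-cdn-demo code; it assumes the CDN serves the asset with CORS headers): fetch the script, check its SHA-256 against a hash pinned in your own HTML, and only inject it if they match.

function loadVerified(url, expectedHashHex) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function() {
        var hash = sjcl.codec.hex.fromBits(sjcl.hash.sha256.hash(xhr.responseText));
        if (hash !== expectedHashHex) {
            throw new Error("hash mismatch for " + url);  // refuse to run tampered code
        }
        var s = document.createElement('script');
        s.text = xhr.responseText;  // inline the verified source
        document.head.appendChild(s);
    };
    xhr.send();
}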

Password Hashing:

You all use bcrypt/scrypt/PBKDF2, right? These take up lots of CPU time by design and can be targeted for a DoS attack on your server. Rate limiting can help, but it's not a silver bullet, especially if you're chewing through 100ms+ of CPU time for each attempt. They also make RasPi servers cry. With careful application of client-side JS crypto, you can offload some of the expensive work from your server to the client during heavy load (or all the time). I'd recommend wrapping the result again with a SHA and a random salt on the server (see the sketch after the timings below).

PBKDF2-SHA256 with 25000 iterations:

var b64 = sjcl.codec.base64, sha2 = sjcl.hash.sha256, kdf = sjcl.misc.pbkdf2;
function preHash(site, username, password){
    var salt = b64.fromBits(sha2.hash(site + username)).slice(0, 10);
    return b64.fromBits(kdf(password, salt, 25000));
}
var username = 'cda', password = 'password', start = Date.now(),
    output = preHash('dropth.at', username, password);
console.log("Hashing took approx", Date.now() - start, "ms");

Chrome: 168ms, Firefox: 150ms, Safari: 1260ms, Chrome on my Galaxy Nexus: 1343ms, Android browser: 1628ms.

django.utils.crypto.pbkdf2: 105ms on my laptop.
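
For the server-side wrap mentioned above, something along these lines would work in OpenResty (a sketch of mine; resty.sha256, resty.random, and resty.string come from lua-resty-string, which ships with OpenResty, and the function name is made up). Store the salt plus SHA-256(salt .. client_hash), never the client hash itself.

local resty_sha256 = require "resty.sha256"
local resty_random = require "resty.random"
local resty_str    = require "resty.string"

local function wrap_client_hash(client_hash, salt)
    -- fresh random salt on signup; pass the stored salt back in to verify a login
    salt = salt or resty_str.to_hex(resty_random.bytes(16))
    local sha = resty_sha256:new()
    sha:update(salt .. client_hash)
    return salt, resty_str.to_hex(sha:final())
end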

TODO

  • iPhone canvas problem? Meh. Don't have one to test with; works on iPad.
  • Needs responsive layout. Sucks on a small screen.
  • Add disclaimer about old and/or shitty browsers.
  • Some misc fixups. Maybe CSRF (less important since rooms are secret) + asset domains?

Some ways you'll get boned:

  • Someone will try to run this without https and you'll get boned.
  • Someone will intercept the key when you send the room link...
  • You'll share the link#key with the wrong person...
  • Someone will dig through your history and get your key...
  • Someone will pwn my server or find an XSS hole and inject JS to steal the key...
  • Someone will get a "trusted" CA to issue a cert and then MITM you. Then you'll get boned.
  • https://xkcd.com/538/ (Could be me + a court order to inject js to steal keys).
