austinheap/grassland-docker

Grassland for Docker

grassland-docker banner from the documentation

Dockerfiles and scripts for the Grassland network, driven by configuration settings.

The purpose of this project is to create a set-it-and-forget-it Docker image that can be installed without much effort to access and contribute to the Grassland network. It is therefore highly opinionated but built for configuration.

Grassland is an open-source, permissionless, peer-to-peer network of incentivised and anonymous computer vision software that turns video feeds from fixed-perspective cameras around the world into a (politically) stateless, indelible public record of the lives of people and the movement of physical assets as a simulated, real-time, 3D, bird's-eye-view similar to the games SimCity® or Civilization®, but with the ability to rewind time and view events in the past.

This repo is not affiliated with the owners of the aforementioned software titles: they are owned by EA Games, Firaxis, and Microsoft, respectively.

Table of Contents

- Requirements
- Installation
- Usage
- FAQ
- Credits
- Contributing
- License

Installation

Step 1: Configure

Copy env.list to env.local, edit the values, and validate them using:

    $ docker run --interactive                    \
                 --tty                            \
                 --rm                             \
                 --env-file=env.local             \
                 --device=/dev/video0:/dev/video0 \
                   austinheap/grassland validate

Step 2: Initialize

Initialize Grassland, the AWS services, and calibrate the camera using (the --cpus and --memory flags are optional resource limits):

    $ docker run --name=grassland                 \
                 --restart=always                 \
                 --env-file=env.local             \
                 --device=/dev/video0:/dev/video0 \
                 --cpus=4.0                       \
                 --memory=4g                      \
                   austinheap/grassland

Once initialized, a video feed opens showing real-time bounding boxes around detected objects. Calibrate the camera with the GUI (http://<docker-host-ip>:3000/) and validate it using:

    $ docker exec --interactive --tty grassland grassland validate:calibration

Step 3: Restart

Restart the container (docker restart grassland) to bring Grassland up in 'ONLINE' mode. This will not open a real-time video display but will start the web GUI. If the video display opens again, calibration wasn't successful and your container is re-initializing.
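The restart and a quick sanity check can be sketched as follows (docker logs is the standard Docker CLI command for inspecting container output; the 'ONLINE' string is the mode name from above, not a guaranteed log line):

```shell
# Restart the container to bring Grassland up in 'ONLINE' mode
docker restart grassland

# Follow recent output to confirm which mode the node started in
docker logs --tail=50 --follow grassland
```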

Usage

The following commands are available for controlling the container:

    $ docker exec --interactive --tty grassland help

              grassland-docker  .:.  v0.1.0
    -----------------------------------------------------

    help                - Display this help
    version             - Display the version
    shell               - Open a shell
    start               - Start the service

    init                - Initialize instance
    init:calibration    - Initialize calibration data
    init:config         - Initialize config files
    init:data           - Initialize data files
    init:lambda         - Initialize AWS Lambda stack
    init:s3             - Initialize AWS S3 buckets

    destroy             - Destroy instance
    destroy:calibration - Destroy calibration data
    destroy:data        - Destroy data files
    destroy:lambda      - Destroy AWS Lambda stack
    destroy:s3          - Destroy AWS S3 buckets

    validate            - Validate instance
    validate:aws        - Validate AWS credentials
    validate:camera     - Validate camera device
    validate:data       - Validate downloaded data
    validate:lambda     - Validate AWS Lambda stack
    validate:s3         - Validate AWS S3 buckets
    validate:variables  - Validate environmental variables
    validate:versions   - Validate package versions

The following environmental variables are available for configuring the container:

| Name | Type | Required | Description | Example |
|------|------|----------|-------------|---------|
| `AWS_DEFAULT_REGION` | string | True | Specifies the AWS Region to send requests to. | `us-west-1` |
| `AWS_ACCESS_KEY_ID` | string | True | Specifies the AWS access key associated with an IAM user or role. | `XXXXXXXXXXXXXXXXXX` |
| `AWS_SECRET_ACCESS_KEY` | string | True | Specifies the AWS secret key associated with the AWS access key. | `AXXZZZZZZZZZZZZZZZZZZ` |
| `CONTAINER_DEBUG` | bool | False | Enables verbose output when present. | `true` |
| `CONTAINER_QUIET` | bool | False | Disables convenience output when present. | `false` |
| `DISPLAY` | string | False | Specifies the X Window Server for output. | `host.docker.internal:0` |
| `GRASSLAND_FRAME_S3_BUCKET` | string | True | Specifies the S3 bucket to queue unprocessed frames in. | `grassland-frame-s3-bucket` |
| `GRASSLAND_MODEL_S3_BUCKET` | string | True | Specifies the S3 bucket to store model data in. | `grassland-model-s3-bucket` |
| `MapboxAccessToken` | string | True | Specifies the Mapbox access token for Webpack. | `pk.XXXXXXXXXXXXXXXXXX...` |
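Putting the table together, a minimal env.local might look like the following sketch. Every value here is a placeholder; substitute your own AWS credentials, bucket names, and Mapbox token:

```shell
# env.local -- sample configuration (placeholder values only)
AWS_DEFAULT_REGION=us-west-1
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=AXXZZZZZZZZZZZZZZZZZZ
GRASSLAND_FRAME_S3_BUCKET=grassland-frame-s3-bucket
GRASSLAND_MODEL_S3_BUCKET=grassland-model-s3-bucket
MapboxAccessToken=pk.XXXXXXXXXXXXXXXXXX
DISPLAY=host.docker.internal:0
```

Note that docker --env-file expects plain KEY=value lines: no quoting, no `export`.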

FAQ

What does node calibration do?

Calibrating a Grassland instance lets the node know where the camera it's viewing is positioned in the real world. When initializing, the node GUI simulates a 3D map of the world so you can virtually set a position and viewing angle that match those of the camera in the real world.

In future versions, improper calibration could cause the node to fail, and should cause objects tracked by the node to be rejected by the network ledger.

What is the proper way to calibrate a node?

From the node_lite documentation:

Once the map loads, use your mouse's scroll wheel to zoom and the left and right mouse buttons to drag and rotate the map until you've adjusted your browser's view of the map to match the position and orientation of your camera in the real world. Once you've narrowed it down, click on the 'CALIBRATION' toggle button. The GUI's frame dimensions will adjust to match your camera frame's dimensions. Continue adjusting your position until it matches the position and orientation of the real camera precisely.

As you're adjusting, your node should be receiving new calibration measurements and placing tracked objects on the GUI's map. Continue adjusting while referring to the node's video display until objects tracked in the video display are in their correct positions in the GUI's map.

In other words, you should have the video window that shows the video streaming from the camera up on your computer screen (because the command you used to start the node included the "--display 1" option). Using your mouse, align the virtual map's viewport so it's looking from exactly the same vantage point (latitude, longitude, altitude, angle, etc.) as the real camera in real life.

Once that's done, your calibration values should be set inside the node's database. Now click the 'CALIBRATION' toggle button again to turn CALIBRATING mode off.

Why isn't it displaying anything?

If a window with the video feed and detected objects (in bounding boxes) does not open, the X server cannot be reached by the container. Instructions for that are beyond the scope of this project but Google is your friend. Make sure you exported the DISPLAY environmental variable with the correct location.
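On a Linux host, the usual pattern is to allow local containers to reach the X server, then pass DISPLAY and the X socket through to the container. A sketch, assuming a Linux host where xhost is available and the X socket lives at /tmp/.X11-unix (both are distribution-specific assumptions):

```shell
# Allow local Docker containers to connect to the host X server
xhost +local:docker

# Pass the display and X socket through to the container
docker run --rm \
           --env DISPLAY="$DISPLAY" \
           --volume /tmp/.X11-unix:/tmp/.X11-unix \
           --env-file=env.local \
           --device=/dev/video0:/dev/video0 \
             austinheap/grassland validate
```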

Why can't Docker for Mac see the web camera?

Docker for Mac uses HyperKit for virtualization, which is not compatible with AVFoundation in macOS. The fastest workaround is to use docker-machine with this purpose-built boot2docker ISO, as it includes the uvcvideo.ko kernel module not found in the official distribution. As explained by the author:

    docker-machine create --driver virtualbox \
                          --virtualbox-boot2docker-url https://bit.ly/boot2uvcvideo \
                            default

Then install the VirtualBox extension, attach the webcam device, and you are good to go!
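Attaching the webcam can be sketched with the VirtualBox CLI, assuming the Extension Pack is installed and the machine is named `default` (as created above); `.1` is VirtualBox's alias for the default host webcam:

```shell
# List the host webcams VirtualBox can see
VBoxManage list webcams

# Attach the default host webcam to the running 'default' VM
VBoxManage controlvm default webcam attach .1
```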

Why doesn't camera X/Y/Z work?

If the Docker host cannot see the device (i.e., lacks the correct drivers for it), the container cannot use it. Most UVC devices, however, function out-of-the-box. Cameras that do not function likely require customizations outside the scope of this project, or will not work at all inside Docker without 1) running the container in privileged mode and 2) breaking portability across Docker hosts.
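Before blaming the container, it can help to confirm the host itself sees the camera. A quick check on a Linux host (`v4l2-ctl` comes from the v4l-utils package, which is an assumption here; the guard keeps the snippet from failing where it isn't installed):

```shell
# List any video devices the host kernel has registered
ls -l /dev/video* 2>/dev/null || echo "no video devices found"

# If v4l-utils is installed, enumerate devices and their drivers
if command -v v4l2-ctl >/dev/null 2>&1; then
  v4l2-ctl --list-devices
fi
```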

Credits

This is a fork of janza/docker-python3-opencv, which was a fork of docker-library/python, which was based on earlier work.

The Grassland software is developed by @grasslandnetwork.

Contributing

Pull requests welcome! Please see the contributing guide for more information.

License

The MIT License (MIT). Please see the license file for more information.
