idsc-frazzoli/retina

Interfaces and algorithms for event-based cameras, lidars, and actuators.

ch.ethz.idsc.gokart

Software to operate the go-kart in autonomous and manual modes. The performance of the go-kart hardware and software is documented in reports.

The code in this repository operates a heavy and fast robot that may endanger living creatures. We follow best practices and coding standards to protect against avoidable errors; see the development guidelines.

Gallery Autonomous Driving

  • Trajectory pursuit
  • Navigation initial, demo
  • Autonomous braking
  • Event-based SLAM, Fig. 8
  • Kin. MPC outside, inside
  • Dyn. MPC outside, inside
  • ANN MPC

Student Projects

The student projects are supervised by Andrea Censi, Jacopo Tani, Alessandro Zanardi, and Jan Hakenberg. The go-kart has been operated at Innovation Park Dübendorf since December 2017.

2017

  • Noah Isaak, Richard von Moos (BT): MicroAutoBox programming, low-level actuator logic

2018

  • Mario Gini (MT): simultaneous localization and mapping for event-based vision systems inspired by Weikersdorfer/Hoffmann/Conradt; reliable waypoint extraction and following
  • Yannik Nager (MT): Bayesian occupancy grid; trajectory planning
  • Valentina Cavinato (SP): tracking of moving obstacles
  • Marc Heim (MT): calibration of steering, motors, braking; torque vectoring; track reconnaissance; model predictive contouring control; synthesis of engine sound; drone video

2019

  • Michael von Büren (MT): simulation of gokart dynamics, neural network as model for MPC
  • Joel Gächter (MT): sight-lines mapping, clothoid pursuit, planning with clothoids
  • Antonia Mosberger (BT): power steering, anti-lock braking, lane keeping
  • Maximilien Picquet (SP): Pacejka parameter estimation using an unscented Kalman filter
  • Thomas Andrews (SP): torque-controlled steering extension to MPC

Gallery Rendering

  • Doughnuts
  • Clothoid pursuit
  • Lane keeping
  • MPC
  • Clothoid RRT*

Gallery Manual Driving

  • Torque Vectoring
  • Doughnuts

Press Coverage

Architecture

  • tensor for linear algebra with physical units
  • owl for motion planning
  • lcm Lightweight Communications and Marshalling for message interchange, logging, and playback. All messages are encoded using the single type BinaryBlob. The byte order of the binary data is little-endian, since that encoding is native on most architectures.
  • io.humble for video generation
  • jSerialComm platform-independent serial port access
  • ELKI for DBSCAN
  • lwjgl for joystick readout
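Since every message payload travels as a little-endian BinaryBlob, packing and unpacking can be sketched with `java.nio.ByteBuffer`. The class and method names below are hypothetical illustrations, not code from the repository:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Sketch of encoding float samples as a little-endian byte array,
 * in the spirit of the BinaryBlob payloads (names assumed). */
public class LittleEndianPack {
    /** encodes the given floats as little-endian bytes */
    public static byte[] pack(float[] values) {
        ByteBuffer buffer = ByteBuffer.allocate(values.length * 4);
        buffer.order(ByteOrder.LITTLE_ENDIAN); // native order on x86 and most ARM
        for (float value : values)
            buffer.putFloat(value);
        return buffer.array();
    }

    /** decodes a little-endian byte array back into floats */
    public static float[] unpack(byte[] bytes) {
        ByteBuffer buffer = ByteBuffer.wrap(bytes);
        buffer.order(ByteOrder.LITTLE_ENDIAN);
        float[] values = new float[bytes.length / 4];
        for (int i = 0; i < values.length; ++i)
            values[i] = buffer.getFloat();
        return values;
    }
}
```

Wrapping a `ByteBuffer` with an explicit `ByteOrder` avoids Java's default big-endian encoding, which would otherwise require a byte swap on the consumer side.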


ch.ethz.idsc.retina

Sensor interfaces


Features

  • interfaces to lidars Velodyne VLP-16, HDL-32E, Quanergy Mark8, HOKUYO URG-04LX-UG01
  • interfaces to inertial measurement unit Variense VMU931
  • interfaces to event-based camera Davis240C with lossless compression by a factor of 4
  • interfaces to LabJack U3

LIDAR

Velodyne VLP-16

  • point cloud visualization and localization with lidar: see video

Velodyne HDL-32E

  • 3D-point cloud visualization: see video

  • distance as 360[deg] panorama
  • intensity as 360[deg] panorama
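A panorama of this kind maps each firing's azimuth to a pixel column and the laser channel to a row. A minimal sketch, with the class name and the horizontal resolution assumed for illustration:

```java
/** Hypothetical mapping of a lidar firing to panorama pixel coordinates.
 * Assumes the azimuth is given in [0, 360) degrees. */
public class PanoramaIndex {
    public static final int WIDTH = 1800; // assumed horizontal resolution

    /** @param azimuthDeg rotation angle in degrees, 0 <= azimuthDeg < 360
     *  @return column index in [0, WIDTH) */
    public static int column(double azimuthDeg) {
        return (int) (azimuthDeg / 360.0 * WIDTH) % WIDTH;
    }

    /** @param laser channel index, e.g. 0..31 for the HDL-32E
     *  @return row index; the lasers stack vertically in the image */
    public static int row(int laser) {
        return laser;
    }
}
```

The distance panorama stores the range at each (row, column), the intensity panorama the return strength.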

Quanergy Mark8

  • 3D-point cloud visualization: see video

HOKUYO URG-04LX-UG01

urg04lx

Our code builds upon urg_library-1.2.0.

Inertial Measurement Unit

VMU931

Event-Based Camera

IniLabs DAVIS240C

Rolling shutter mode

  • recordings: 05tram, 04peds, 00scene

Global shutter mode

  • recordings at 2.5[ms] and 5[ms] exposure

Events only

  • recordings at 1[ms], 2.5[ms], and 5[ms]

AEDAT 2.0 and AEDAT 3.1

  • parsing and visualization
  • conversion to text+png format as used by the Robotics and Perception Group at UZH
  • lossless compression of DVS events by a factor of 2
  • compression of raw APS data by a factor of 8 (the ADC values are reduced from 10 bit to 8 bit)
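The 10-bit to 8-bit APS reduction can be sketched by discarding the two least significant ADC bits. The exact scheme used by the repository is not documented here, so the class below is an assumed illustration:

```java
/** Sketch (assumed scheme): reduce a 10-bit ADC sample to 8 bit
 * by discarding the two least significant bits. */
public class ApsReduce {
    /** @param adc10 raw ADC sample in [0, 1023]
     *  @return the 8 most significant bits of the sample */
    public static byte reduce(int adc10) {
        if (adc10 < 0 || adc10 > 1023)
            throw new IllegalArgumentException("not a 10-bit value: " + adc10);
        return (byte) (adc10 >> 2); // drop the 2 least significant bits
    }

    /** approximate inverse, e.g. for display purposes */
    public static int expand(byte value8) {
        return (value8 & 0xFF) << 2;
    }
}
```

Dropping the low bits trades a quantization error of at most 3 ADC counts for a 20% size reduction per sample before any further packing.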

Device Settings

Quote from Luca/iniLabs:

  • Two parameters that are intended to control framerate: APS.Exposure and APS.FrameDelay
  • APS.RowSettle is used to tell the ADC how many cycles to delay before reading a pixel value, and due to the ADC we're using, it takes at least three cycles for the value of the current pixel to be output by the ADC, so an absolute minimum value there is 3. Better 5-8, to allow the value to settle. Indeed changing this affects the framerate, as it directly changes how much time you spend reading a pixel, but anything lower than 3 gets you the wrong pixel, and usually under 5-6 gives you degraded image quality.

We observed that in global shutter mode the stream of events is suppressed during image capture, whereas in rolling shutter mode the events are more evenly distributed.

  • streaming DAT files (hdr)
  • streaming DAVIS recordings (shapes_6dof)
  • generating DVS from video sequence (cat_final)
  • synthetic signal generation
