This environment is based on the efabless.com FOSS-ASIC-TOOLS.
IIC-OSIC-TOOLS is an all-in-one Docker container for open-source-based integrated circuit designs for analog and digital circuit flows. The CPU architectures x86_64/amd64 and aarch64/arm64 are natively supported, based on Ubuntu 22.04 LTS (since release 2022.12). This collection of tools is curated by the Institute for Integrated Circuits (IIC), Johannes Kepler University (JKU).
For great step-by-step instructions on installing and operating our tool collection, please check out Kwantae Kim's Setting Up Open Source Tools with Docker!
It supports three modes of operation:
- Using a complete desktop environment (XFCE) in Xvnc (a VNC server), accessed either directly with a VNC client of your choice or via the integrated noVNC server that runs in your browser.
- Using a local X11 server and directly showing the application windows on your desktop.
- Using it as a development container in Visual Studio Code (or other IDEs).
Use the green Code button, and either download the zip file or do a `git clone --depth=1 https://github.com/iic-jku/iic-osic-tools.git`.
See instructions on how to do this in the section Quick Launch for Designers further down in this README.
Enter the directory of this repository on your computer, and use one of the methods described in the section Quick Launch for Designers to start up and run a Docker container based on our image. The easiest way is probably to use the VNC mode.
If you do this for the first time, or we have pushed an updated image to DockerHub, this can take a while since the image is pulled (downloaded) automatically from DockerHub. Since this image is ca. 4GB, this takes time depending on your internet speed. Please note that this compressed image will be extracted on your drive, so please provide at least 20GB of free drive space. If, after a while, the consumed space gets larger, this may be due to unused images piling up. In this case, delete old ones; please consult the Docker documentation for instructions.
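As a sketch, unused images can be inspected and removed with standard Docker commands (consult the Docker documentation for details):

```bash
# Check how much disk space Docker is currently using
docker system df
# Remove all images that are not used by any container (asks for confirmation)
docker image prune -a
```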
If you know what you are doing and want full root access without a graphical interface, please use `./start_shell.sh`.
As of the `2022.12` tag, the following open-source process design kits (PDKs) are pre-installed, and the tables show how to switch between them by setting environment variables (you can do this per project by putting the respective `export` lines into `.designinit`, as explained below; an example follows the tables):
| SkyWater Technologies sky130A |
|---|
| `export PDK=sky130A` |
| `export PDKPATH=$PDK_ROOT/$PDK` |
| `export STD_CELL_LIBRARY=sky130_fd_sc_hd` |

| GlobalFoundries gf180mcuC |
|---|
| `export PDK=gf180mcuC` |
| `export PDKPATH=$PDK_ROOT/$PDK` |
| `export STD_CELL_LIBRARY=gf180mcu_fd_sc_mcu7t5v0` |

| IHP Microelectronics sg13g2 |
|---|
| Not yet ready to use |
More options for selecting digital standard cell libraries are available; please check the PDK directories.
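As an illustration, a minimal `.designinit` that switches a project to the GF180MCU PDK could look like this (a sketch using the variables from the tables above; adapt the values to your PDK of choice):

```bash
# .designinit (sketch): sourced last when the container starts, so these
# settings override the defaults for this design directory
export PDK=gf180mcuC
export PDKPATH=$PDK_ROOT/$PDK
export STD_CELL_LIBRARY=gf180mcu_fd_sc_mcu7t5v0
```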
Below is a list of the current tools already installed and ready to use (note there are some adaptations in our container vs. efabless.com):
- amaranth a Python-based HDL toolchain
- cace a Python-based circuit automatic characterization engine
- cocotb simulation library for writing VHDL and Verilog test benches in Python
- covered Verilog code coverage
- cvc circuit validity checker (ERC)
- edalize Python abstraction library for EDA tools
- fusesoc package manager and build tools for SoC
- gaw3-xschem waveform plot tool for `xschem`
- gdsfactory Python library for GDS generation
- gdspy Python module for the creation and manipulation of GDS files
- gds3d a 3D viewer for GDS files
- gf180mcu GlobalFoundries 180nm CMOS PDK
- ghdl VHDL simulator
- gtkwave waveform plot tool for digital simulation
- sg13g2 IHP Microelectronics 130nm SiGe:C BiCMOS PDK (partial PDK, not fully supported yet; `xschem` and `ngspice` simulation works incl. PSP MOSFET model)
- irsim switch-level digital simulator
- iverilog Verilog simulator
- hdl21 analog hardware description library
- klayout layout viewer and editor for GDS and OASIS
- libman design library manager to manage cells and views
- magic layout editor with DRC and PEX
- netgen netlist comparison (LVS)
- ngspice SPICE analog and mixed-signal simulator, with OSDI support
- ngspyce Python bindings for `ngspice`
- nvc VHDL simulator and compiler
- open_pdks PDK setup scripts
- openlane2 rewrite of OpenLane in Python, 2nd generation
- openram OpenRAM Python library
- openroad RTL2GDS engine used by `openlane2`
- osic-multitool collection of useful scripts and documentation
- padring padring generation tool
- pulp-tools PULP platform tools consisting of bender, morty, svase, verible, and sv2v
- pygmid Python version of the gm/Id starter kit from Boris Murmann
- pyopus simulation runner and optimization tool for analog circuits
- pyrtl collection of classes for pythonic RTL design
- pyspice interface to `ngspice` and `xyce` from Python
- pyuvm Universal Verification Methodology implemented in Python (instead of SystemVerilog) using `cocotb`
- pyverilog Python toolkit for Verilog
- RF toolkit with FastHenry2, FasterCap, openEMS, and scikit-rf.
- qucs-s simulation environment with RF emphasis
- riscv-pk RISC-V proxy kernel and boot loader
- rggen code generation tool for configuration and status registers
- schemdraw Python package for drawing electrical schematics
- slang SystemVerilog parsing and translation (e.g. to Verilog)
- spike Spike RISC-V ISA simulator
- spyci analyze/plot `ngspice`/`xyce` output data with Python
- surelog SystemVerilog parser, elaborator, and UHDM compiler
- vlog2verilog Verilog file conversion
- volare version manager (and builder) for open-source PDKs
- risc-v toolchain GNU compiler toolchain for RISC-V cores
- siliconcompiler modular build system for hardware
- sky130 SkyWater Technologies 130nm CMOS PDK
- verilator fast Verilog simulator
- vlsirtools interchange formats for chip design.
- xschem schematic editor
- xyce fast parallel SPICE simulator (incl. `xdm` netlist conversion tool)
- yosys Verilog synthesis tool (with GHDL plugin for VHDL synthesis), incl. `eqy` (equivalence checker), `sby` (formal verification), and `mcy` (mutation coverage)
The tool versions used for OpenLane2 (and other tools) are documented in `tool_metadata.yml`. In addition to the EDA tools above, further valuable tools (like `git`) and editors (like `gvim`) are installed. If something useful is missing, please let us know!
Download and install Docker for your operating system:
Note for Linux: Do not run Docker commands or the start scripts as root (`sudo`)! Follow the instructions in Post-installation steps for Linux.
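The usual post-installation steps boil down to adding your user to the `docker` group; a sketch is shown below (the linked Docker documentation is authoritative):

```bash
# Allow running Docker without sudo (log out and back in afterwards)
sudo groupadd docker            # the group may already exist
sudo usermod -aG docker $USER   # add the current user to the docker group
docker run hello-world          # verify that Docker works without sudo
```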
The following start scripts are intended as helper scripts for local or small-scale (single instance) deployment. Consider starting the containers with a custom start script if you need to run many instances.
All user data is persistently placed in the directory pointed to by the environment variable `DESIGNS` (the default is `$HOME/eda/designs` for Linux/macOS and `%USERPROFILE%\eda\designs` for Windows, respectively).
If a file `.designinit` is put in this directory, it is sourced last when starting the Docker environment. In this way, users can adapt settings to their needs.
This mode is recommended for remote operation on a separate server or if you prefer the convenience of a full desktop environment. To start it up, you can use (in a Bash/Unix shell):
./start_vnc.sh
On Windows, you can use the equivalent batch script (if the defaults are acceptable, it can also be started by double-clicking in Explorer):
.\start_vnc.bat
You can now access the desktop environment through your browser (https://localhost). The default password is `abc123`.
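Alternatively, if the VNC port is mapped (see `VNC_PORT` below), you can connect with a standalone VNC client; a sketch, assuming the default port mapping and a TigerVNC-style `vncviewer` installed on the host:

```bash
# Connect a local VNC client to the container's VNC server (default VNC_PORT=5901)
vncviewer localhost:5901
```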
Both scripts will use default settings, which you can tweak by setting shell variables (`VARIABLE=default` is shown):

- `DRY_RUN` (unset by default); if set to any value (also `0`, `false`, etc.), the start scripts print all executed commands instead of running them. Useful for debugging/testing or just creating "template commands" for unique setups (see the example after this list).
- `DESIGNS=$HOME/eda/designs` (`DESIGNS=%USERPROFILE%\eda\designs` for `.bat`) sets the directory that holds your design files. This directory is mounted into the container at `/foss/designs`.
- `WEBSERVER_PORT=80` sets the port to which the Docker daemon maps the webserver port of the container so that it is reachable from localhost and the outside world. `0` disables the mapping.
- `VNC_PORT=5901` sets the port to which the Docker daemon maps the VNC server port of the container so that it is reachable from localhost and the outside world. This is only required to access the UI with a different VNC client. `0` disables the mapping.
- `DOCKER_USER="hpretl"` username of the Docker Hub repository from which the images are pulled. Usually, no change is required.
- `DOCKER_IMAGE="iic-osic-tools"` Docker Hub image name to pull. Usually, no change is required.
- `DOCKER_TAG="latest"` Docker Hub image tag. By default, it pulls the latest version; this might be handy to change if you want to match a specific version set.
- `CONTAINER_USER=$(id -u)` (the current user's ID, `CONTAINER_USER=1000` for `.bat`); the user ID (and also group ID) is especially important on Linux and macOS because those are the IDs used to write files in the `DESIGNS` directory. For debugging/testing, the user and group ID can be set to `0` to gain root access inside the container.
- `CONTAINER_GROUP=$(id -g)` (the current user's group ID, `CONTAINER_GROUP=1000` for `.bat`).
- `CONTAINER_NAME="iic-osic-tools_xvnc_uid_"$(id -u)` (attaches the executing user's ID to the name on Unix; only `CONTAINER_NAME="iic-osic-tools_xvnc"` for `.bat`) is the name assigned to the container for easy identification. It is used to check whether a container exists and is running.
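For example, `DRY_RUN` can be used to preview the exact `docker` command the script would execute without starting anything (a sketch):

```bash
# Print the docker command instead of running it (works for all start scripts)
DRY_RUN=1 ./start_vnc.sh
```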
To overwrite the default settings, see Overwriting Shell Variables below.
This mode is recommended if the container is run on the local machine. It is significantly faster than VNC (as it renders the graphics locally), is more lightweight (no complete desktop environment is running), and integrates with the desktop (copy-paste, etc.). To start the container, run the following:
./start_x.sh
or
.\start_x.bat
Attention Windows and macOS users: The X-server connection is automatically killed if there is a too-long idle period in the terminal (when this happens, it looks like a crash of the system). A workaround is to start a second terminal from the initial terminal that pops up when executing the start scripts `./start_x.sh` or `.\start_x.bat`, and then start `htop` in the initial terminal. In this way, there is ongoing display activity in the initial terminal, and as a positive side effect, the usage of the machine can be monitored. We are looking for a better long-term solution.
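A sketch of this workaround, assuming `xterm` is available inside the container (the actual terminal application may differ):

```bash
# Run inside the initial terminal that pops up:
xterm &   # open a second terminal for your actual work (assumes xterm is installed)
htop      # keep the first terminal busy so the X connection stays alive
```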
Attention macOS users: Please disable the Enable VirtioFS accelerated directory sharing setting available as "Beta Setting," as this will cause issues accessing the mounted drives! However, enabling the VirtioFS general setting works in Docker >v4.15.0!
The following environment variables are used for configuration:
- `DRY_RUN` (unset by default); if set to any value (also `0`, `false`, etc.), makes the start scripts print all executed commands instead of running them. Useful for debugging/testing or just creating "template commands" for unique setups.
- `DESIGNS=$HOME/eda/designs` (`DESIGNS=%USERPROFILE%\eda\designs` for `.bat`) sets the directory that holds your design files. This directory is mounted into the container at `/foss/designs`.
- `DOCKER_USER="hpretl"` username of the Docker Hub repository from which the images are pulled. Usually, no change is required.
- `DOCKER_IMAGE="iic-osic-tools"` Docker Hub image name to pull. Usually, no change is required.
- `DOCKER_TAG="latest"` Docker Hub image tag. By default, it pulls the latest version; this might be handy to change if you want to match a specific version set.
- `CONTAINER_USER=$(id -u)` (the current user's ID, `CONTAINER_USER=1000` for `.bat`); the user ID (and also group ID) is especially important on Linux and macOS because those are the IDs used to write files in the `DESIGNS` directory.
- `CONTAINER_GROUP=$(id -g)` (the current user's group ID, `CONTAINER_GROUP=1000` for `.bat`).
- `CONTAINER_NAME="iic-osic-tools_xserver_uid_"$(id -u)` (attaches the executing user's ID to the name on Unix; only `CONTAINER_NAME="iic-osic-tools_xserver"` for `.bat`) is the name assigned to the container for easy identification. It is used to check whether a container exists and is running.
For Mac and Windows, the X11 server is accessed through TCP (`:0`, aka port 6000). To control the server's address, you can set the following variable:

- `DISP=host.docker.internal:0` is the environment variable that is copied into the `DISPLAY` variable of the container. `host.docker.internal` resolves to the host's IP address inside the Docker containers, and `:0` corresponds to display 0, which corresponds to TCP port 6000; an example override is shown below.
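For illustration, a hypothetical override that points the container at display 1 (TCP port 6001) instead of the default would look like this:

```bash
# Sketch: use a non-default X11 display on the host (display 1 = TCP port 6001)
DISP=host.docker.internal:1 ./start_x.sh
```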
If the executable `xauth` is in `PATH`, the startup script automatically disables access control for localhost, so the X11 server is open for connections from the container. If `xauth` is not found, a warning is shown, and you must disable access control manually (see the sketch below).
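If you need to do this manually, the standard `xhost` utility can be used; a sketch for macOS/Windows X servers that listen on TCP:

```bash
# Allow X11 connections from localhost (the container connects via host.docker.internal)
xhost +localhost
```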
For Linux, the local X11 server is accessed through a Unix socket. There are multiple variables to control this:

- `XSOCK=/tmp/.X11-unix` is typically the default location for the Unix sockets. The script will probe if it exists and, if yes, mount it into the container.
- `DISP` has the same function as on macOS and Windows. It is copied to the container's `DISPLAY` variable. If it is not set, the value of `DISPLAY` from the host is copied.
- `XAUTH` defines the file that holds the cookies for authentication through the socket. If it is unset, the host's `XAUTHORITY` contents are used. If those are unset too, it will use `$HOME/.Xauthority`.

The defaults for these variables are tested on native X11 servers, X2Go sessions, and Wayland. The script copies and modifies the cookie from the `.Xauthority` file into a separate, temporary file. This file is then mounted into the container.
Everything should be ready on Linux with a desktop environment/UI (this setup has been tested on X11 and XWayland). For Windows and macOS, the installation of an X11 server is typically required. Due to the common protocol, every X11 server should work, although the following are tested:
- For Windows: VcXsrv
- For macOS: XQuartz. Important: Please enable "Allow connections from network clients" in the XQuartz preferences [CMD+","], tab "Security"
For both X-Servers, it is strongly recommended to enable OpenGL:
- The `start_x.sh` script will take care of that on macOS and set it according to configuration values. Only a manual restart of XQuartz is required after the script is run once (observe the output!).
- On Windows with VcXsrv, we recommend using the utility "XLaunch" (installed with VcXsrv):
  - Multiple windows mode
  - Set the Display Number to 0
  - Start no client
  - Tick all Extra settings: `Clipboard`, `Primary selection`, `Native opengl`, and `Disable access control`
There are multiple ways to configure the start scripts using Bash. Two of them are shown here. First, the variables can be set directly for each run of the script; they are not saved in the active session:
DESIGNS=/my/design/directory DOCKER_USERNAME=another_user ./start_x.sh
The second variant is to set the variables in the current shell session (not persistent between shell restarts or shared between sessions):
export DESIGNS=/my/design/directory
export DOCKER_USERNAME=another_user
./start_x.sh
As those variables are stored in your current shell session, you only have to set them once. After setting, you can directly run the scripts.
In `CMD`, you can't set the variables directly when running the script. So for the `.bat` scripts, it is like the second variant for Bash scripts:
SET DESIGNS=\my\design\directory
SET DOCKER_USERNAME=another_user
.\start_x.bat
This is a new usage mode that might not fit your needs. Devcontainers are a great way to provide a working build environment along with your own project. This mode is supported by the Dev Containers extension in Visual Studio Code.
Option 1: In Visual Studio Code, click the remote window icon on the left, then "Reopen in Container" and "Add configuration to workspace". Enter "ghcr.io/iic-jku/iic-osic-tools/devcontainer" as the template, choose the version of the container, and add more features (probably not needed). The IDE will then restart, download the image, mount the work folder into the container, and start a terminal.
Option 2: Alternatively, you can directly create the configuration file `.devcontainer/devcontainer.json`:
{
  "name": "IIC-OSIC-TOOLS",
  "image": "ghcr.io/iic-jku/iic-osic-tools-devcontainer:2024.09"
}
Either way, the great thing is that you can now commit this file to the repository, and all developers will be asked if they want to reopen their development environment in this container; all they need is Docker and VS Code.
We are open to your questions about this container and are very thankful for your input! If you run into a problem and you are sure it is a bug, please let us know by following this routine:
- Take a look at the KNOWN_ISSUES and the RELEASE_NOTES. Both files may describe problems we are already aware of and may include a workaround.
- Check the existing Issues on GitHub and see if the problem has been reported already. If yes, please participate in the discussion and help by further collecting information.
- Is the problem in connection with the container, or rather a problem with a specific tool? If it is the latter, please also check the sources of the tool and contact its maintainer!
- To help us fix the problem, please open an issue on GitHub and report the error. Please give us as much information as possible without being needlessly verbose, so filter accordingly. It is also fine to open an issue with very little information; we will help you narrow down the source of the error.
- Finally, if you know exactly how to fix the reported error, we are also happy if you open a pull request with a fix!
Thank you for your cooperation!