ABC_RL

Reinforcement learning for logic synthesis.

This is the source code for our paper "Exploring Logic Optimizations with Reinforcement Learning and Graph Convolutional Network", published at the 2nd ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), Nov. 2020.

The authors are Keren Zhu, Mingjie Liu, Hao Chen, Zheng Zhao, and David Z. Pan.


Prerequisites

Python environment

The project is modified to run under:

Python 3.12.3
PyTorch 2.2.0

The project also depends on other packages such as numpy and six. Please install these dependencies accordingly.
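A quick way to confirm the environment roughly matches the versions above is a small check script. This is only a convenience sketch; the exact dependency list may differ on your setup:

```python
# check_env.py -- sanity-check the Python environment (dependency list assumed, not exhaustive)
import sys

import numpy
import six
import torch

print("Python :", sys.version.split()[0])   # expected around 3.12.3
print("PyTorch:", torch.__version__)        # expected around 2.2.0
print("numpy  :", numpy.__version__)
print("six    :", six.__version__)
```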

abc_py

The project requires abc_py, the Python API for Berkeley ABC.

Please refer to the GitHub page of abc_py for installation instructions.
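After installing abc_py, an import smoke test along the lines below should succeed. The AbcInterface class and the start()/read()/numNodes() calls mirror how this repository drives ABC (see python/rl/env.py), but treat this as a sketch and consult the abc_py page for the authoritative API; the benchmark path is a placeholder.

```python
# abc_py smoke test -- assumes the AbcInterface API used elsewhere in this repository
import abc_py as abcPy

abc = abcPy.AbcInterface()
abc.start()                          # open an ABC session
abc.read("path/to/benchmark.aig")    # placeholder path to a combinational AIG benchmark
print("AIG nodes:", abc.numNodes())  # basic check that the circuit was parsed
```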


Benchmarks

Benchmarks can be found at url.


Usage

The current version runs on combinational .aig and .blif benchmarks. To run the REINFORCE algorithm, first edit python/rl/testReinforce.py to select the benchmark circuit (a hypothetical sketch of this edit is shown below), then execute python3 testReinforce.py.
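For reference, the edit usually amounts to pointing the script's entry point at your benchmark file. The snippet below is a hypothetical illustration; the actual function name, arguments, and default benchmark in testReinforce.py may differ:

```python
# Hypothetical tail of python/rl/testReinforce.py:
# pass the benchmark file and a circuit name to the REINFORCE driver.
if __name__ == "__main__":
    testReinforce("./bench/i10.aig", "i10")  # placeholder (benchmark path, circuit name)
```

Then run python3 testReinforce.py from the directory containing the script (adjust the path to match your checkout).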


Contact

Keren Zhu, UT Austin (keren.zhu AT utexas.edu)
