Part of the code is based on the PBLib (a fork of MiniSAT 2.2), NetworkX and CUDD packages. The presented tool is the result of my work on Transition System and Petri net decomposition [1], [2], [3], [4], [5].
- Creation of Petri Nets from Transition Systems
- Decomposition of Transition Systems into sets of interacting State Machines
- Decomposition of Transition Systems into sets of interacting FCPNs
- Decomposition of Transition Systems into sets of interacting ACPNs (under development; may contain bugs)
Input extensions: .g .sg .ts
Output extensions: .dot .g .aut
- g++
sudo apt install g++
- cmake
sudo apt install cmake
- python 3.10 (required for the networkx installation); the software can also work without networkx, skipping the standard SM decomposition
sudo apt install python3.10
- pip
sudo apt install python3-pip
- networkx
pip install networkx
- boost (necessary for the installation of mCRL2)
sudo apt-get install libboost-all-dev
- mCRL2 (Optional): install it to enable checking the correctness of the FCPN decomposition
sudo apt-get update
sudo apt-get install mcrl2
Before building the project, all dependencies (submodules) have to be downloaded. This can be done in two different ways:
First method: download the current repository recursively:
git clone --recursive https://github.com/viktorteren/Seto.git
Second method: download the repository "as is" and update the dependencies:
git submodule update --init --recursive
Once all dependencies are downloaded, the project can be built:
mkdir build
cd build
cmake ..
make Seto
./Seto <file_path> PN <optional_flags>
D: debug mode
S: verbose info mode
O: write the output PN to a file in .dot format
--INFO: show basic information such as the region generation time and the number of transitions, states, regions, etc.
./Seto <file_path> SM <optional_flags>
If Python and NetworkX are not installed, the decomposition can be performed only with the BDD flag!
BDD: use a BDD for the simultaneous search of k SMs instead of searching for new SMs sequentially
D: debug info mode
O: write the output SMs to files in .dot format
G: write the output SMs to files in .g format
-ALL: execute the decomposition using an exact algorithm to find all minimal independent sets (not only the minimum set required to satisfy EC)
COMPOSE: perform the composition of PNs (currently FCPNs and SMs) and create an output file with the .aut extension
DOT: combined with the COMPOSE flag, creates the output file in .dot format instead of .aut
GE: use an exact algorithm instead of the greedy one during the removal of redundant SMs
NOBOUNDS: can be used only with the BDD flag; it allows the creation of unbounded SMs
MS: use a mixed strategy that chooses between the exact and the approximate algorithm for the removal of redundant SMs
SC: check whether the separate SMs are safe (can be used only with the BDD and NOBOUNDS flags)
Statistics are stored in the 'stats.csv' file (in the folder from which the code is run). The file contains the following data if the BDD flag was not selected:
- decomposition type: SM/SM_exact_removal/SM_mixed_strategy
- file name
- time region generation
- time initial SM decomposition (without optimizations)
- time greedy algorithm
- time merge
- time entire decomposition (including greedy algorithm and merge)
- number of places after the initial creation of the set of SMs
- number of places after the execution of greedy algorithm
- final number of places
- number of transitions after the initial creation of the set of SMs
- number of transitions after the execution of greedy algorithm
- final number of transitions
- number of SMs
- max (place+transition) sum
- max alphabet (for the alphabet, a split event a' is counted as the same event as a)
- max transitions
- average number of places
- variance of the number of places
- average number of transitions
- variance of the number of transitions
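The "max alphabet" statistic above identifies a split event with its original event. A minimal sketch of that counting convention (assuming splits are marked by trailing apostrophes, e.g. a'; the labeling used internally by Seto may differ):

```python
# Count the alphabet of one SM, identifying split events such as
# "a'" or "a''" with their base event "a".
# Assumption: splits are marked by trailing apostrophes; the labeling
# used internally by Seto may differ.

def alphabet_size(transition_labels):
    base_events = {label.rstrip("'") for label in transition_labels}
    return len(base_events)

print(alphabet_size(["a", "a'", "b", "c", "c'"]))  # -> 3 (a, b, c)
```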
If the BDD flag was selected, 'stats.csv' will contain the following data:
- decomposition type: SM_BDD/SM_BDD_DEBUG
- file name
- runtime
- time region generation
- time of the decomposition (including the greedy algorithm, if the BDD flag is not used, and the merge algorithm)
- number of final places
- number of SMs
- number of places after the initial step of decomposition (before greedy algorithm)
- number of places after the greedy algorithm
- maximum alphabet
- average alphabet
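Since the statistics are plain CSV rows, they can be post-processed with any CSV reader. A hedged sketch (the sample header below is an assumption based on the field lists above; whether the real 'stats.csv' contains a header row should be verified against an actual run):

```python
# Sketch: aggregate runs from a stats CSV. The column names and the
# presence of a header row are assumptions for illustration; inspect
# the real 'stats.csv' produced by Seto before relying on them.
import csv
import io
import statistics

sample = """decomposition type,file name,runtime,time region generation
SM_BDD,ex1.g,1.2,0.3
SM_BDD,ex2.g,2.0,0.5
"""

rows = list(csv.DictReader(io.StringIO(sample)))
avg_runtime = statistics.mean(float(r["runtime"]) for r in rows)
print(avg_runtime)  # -> 1.6
```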
In the 'useful_scripts' folder the following script can be executed, but it first has to be moved into the execution folder (usually 'cmake-build-debug'), since it uses some dependencies of the MIS solver:
./Benchmark.sh SM <optional_flags>
The script runs the SM decomposition on all benchmarks in the 'benchmark_dir' folder.
./Seto <file_path> FC <optional_flags>
Statistics are stored in the 'stats.csv' file (in the folder from which the code is run). The file contains the following data:
- decomposition type: FCPN/ACPN/FCPN_BDD/FCPN_DEBUG/ACPN_DEBUG/FCPN_BDD_DEBUG
- file name
- runtime
- time region generation
- time of the decomposition (including the greedy algorithm, if the BDD flag is not used, and the merge algorithm)
- number of final places
- number of FCPNs
- number of places after the initial step of decomposition (before greedy algorithm)
- number of places after the greedy algorithm
- maximum alphabet
- average alphabet
D: debug info mode
B: also execute the SM decomposition, combining the set of non-minimized SMs with the FCPNs before performing the greedy algorithm
O: write the output FCPNs to files in .dot format
NOMERGE: skip the merge step
COMPOSE: perform the composition of FCPNs and create an output file with the .aut extension (does not work with the k-FCPN decomposition)
DOT: combined with the COMPOSE flag, creates the output file in .dot format instead of .aut
I: ignore an incorrect decomposition in order to still produce the output PNs
GE: experimental flag that may not work: instead of executing a greedy search, it performs a Pseudo-Boolean search to find the minimal number of FCPNs. This search has very loose constraints (only: at least one occurrence of each region in at least one FCPN) and gives worse results than the greedy algorithm (not recommended; not compatible with the BDD flag)
BDD: use a BDD to encode the excitation closure constraint, enabling the direct decomposition into a set of FCPNs instead of iteratively searching for new FCPNs
CHECK: check whether each derived FCPN/ACPN has a correct FCPN/ACPN structure; in case of BDD usage, also check whether excitation closure is satisfied
SC: check if the separate FCPNs are safe
SAFE: decompose allowing only standalone safe FCPNs; there is a timeout of 3600 seconds after the start of the decomposition
SS: SAFE SMs: if the heuristic finds an unsafe FCPN, an SM (which is also an FCPN) is added in the next iteration, and the usual FCPN search is performed in the iteration after that. This avoids slow computations while still giving fairly good results. Being based on the sequential search, it cannot be combined with the BDD flag.
OPTIMAL: when using BDD flag ensures the minimum number of FCPNs
NORESET: used only with the sequential FCPN search; disables the reset of learned clauses, which leads faster to a possible deadlock, but with the recent changes the heuristic is switched after a deadlock
CS: count the number of derived SMs if SAFE or (BDD+SS) flags were used
COUNTER: used only with the FC SAFE flag combination; changes the approach for the search of new FCPNs: if a region is part of an unsafe FCPN, the counter of this region is increased; the higher the counter of a region, the lower the probability of picking it
COUNTER_OPTIMIZED: used only with the FC SAFE flag combination; changes the approach for the search of new FCPNs: the counter of a region is increased if the region is part of an unsafe FCPN and is a post-region of a fork transition; the higher the counter of a region, the lower the probability of picking it
UNSAFE_PATH: print the path leading to the unsafe marking
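The counter-based heuristic behind the COUNTER flags can be sketched as follows (the weight function 1/(1+counter) is an illustrative assumption, not necessarily the one implemented in Seto):

```python
# Sketch of the COUNTER idea: regions that took part in unsafe FCPNs
# accumulate a counter, and the selection probability of a region
# decreases as its counter grows. The concrete weight function below
# is an assumption for illustration.

def selection_weights(counters):
    # counters: region name -> number of unsafe FCPNs it appeared in
    weights = {region: 1.0 / (1 + c) for region, c in counters.items()}
    total = sum(weights.values())
    return {region: w / total for region, w in weights.items()}

probs = selection_weights({"r0": 0, "r1": 2, "r2": 5})
print(max(probs, key=probs.get))  # -> r0 (never part of an unsafe FCPN)
```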
In the root folder the following script can be executed:
./Benchmark.sh FC <optional_flags>
The script runs the approximate FCPN decomposition algorithm on all benchmarks in the 'auto_benchmark_dir' folder.
./Seto <file_path> AC <optional_flags>
See the FCPN flow (approximate algorithm).
This is the maximal decomposition presented in [6], adapted to Transition Systems. The algorithm is as follows:
- Creation of minimal regions from TS.
- Creation of a single PN from the regions.
- Minimization of the PN.
- Extraction of single components composed of places and their adjacent transitions.
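The extraction step can be sketched as follows (representing the minimized PN as a map from each place to its adjacent transitions is a simplifying assumption):

```python
# Sketch of the component-extraction step: each component consists of
# one place together with its adjacent (pre- and post-) transitions.
# The dict-based net representation is an assumption for illustration.

def extract_components(adjacency):
    # adjacency: place name -> set of adjacent transition names
    return [{"place": p, "transitions": sorted(ts)}
            for p, ts in sorted(adjacency.items())]

net = {"p0": {"t0", "t1"}, "p1": {"t1", "t2"}}
for component in extract_components(net):
    print(component["place"], component["transitions"])
# p0 ['t0', 't1']
# p1 ['t1', 't2']
```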
./Seto <file_path> CC <optional_flags>
The SM and AC/FC/KFC flags can be used together.
Creation of a .dot TS/ECTS file starting from a .g or .ts TS.
./Seto <file_path> (TS | ECTS) <optional_flag>
--INFO: if the TS option is selected, only the numbers of transitions, states and events are shown; if the ECTS option is selected, the number of regions and event splits is also shown
AUT: export the resulting TS/ECTS in the Aldebaran format (.aut) instead of .dot
The Graphviz library is required!
Graphviz installation:
sudo apt install graphviz
Command for the creation of a PostScript PN/TS file:
dot -Tps filename.dot -o outfile.ps
To check the correctness of the decomposition, the mCRL2 tool can be used (https://www.mcrl2.org/web/user_manual/index.html).
Steps for the verification:
- Generate the TS with AUT extension
./Seto <file_path> (TS | ECTS) AUT
- Generate the FCPN/ACPN with COMPOSE flag:
./Seto <file_path> (FC | AC) COMPOSE
- Verify the bisimulation between the initial TS (e.g. example_TS.aut) and the composition of the PNs (e.g. example_FCPN_composed.aut):
ltscompare --equivalence=bisim <TS_aut_file> <composed_PN_aut_file>
- The parser for .ts files allows only the syntax with integers: the places and labels have to start from 0 and the maximum value has to correspond to the number of places/labels - 1 (no index can be skipped).
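Assuming this constraint means that the integer indices must be dense (start at 0, no gaps, maximum equal to count - 1), it can be checked with a small sketch like this (parsing of the actual .ts syntax is intentionally left out):

```python
# Sketch: check that a set of integer indices is dense, i.e. it starts
# at 0 and the maximum equals count - 1 with no gaps. This does not
# parse the actual .ts file format.

def indices_are_dense(indices):
    index_set = set(indices)
    return bool(index_set) and index_set == set(range(len(index_set)))

print(indices_are_dense([0, 1, 2, 3]))  # -> True
print(indices_are_dense([0, 2, 3]))     # -> False (index 1 skipped)
```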
- There is no check on .ts inputs.
- It is not possible to run two parallel instances of the program on the same input file, since there is a file dependency used by the SAT solver; therefore the results could be wrong.
[1] Viktor Teren, Tiziano Villa (2024). Decomposition of sequential and concurrent models. Ph.D. thesis.
[2] Viktor Teren, Jordi Cortadella, Tiziano Villa (2023). Generation of Synchronizing State Machines from a Transition System: A Region–Based Approach. International Journal of Applied Mathematics and Computer Science, 33, 133-149.
[3] Viktor Teren, Jordi Cortadella, Tiziano Villa (2023). Seto: a framework for the decomposition of Petri nets and transition systems. Proceedings of 26th Euromicro Conference on Digital System Design (DSD), 669-677.
[4] Viktor Teren, Jordi Cortadella, Tiziano Villa (2022). Decomposition of transition systems into sets of synchronizing Free-choice Petri Nets. Proceedings of 25th Euromicro Conference on Digital System Design (DSD), 165-173.
[5] Viktor Teren, Jordi Cortadella, Tiziano Villa (2021). Decomposition of transition systems into sets of synchronizing state machines. Proceedings of 24th Euromicro Conference on Digital System Design (DSD), 77-81.
[6] Van der Aalst, W. M. (2013). Decomposing Petri nets for process mining: A generic approach. Distributed and Parallel Databases, 31, 471-507.