How to apply the "3-step flow" using Questa #899
This 3-step flow would also solve issue #877.
The good news today is that Siemens has decided to support VUnit with Questa licenses so that issues like this can be solved.
A known limitation is that parallel threads aren't supported.
I started to prototype this and there is a first iteration to try out. In this iteration, optimization is not yet a proper step before the simulation step, which can cause errors when tests run in parallel.
The error is not suppressible, so that doesn't help.
If I run the tests one after another, this doesn't seem to be a concern.
Making optimization a proper step executed before starting the simulation step is probably the solution to this. Until then, please give it a try for your other use cases.
Btw, there is a small example that you can start playing with in https://github.com/VUnit/vunit/tree/three-step-flow/examples/vhdl/three_step_flow
I tried the three-step-flow branch and it causes errors when running with multiple threads. The vopt call will need to be thread safe; apparently you cannot run vopt in parallel with the same output destination. It can be solved either by 1. giving each thread its own vopt output name, or 2. running vopt only once per top level and locking it.
The benefit of 2 is that it only runs vopt once for a top level and not for every simulation, which could save time.
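As a sketch of option 1 (illustrative only, not VUnit's actual implementation; the naming scheme is an assumption), the vopt command line could carry a thread-specific suffix on its output name so that concurrent threads never share an output destination:

```python
import threading

def vopt_command(library, top_level):
    """Build a vopt call whose output name is unique per thread.

    The "_t<ident>" naming scheme is an assumption for illustration;
    the point is only that two threads never share a vopt "-o" target.
    """
    suffix = f"_t{threading.get_ident()}"
    optimized = f"{top_level}_opt{suffix}"
    return ["vopt", "-work", library, top_level, "-o", optimized]
```

Two threads calling this concurrently get distinct `-o` targets, which was the idea behind option 1.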
I have made a prototype of solution 1 described above and it works without problems with multi-threading. This is really the simplest solution to get it working, as separating the vopt step and using a Python lock on it would require a lot of restructuring. PS: Apparently it still has a probability to fail when using multi-threading. It seems that even when using unique vopt artifacts per thread, the common files in the library are also mutated by Questa.
Note also that I found problems with the floating-generics option. My understanding from the manual is that the trailing dot causes the generics of all lower instances to also be floating. For the purpose of VUnit it should be enough that the top-level testbench generics are floating, as changing the generic of a deeper instance is not required or supported. I would assume a floating top-level generic coupled to a lower instance generic would also become floating anyway, without the trailing dot.
It seems running vopt mutates the common files in the library folder even if multiple threads use different vopt output targets. I have verified this by diffing the md5sums of all files in the library folder before and after running vopt. However, just running vsim on an already created vopt folder does not seem to change any md5sum at all. This makes me think a solution needs to ensure that all vopt calls for a single library happen before any simulation starts. PS: Another alternative would be to just duplicate the library folders, with one copy per thread. That would avoid any potential concurrency problem within the simulator itself.
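The before/after checksum diff described here can be reproduced with a short helper (a sketch; where and when vopt runs is up to the caller):

```python
import hashlib
from pathlib import Path

def folder_checksums(folder):
    """Map each file's relative path to the md5 of its contents."""
    folder = Path(folder)
    return {
        str(p.relative_to(folder)): hashlib.md5(p.read_bytes()).hexdigest()
        for p in folder.rglob("*") if p.is_file()
    }

def changed_files(before, after):
    """Return files whose checksum differs between two snapshots,
    including files that only exist in one of them."""
    return sorted(k for k in before.keys() | after.keys()
                  if before.get(k) != after.get(k))
```

Snapshot the library folder, run vopt, snapshot again, and `changed_files` shows exactly which library files the simulator mutated.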
@xkvkraoSICKAG Thanks for trying this out. Yes, adding a thread suffix to the name of the optimized design would be a nice solution. It works in my simple example, but I also see that the library files are modified. As I see it, duplication is the only option. I will give it a try and also consult Siemens to get these observations verified. The reason for using floating generics on all levels is that there are use cases where the
@xkvkraoSICKAG Different directories for each thread is something that was already implemented in another feature branch, so there is code to reuse from that branch (it was dropped when we realized that there were other ways to solve that feature). Library duplication will have to be added, and I have an ongoing discussion with Siemens to figure out to what extent it is needed. It is an overhead that we obviously want to minimize, so it will probably only be activated with the 3-step flow. The 2-step flow will work as before.
I have another observation to report: I tried changing the library folder format. Regarding library duplication: to reduce the overhead it could maybe use a smart approach based on https://docs.python.org/3/library/filecmp.html to only copy over what has changed. Another approach would be to run
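The filecmp idea might look roughly like this (a sketch with placeholder names; VUnit does not currently contain such a helper): only files that are missing or differ get copied into the per-thread duplicate.

```python
import filecmp
import shutil
from pathlib import Path

def sync_library(src, dst):
    """Copy only changed or missing files from src to dst.

    Reduces the overhead of duplicating a library folder per thread
    by skipping files that are already byte-identical in the copy.
    Returns the relative paths that were actually copied.
    """
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for p in src.rglob("*"):
        if not p.is_file():
            continue
        target = dst / p.relative_to(src)
        if not target.exists() or not filecmp.cmp(p, target, shallow=False):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(p, target)
            copied.append(str(p.relative_to(src)))
    return sorted(copied)
```

A second sync after vopt would then only touch the files the simulator mutated, such as `_info`.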
I also tried with
I pushed the local changes I used for testing to a fork. I think this commit may be of interest: it ensures the library mapping arguments are deterministic between calls to vcom/vlog and vopt. Before this change they were subject to the random iteration order of dictionary keys.
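The gist of such a commit is simply to sort the mappings before emitting arguments, so vcom/vlog and vopt always see the same command line (a sketch; the `-L` flag layout is illustrative):

```python
def library_mapping_args(libraries):
    """Emit library mapping arguments in a deterministic order.

    `libraries` maps library names to paths. Sorting the names makes
    every tool invocation see an identical argument list, instead of
    depending on dictionary iteration order.
    """
    args = []
    for name in sorted(libraries):
        args += ["-L", name]
    return args
```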
@xkvkraoSICKAG Can you make a pull request to the VUnit repo?
@LarsAsplund Yes, if you can rebase the
I started to build on a solution where the first test running a testbench performs the optimization, which is then used by the other tests. The optimization is still part of the simulation step, so the second test has to wait for both the optimization and simulation of the first test to complete before it can proceed. That will be fixed later, but I did see something that needs more investigation. Below is a debug log from a testbench with a single test that has 5 different configurations. I'm using two threads for my test run. The first test run gets to optimize the testbench:
Since the second test starts simultaneously in another thread, it blocks while waiting for the first test:
Now the second test case can proceed:
With the first test completed, there is one simulation thread available and the third test
For every test completed, a new one can start:
But what happened to the third test case? It takes forever to complete and is overtaken by the tests starting after it.
Regardless of how many configurations I create of the test, there is always one which completes much later.
It should be said that a single-thread test run works as expected; the first test takes a bit longer to run since it's doing the optimization. Some test runs with two threads also work well. In that case, the second test takes some extra time since it's waiting for the first to complete; after that, everything runs smoothly. This is what a bad run with two threads looks like:
What I see is that it is the simulation process that takes time, and that the problem is intermittent. This is the execution time for 500 configurations of the same test. Considering that I once got this message, I suspect it has to do with the license server. In my case it sits on my computer, so there is no network delay. After trying for 30 seconds it simply fails. I will clean up my code so that you can test on your computers.
Based on my investigations, running vopt on any design in a top level will mutate the library_folder/_info file, so running vopt in a library has to lock the entire library. Thus care has to be taken if there are several testbenches in the same library: they cannot have vopt run in parallel.
Agree, there will be multiple conditions for when vsim and vopt can be run: vsim waits for the testbench to be optimized if it hasn't been already, and vopt waits for the lib to be available. I hope that a second vopt on a library doesn't invalidate previous vopts on that library just because of the altered _info file.
Unfortunately I think the altered _info file does cause problems. I am running our company-internal simulations on my three-step-flow branch. On this branch only the _info file is mutated during vopt, and still it causes test cases to fail with a low probability. To mitigate this, every testbench in the same library must be vopt'ed sequentially, which kind of defeats the common VUnit style of having multiple testbenches in the same library.
I was thinking about the case where you vopt every testbench sequentially. If you vopt A and then B, will you then have to vopt A again before running, just because _info changed, even if the designs didn't change? The vopt lock would be a problem if you have testbenches without test cases; they would run in series. If you have test cases, only the first test's vopt will run on its own and all the vsims will run in parallel. If scheduling is optimised, the vopt for the next testbench could run in parallel with the vsims of the previous one.
Yes, the problem is that the _info mutation forces you to run vopt sequentially for all testbenches within a library before starting any simulation. This becomes a problem if you have a lot of testbenches within the same library.
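That scheduling constraint can be expressed with one lock per library, so vopt calls against the same library serialize while different libraries proceed in parallel (a minimal sketch with illustrative names, not VUnit code):

```python
import threading
from collections import defaultdict

_locks = defaultdict(threading.Lock)  # one lock per library name
_locks_guard = threading.Lock()       # protects lock creation itself

def optimize(library, testbench, run_vopt):
    """Run vopt for one testbench while holding its library's lock.

    vopt calls on the same library serialize, since only one may
    mutate library_folder/_info at a time; different libraries can
    proceed in parallel. `run_vopt` stands in for the real call.
    """
    with _locks_guard:
        lock = _locks[library]
    with lock:
        return run_vopt(library, testbench)
```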
@xkvkraoSICKAG Ok, so what I have now is a prototype that manages locks for the libraries as well. There are corners which I have yet to handle, but it should be useful for testing in some different projects. I've tested it myself with dummy testbenches and two threads, and also with a client that has a single license. What I found was that using two threads improved performance even if there was only one license. Not sure why, but maybe vopt is allowed to run concurrently with vsim on a single license. Running vopt on one testbench in one thread while running vsim on an already optimized testbench in another thread hasn't caused any problems for me. This is when I run with two licenses. Running two vopts at the same time on different libraries also works. If you have a library with 10 testbenches and simulation time is much longer than optimization time, you will eventually have 10 simulations running concurrently, provided you have that many licenses. The problem is if simulation time is relatively small compared to the optimization time. But is optimization needed in those cases? I will push what I have tomorrow so you can try it.
A good run for your reference:
A bad run with
A bad run without
@tasgomes Ok, now I see it. There is a bug in my code which causes a race condition. Please try the latest push.
I also started to remove the things I added lately before finding what I think was the real bug. I suggest testing the last three commits one at a time to see if removing everything was too optimistic.
@tasgomes @xkvkraoSICKAG Have any of you had a chance to test the latest commit with your projects?
@LarsAsplund I am out of office this week. I can retry this next Monday.
@LarsAsplund I am back. Below you find the last three commits, one at a time. I ran each several times without problems, except for the last one, which has an issue that occurs only sometimes.
@tasgomes Thanks. I suspect there can be a slight delay before files owned by one process are released. However, in this case
I'll reach out to Siemens for some more support. As long as we are guessing how it works, we cannot be sure we have a stable solution.
@LarsAsplund I restarted my laptop to make sure everything is clean. Then I executed the test twice. The first time was successful, but the second time failed. Could it also be that the previous run did not close or delete the lock files properly?
Is there any PR for those changes to look at? Especially with the fact that QuestaSim introduced
And a second question: what about Questa Visualizer offline debug support? Did anyone try running that with VUnit?
@SzymonHitachi You can find the work in the https://github.com/VUnit/vunit/tree/three-step-flow branch. There is a simple testbench, https://github.com/VUnit/vunit/tree/three-step-flow/examples/vhdl/three_step_flow, which we use to get a first proof of concept. It works for me but not for @tasgomes, and I'm assuming we have some race conditions. I'm aware of
What I think we need from Siemens at this point is:
Thanks for the links. It seems it still needs ModelSim defined as the simulator, so I guess it requires having either ModelSim/QuestaSim installed, or some environment variable defined to choose one or the other?
@SzymonHitachi Currently, VUnit does not distinguish ModelSim from Questa. To use Questa you need to define:

environ["VUNIT_SIMULATOR"] = "modelsim"
environ["VUNIT_MODELSIM_PATH"] = "C:/intelFPGA_pro/21.2/questa_fe/win64"
Mikael Andersson, Siemens EDA here! Here you need to make some choices when it comes to optimization. The alternatives are:
NOTE! I've tried the first alternative and it is not working, so the second alternative is the best option.
And then, for each test:
You can also generate different optimizations with vopt. Like in the second case, you could generate one for performance in regression only and one for debug:
And if you want best performance, you use the tb_opt version without logging:
Hope this is helpful!
Mikael Andersson, Siemens EDA here again! I have used the axi_dma example. The only change is that the "Random AXI configuration" test has been extended to run a bit longer. I chose to measure the effect on a "clean" start, the way you would in a Continuous Integration environment. This picture is a screenshot from Questa Run Manager, where I defined a flow that is suitable for regression of VUnit-based testbenches:
And in simulations I used:
So what about a performance comparison between the current VUnit flow and the flow above? The blue bar is the first run, the orange one the second run. The main reason that multiple CPUs do not make a bigger difference for VUnit is that each simulation does optimization and creates a lock file which the next simulation needs to wait on until it is removed. A comparison of only the compile time, qrun vs VUnit: Hope this is helpful!
Hi @outdoorsweden,
VUnit currently runs vopt as part of the internal "simulation" step, as that is a simpler first modification of the current VUnit structure. It still reuses previous vopt runs, though: if we have a testbench running 5 times (with different generics), there will only be one vopt run. That is visible in the debug logs of @tasgomes' tests. Currently, I wait for the lock file to be removed before releasing the thread so that a new simulation can begin. That can be improved by checking for lock files before beginning a simulation instead. If the next simulation is towards another library, there will be no wait time at all. Is that what you mean by
Or are you actively using the
Regarding the difference between the 1 CPU run and the 5 CPU run: if I interpret your measurements correctly, there is no difference between qrun and VUnit for the simulation runs. In both cases, the orange bars become 24 time units faster in the 5 CPU case. In the best of worlds, the 5 CPU run would be 5x faster, but in short tests like these the simulator startup time becomes dominant. I think you've confirmed that simultaneous vopts on the same library aren't possible, but how deep does an optimization go? If testbenches A and B are optimized towards different libraries but both use module C, will they both try to optimize C? Or is vopt limited to the top level, such that whatever C design already exists (optimized or not optimized) is the one being used?
This does indeed look like a race condition. I have never seen anything like it. How do you check that vopt has finished before you start vsim? Because what you normally would do is:
So this is the directory structure that I get with Questa Run Manager (I have filtered away some stuff). Each testbench has a qrun.out directory that contains all the libraries, and this is why I can run all the optimizations in parallel.
No, all the machine code generated ends up in the optimized version. So if you want to optimize to different libraries, this might work (I have not tested whether it generates a lock file in the design lib or not). I will update my Questa vrun application so that it tests this concept. BR
@outdoorsweden Just to be clear: there were race conditions that we fixed, and none of us experiences any problems at this point. However, since we weren't sure about the inner workings, it was hard to be fully confident that it would work for everyone.

Initially I used OS synchronization mechanisms to make sure that a vopt operation performed by one thread returns before another thread starts a new vopt in the same lib. A mistake in that code was the cause of one race condition. That approach was not enough, since the lock file may remain on the file system after the vopt call has returned, probably due to delays in the file system. A second vopt on the same lib may see that lock file before it is deleted and then fail. That was the cause of the other race condition we've seen. It was fixed by also checking for the presence of the lock file and waiting for it to disappear before letting the next thread run.

I don't think checking the lock file alone is enough. I've observed that vopt can create and delete a lock file several times during its execution. For that reason, VUnit will use an OS lock to prevent other threads from starting a vopt on a lib already used by another vopt. That OS lock is removed when the first vopt call returns and there are no remaining lock files. This means that we do follow your suggestion of 1. compile, 2. optimize, and 3. simulate for each testbench. However, steps 2 and 3 are performed in several concurrent threads. No constraints are placed on what can be simulated in parallel; the only constraint now is that no concurrent vopts are allowed on the same lib.

We discussed compiling all libraries into multiple directories, or simply making multiple copies of the original set of compiled libs. However, that doesn't feel like a nice solution considering that we can have hundreds of testbenches that would need their own library copy. I'd much rather give up the idea of concurrent vopts and have a single library set.
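A minimal sketch of that two-part mechanism, an OS-level lock plus waiting for the simulator's own lock file to disappear before the next thread may start (the lock-file name and timing are assumptions):

```python
import threading
import time
from pathlib import Path

def run_vopt_locked(library_path, do_vopt, lib_lock, timeout=30.0):
    """Serialize vopt per library, then wait out lingering lock files.

    1. Take an OS-level lock (modelled here by a threading.Lock) so
       only one vopt runs against the library at a time.
    2. Run vopt (`do_vopt` is a stand-in for the real call).
    3. Before releasing, wait for the simulator's lock file to vanish;
       it may briefly outlive the vopt process due to file-system delays.
    The "_lock" file name is an assumption for illustration.
    """
    lock_file = Path(library_path) / "_lock"
    with lib_lock:
        do_vopt()
        deadline = time.monotonic() + timeout
        while lock_file.exists():
            if time.monotonic() > deadline:
                raise TimeoutError(f"{lock_file} was never removed")
            time.sleep(0.05)
```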
Optimizing each design to a separate lib sounds more interesting as it doesn't involve multiple copies. I will try that and see what lock files are being generated. A final question about vopt, as I feel I don't quite understand the basics: what is being optimized? From what I understand we only call vopt on the testbench (test_counter in your example) but never on the design being tested (the counter).
@outdoorsweden I made a quick test where the design is optimized into another directory. It looks like the lock files only appear in the new directory. I think it would work as a workaround if the current solution, which seems to work right now, proves to have some yet-to-be-seen issues. One issue that remains, though, is the slow-test issue discussed earlier in this thread: sometimes a random test case takes much longer than it should, while sometimes all tests run as expected. This problem is present with multiple threads even if I completely disable optimization. I looked briefly at it before and concluded that it is vsim that adds the extra time. Do you have any idea why that is? Currently all threads running vsim do so from the same working directory. I recall we have had discussions about that in the past. Is that a problem? Should all threads run in separate directories?
Vopt will optimize testbench and everything in it, including the design. |
@outdoorsweden This happens on the dummy testbench used in this thread, https://github.com/VUnit/vunit/blob/three-step-flow/examples/vhdl/three_step_flow/tb_example.vhd, which we run with 5 different settings for the value generic:

for value in range(5):
    test.add_config(name=f"{value}", generics=dict(value=value))

I don't think the test itself is significant, so you could also test with the DMA example you had. Run all simulations in parallel in the same directory but remove optimization so that it doesn't play a role. @tasgomes hasn't experienced this, so you may not experience anything either. That would suggest that there is another timing-dependent conflict over shared resources that has to be taken into account. modelsim.ini?
@outdoorsweden I forgot to "check the plug". Initially I got two one-month eval licenses from Innofour, which were later "converted" to one-year licenses... I thought. It looks like my license file only contains a single license, so the delay I see is simply that one vsim call gets stuck while waiting for a license. I will get in touch with Innofour and see if I can get this fixed.
I now got another license and the problems I saw disappeared as expected. I think our concept is good to go. I will review the work and take it from prototype to release quality. If nothing new shows up, I will release it.
Sounds good! Will you include support for visualizer as well? |
@outdoorsweden Eventually we should have the visualizer fully supported but I will release this feature first. |
@outdoorsweden I tried to invoke Visualizer post-simulation. This works if I add the following:

vu.set_sim_option("modelsim.vsim_flags", ["-qwavedb=+signal"])
vu.set_sim_option("modelsim.vopt_flags", ["-debug", "-designfile", "design.bin"])

When trying the live-simulation mode, I run into some problems. Our normal approach to GUI-based simulations is to call
The do file contains the actual
If I try running Visualizer by replacing
Looks like the
If I keep
We use the same do file approach when running batch mode simulations. Any idea how this can be fixed?
I updated the example to emulate what embedded post-simulation Visualizer support would look like. If you run the run script with the
Hi all,
my understanding is that the VUnit framework (always?) uses the "2-step flow" (vcom & vsim) of Questa, where the vopt step is automatically applied during vsim.
Unfortunately, I need an intermediate result of the vopt step to be able to use the Visualizer to analyze the results of a simulation:
the vopt step not only provides an optimized design 'tb_opt' of the original testbench 'tb' for faster simulation, but also a database (design.bin) required by the Visualizer to correlate the simulation results from 'tb_opt' to the design being simulated.
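For reference, the three steps could be laid out roughly as below (a sketch that only builds the command lines; library and file names are illustrative, and the -debug/-designfile/-qwavedb flags are the ones mentioned earlier in this thread for Visualizer support):

```python
def three_step_flow(tb="tb", opt="tb_opt"):
    """Return the three command lines of the flow; nothing is executed.

    The "lib" library name and "design.bin" output are placeholders.
    """
    compile_cmd = ["vcom", "-work", "lib", f"{tb}.vhd"]
    # -debug/-designfile make vopt emit the database Visualizer needs
    optimize_cmd = ["vopt", "-work", "lib", tb, "-o", opt,
                    "-debug", "-designfile", "design.bin"]
    # simulate the optimized design and record signals for Visualizer
    simulate_cmd = ["vsim", f"lib.{opt}", "-qwavedb=+signal"]
    return [compile_cmd, optimize_cmd, simulate_cmd]
```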
How can/should I apply that 3rd step (vopt) within the VUnit framework?
Many thanks for advice
Jochen