Problem with more than one fixed vertex #204
Comments
Hi @trvto, thanks for reaching out. That's indeed very interesting. I was able to reproduce the first example. @TobiasHeuer / @kittobi1992 I attached a test hypergraph and the corresponding fixed vertex file to this comment. Do you have any ideas about what could be going wrong? If I find some time in the next couple of days, I can also try to debug this.
Some additional logs:
@SebastianSchlag Ok, thanks for having a look!
Hi, sorry for the late answer. I will take a look at the problem tomorrow and let you know my findings.

Best,
I did some investigation today and found the root cause of the issue. It is not really a bug. The problem lies in the design of our initial partitioning algorithm in the presence of fixed vertices. The high-level steps that we do during initial partitioning are as follows:

1. Remove the fixed vertices from the hypergraph.
2. Compute an initial partition of the remaining hypergraph.
3. Assign the fixed vertices to their blocks afterwards (via a matching between fixed vertices and blocks).
In your example, we would remove the fixed nodes (0 and 3) before computing the initial partition, so the partitioner never sees their connections to the remaining nodes.

An alternative approach to treat fixed vertices during initial partitioning is to contract all fixed vertices assigned to a block into one supernode instead of removing them, and then compute an initial partition of the contracted hypergraph. This way you don't lose the connectivity information of the fixed vertices. However, when we implemented the feature we compared both approaches and found that removing fixed vertices works better than contracting them. In real-world use cases, the number of fixed vertices is usually much smaller than the actual number of nodes, and removing them does not really destroy the structure of the hypergraph. Furthermore, the matching approach gives slightly more flexibility in where to assign fixed vertices after initial partitioning.

I hope I could answer all your questions. Let me know if there are any follow-up questions.

Best,
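The contraction alternative described above can be sketched in a few lines. This is a toy illustration, not KaHyPar's implementation; the function name, the edge-list representation, and the example fixed assignment are all made up for this sketch.

```python
# Sketch: contract all fixed vertices of each block into one supernode,
# as in the alternative initial-partitioning approach described above.
# Representation (plain edge lists) and names are illustrative only.

def contract_fixed(num_nodes, edges, fixed):
    """fixed: dict node -> block. Returns (node_map, remapped edges)."""
    # one supernode per block that has fixed vertices, placed first
    blocks = sorted(set(fixed.values()))
    supernode = {b: i for i, b in enumerate(blocks)}
    node_map = {}
    next_id = len(blocks)
    for v in range(num_nodes):
        if v in fixed:
            node_map[v] = supernode[fixed[v]]
        else:
            node_map[v] = next_id
            next_id += 1
    new_edges = [sorted({node_map[v] for v in e}) for e in edges]
    # drop edges whose pins all collapsed into a single supernode
    new_edges = [e for e in new_edges if len(e) > 1]
    return node_map, new_edges

# The 6-node example from this issue: fix node 0 to block 0, node 3 to block 1.
edges = [[0, 1], [0, 2], [3, 4], [3, 5]]
node_map, new_edges = contract_fixed(6, edges, fixed={0: 0, 3: 1})
# new_edges == [[0, 2], [0, 3], [1, 4], [1, 5]] -- connectivity to the
# fixed nodes (now supernodes 0 and 1) is preserved, not discarded.
```

Because the supernodes keep their incident edges, the initial partitioner still "feels" the pull of the fixed vertices, which is exactly the connectivity information the removal-based approach loses.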
@kittobi1992 Would we need significant changes to the initial partitioning code to make this alternative approach work? If not, we could perhaps just add a parameter that allows the user to switch to this mode. To me, it seems more robust than the current approach anyway.
@kittobi1992 Thanks for the detailed explanation! The example I gave is very simplified and I'm not sure how important these kinds of edge cases will be; the current algorithm may work well enough in the actual implementation. I have thought of an alternative that I could use if necessary, but I have one question about it. In my workaround, I would not fix any nodes but would instead add negative-weight edges between all the nodes I wanted to fix. With weights of large enough magnitude, the nodes will almost certainly be assigned to different blocks (which is what I wanted to achieve by fixing them in the first place). This seems to work in my tests, but since I haven't seen it documented anywhere, I wanted to ask: is it okay to use negative edge weights? Thanks again
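The workaround above can be sketched as plain data construction. This is a hedged illustration: the function name, the edge-list shape, and the penalty value are all made up; only the "add a strongly negative edge between nodes that should be separated" idea comes from the comment.

```python
# Sketch of the workaround described above: instead of fixing nodes,
# add a negative-weight edge between every pair of nodes that should
# end up in different blocks. Names and the penalty are illustrative.
from itertools import combinations

def add_repulsion_edges(edges, weights, separate, penalty=-1000):
    """separate: nodes that should land in pairwise different blocks."""
    edges, weights = list(edges), list(weights)
    for u, v in combinations(sorted(separate), 2):
        edges.append([u, v])
        weights.append(penalty)  # large negative weight -> edge "wants" to be cut
    return edges, weights

# The issue's 6-node example, separating nodes 0 and 3 instead of fixing them:
edges, weights = add_repulsion_edges(
    [[0, 1], [0, 2], [3, 4], [3, 5]], [1, 1, 1, 1], separate=[0, 3])
# edges gains [0, 3] with weight -1000
```

Note that this grows quadratically in the number of separated nodes (one edge per pair), which is fine for one anchor per block but worth keeping in mind for larger sets.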
So, negative edge weights are ... interesting. As far as I know there is nothing in the code that prevents using negative edge weights; however, we don't really support them. Properly supporting negative weights (i.e., achieving high quality in the general case with negative weights) would require significant algorithmic changes, and it is generally not a well-researched topic. That said, as mentioned, our code probably still works with negative weights, so if you get good results in your tests, I suppose you might as well do it this way. I would, however, recommend also running some tests in debug mode to see whether any assertions in our code are violated by negative weights.
@kittobi1992 @N-Maas So after some more testing I have found the following: […]
So I have some further questions. My use case is essentially this: I would like the objective function to take into account a block-dependent penalty (or reward) for placing each vertex into a certain block, where the penalty is different for each vertex-block combination. I wanted to achieve this by adding one fixed vertex per block and edges between fixed and non-fixed vertices that represent the "reward". However, because of the way fixed vertices are treated (explained above), this doesn't work well in some important cases. Do you have any suggestions for how I could achieve what I need? I have also started using […]

Thanks!
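The modelling idea in the comment above (one fixed anchor vertex per block, reward edges from regular vertices to anchors) can be written down concretely. This is a sketch of the encoding only; the function name, the `rewards` table, and the anchor-id convention are assumptions made for illustration.

```python
# Sketch of the encoding described above: one fixed "anchor" vertex per
# block, plus a weighted edge from each regular vertex to each block's
# anchor encoding the vertex-block reward. All numbers are illustrative.

def reward_edges(num_nodes, k, rewards):
    """rewards[v][b]: reward for placing vertex v in block b.
    Anchor vertices get ids num_nodes .. num_nodes + k - 1 (one per block);
    each anchor would then be fixed to its block."""
    edges, weights = [], []
    for v in range(num_nodes):
        for b in range(k):
            w = rewards[v][b]
            if w != 0:
                edges.append([v, num_nodes + b])  # connect v to block b's anchor
                weights.append(w)
    return edges, weights

# Two regular vertices, k=2: vertex 0 prefers block 0, vertex 1 prefers block 1.
edges, weights = reward_edges(2, 2, rewards=[[5, 0], [0, 3]])
# edges == [[0, 2], [1, 3]], weights == [5, 3]
```

An uncut reward edge means the vertex sits in its anchor's block and collects the reward; the issue discussed above is precisely that the current fixed-vertex handling removes the anchors during initial partitioning, so these edges stop steering the solution.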
Hi @trvto,

Can you not map the negative weights into a positive range by shifting them, e.g., by adding the magnitude of the most negative weight to all edges? Process mapping can only introduce penalties between block pairs. It should be possible to model the problem with a custom objective function, but it would require some non-standard changes, as the partitioner has to read the vertex-block penalties from somewhere.

Best,
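The shift suggested above is a one-liner; a minimal sketch (function name is illustrative):

```python
# Minimal sketch of the suggestion above: translate all edge weights by the
# magnitude of the most negative one, so that every weight becomes >= 0.

def shift_nonnegative(weights):
    lo = min(weights)
    offset = -lo if lo < 0 else 0
    return [w + offset for w in weights], offset

shifted, offset = shift_nonnegative([1, 1, -1000, 1])
# shifted == [1001, 1001, 0, 1001], offset == 1000
```

One caveat, which the reply below this comment also raises: pairs of vertices with *no* edge between them carry an implied weight of 0, and the shift does not apply to those, so the relative cost of solutions can change unless the non-edges are shifted (i.e., materialized) as well.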
The purpose of the negative weight is to incentivise the edge between the two vertices to be cut, so that they are assigned to different blocks. To achieve the same effect with positive weights, I would also need to apply the shift to all "edges" with an implied weight of 0, which means adding an edge between every pair of non-connected vertices in the graph. I could imagine that giving a large performance penalty, but it may be okay in some cases. I use process mapping for a different problem; I just mentioned it as the reason I am now using […]
I have found this paper: https://inria.hal.science/hal-01277392/document, which discusses the fixed vertex problem and proposes an algorithm for it. Are you aware of that paper? It seems to be exactly what I need, but I'm not sure whether it is implemented anywhere or well tested. Do you think it could be easily incorporated into […]?
I have seen this paper a couple of years ago. While the algorithm is described for graph partitioning, it should be possible to generalize it to hypergraphs.
Assuming that the generalization works, it should be possible to integrate it into […]
Okay, I found the implementation of that algorithm here: https://gitlab.inria.fr/metapart/libgraph. I got it running and ran some tests. I played around with some of the settings but didn't get great results; I'm not sure whether it's the implementation or the algorithm, but I don't think I'll be using it.

On another note: I ran the same tests that I wrote about in my original comment above using […]

In any case, it appears that my problems are solved by using the graph partitioning algorithm of […].

Thanks for all your answers.
Hello @trvto,

I think I experimented with this algorithm when I implemented the fixed vertex support in […]. It is nice to hear that […]. The main difference is that […].

By the way, it is great to hear that someone is using the process mapping algorithm of […].

Best,
@kittobi1992 Okay, good to know that […].

I am looking into routines to improve quantum compilation. I used process mapping to map qubits to starting locations within the quantum computer, and I use partitioning with fixed vertices for remapping steps when necessary.
For my algorithm I need one fixed node per partition block. Some simple test runs have given surprising results that look like a bug to me, but this could also be a misunderstanding.
Example:
I have 6 nodes, all with weight 1, and set the number of partitions to `k=2`. I then define four weight-1 edges `[[0, 1], [0, 2], [3, 4], [3, 5]]`. After partitioning I expect nodes `[0, 1, 2]` to be in one block and `[3, 4, 5]` to be in the other; the cut cost is then 0. With no fixed nodes, that is what I get. However, if I fix node `0` to block `0` and node `3` to block `1`, I get the partitions `[0, 1, 5]` and `[2, 3, 4]` with a cut cost of `2`. I would expect fixing these nodes to have no effect on the results.

I am using `cut_kKaHyPar_sea20.ini` and the Python library. I have tried all three values for `c-fixed-vertex-acceptance-criterion` and also tried the other suggested settings from #145 (comment) (seems like it could be related to #145). These settings have had no effect on the results.

I have tried an analogous test with 9 nodes, 3 partitions, and edges `[[0, 1], [0, 2], [3, 4], [3, 5], [6, 7], [6, 8]]`, and I get similarly puzzling results when fixing nodes to blocks.

Am I missing something? Any suggestions?
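For concreteness, the 6-node example above can be expressed in the CSR-style arrays (hyperedge index vector plus flattened pin list) that hypergraph partitioners such as KaHyPar consume. The array construction below is a plain sketch; the commented-out binding call is illustrative and its exact name/signature may differ between versions.

```python
# The issue's example (6 nodes, k=2, four weight-1 edges) converted to the
# CSR-style hypergraph arrays used by KaHyPar-like tools: an index vector
# marking where each hyperedge's pins start, plus one flattened pin list.

edges = [[0, 1], [0, 2], [3, 4], [3, 5]]

hyperedge_indices = [0]
hyperedges = []
for e in edges:
    hyperedges.extend(e)
    hyperedge_indices.append(len(hyperedges))

# hyperedge_indices == [0, 2, 4, 6, 8]
# hyperedges        == [0, 1, 0, 2, 3, 4, 3, 5]

# Illustrative only -- exact Python-binding names may differ:
# hg = kahypar.Hypergraph(6, 4, hyperedge_indices, hyperedges, 2)
# then fix node 0 to block 0 and node 3 to block 1, as in the report above.
```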
Thanks!