Missing CSV_INNER_OUTPUT columns for single models in parallel #1753
One more addition to this issue - actually this is a different topic connected to the CSV in parallel.
Hi @tandreasr, thanks for reporting this back. Will look into this to have it fixed.
Hi @tandreasr, I am looking at your issue - maybe you can provide a minimal example? I was a bit confused by: "Instead they only contain the convergence summary for all those models processed by the corresponding MPI rank." As for your question about keeping the output the same: so far we have stayed clear of synchronizing over all processes to gather output in a single file where possible, hence the *.p0.csv format. Also, IMS additionally has a solver parameter (alpha/omega) that we do not print when using the parallel PETSc solver. Other than that, we could definitely look into improving the parallel CSV output. I did reproduce the 'uninitialized' issue you mentioned in your second comment. Will have a look into that.
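As a minimal sketch (not part of the official MODFLOW 6 tooling), the per-rank files implied by the *.p0.csv naming could be gathered for side-by-side inspection along these lines; the zzz.inner base name is taken from the file names quoted later in this thread and is only an example:

```python
# Minimal sketch: collect the per-rank inner-iteration CSV files written by a
# parallel (PETSc) run. The "zzz.inner.p*.csv" pattern is an assumption based
# on the *.p0.csv naming mentioned above; adjust it to your simulation.
import glob

import pandas as pd

frames = []
for path in sorted(glob.glob("zzz.inner.p*.csv")):
    df = pd.read_csv(path)
    df["rank_file"] = path  # remember which MPI rank's file each row came from
    frames.append(df)

# One table with all ranks' inner-iteration records, for side-by-side comparison
all_ranks = pd.concat(frames, ignore_index=True)
print(all_ranks.head())
```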
You are right: those are exactly the columns I'm missing, maybe I was a little unclear :-)
Hi @tandreasr, this should be fixed now. Please reopen if not.
Me again :-)
I just realized another behaviour, which should be corrected - at least in my opinion. And yet another question concerning assignment of models to MPI ranks:
Hi @tandreasr, this was actually changed deliberately: these columns show the maximum residual; it is not a delta with respect to the previous. So my guess is that if you rerun the serial version, you will have consistent naming again.
About your question with respect to assigning models to ranks: we have a PR coming up any day now that allows an HPC configuration file with that capability.
"I just realized another behaviour, which should be corrected - at least in my opinion. I see what you mean, I will have a look into that. |
Thank you!
Hi @mjr-deltares, here solution_inner_dvmax, solution_inner_dvmax_model and solution_inner_dvmax_node are all 0, which is obviously wrong.
Hi @tandreasr, thanks again for reporting and sorry for the delay. We had the MODFLOW and More conference the other week and I have been away from my desk for some time. OK, I see. Would you mind pointing me to or sharing the model and steps to reproduce?
Hi @mjr-deltares, and here is the MPI call which reproduces the behaviour:
"E:\Modflow6\mf6.5.0_win64par\bin\mpiexec.exe" -np 8 "E:\Modflow6\mf6.5.0_win64par\bin\mf6.exe" -P
When called with just 8 MPI ranks, it occurs inside both files zzz.inner.p4.csv and zzz.inner.p7.csv. If you have any further questions, please don't hesitate to contact me!
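For reference, a small sketch (assuming pandas and the column names quoted earlier in this thread; the actual header may differ) of how the two affected per-rank files could be scanned for the zero rows described here:

```python
# Sketch: flag inner-iteration rows where solution_inner_dvmax is zero in the
# two per-rank files named above, and show the related location columns.
# Column names are taken from the discussion in this thread.
import pandas as pd

cols = ["solution_inner_dvmax", "solution_inner_dvmax_model", "solution_inner_dvmax_node"]

for path in ["zzz.inner.p4.csv", "zzz.inner.p7.csv"]:
    df = pd.read_csv(path)
    suspect = df[df["solution_inner_dvmax"] == 0]
    print(f"{path}: {len(suspect)} row(s) where solution_inner_dvmax is zero")
    print(suspect[cols])
```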
Hi @tandreasr, what is going on here is that dvmax equals zero for all cells in processes 4 and 7 on that first iteration. There is no change in the solution for those domains, so it is technically not possible to assign a specific cell and model to the change. I could maybe modify this and not write the zeros but leave those fields blank. Would that make more sense to you?
Hi @mjr-deltares, OK, I see. In order to read the CSV files of ranks which were assigned just one model, what would work for me is if you leave all other solution_inner_dvmax_* columns at 0 for that case. Best regards
There are some complications for that case. And what if you had those extra columns in parallel mode that you asked about before - the ones for the single model, duplicating what's in the solution convergence data part?
Just curious, can you share what you are working on with parallel MODFLOW?
The extra columns would be fine as well, but would add a lot of overhead regarding file size. Regarding your second question:
Thanks for your explanation, still curious to learn what company that would be ;-) About the CSV, the possibilities I see:
I don't like option 1; option 3 would be easiest for me :-) but then the CSVs are not really an interpretable set of files on their own, so that's why I am leaning towards option 2, at the cost of some redundancy.
We have also discussed the interpretation of the CSV files for the parallel case in general. I guess we are considering a restructuring to make things more transparent to the user. Just so you are aware of this.
The company is www.ihu-gmbh.com and it's a German engineering office of about 80 employees :-) As for your suggestions: Anyway, just let me know what your decision is :-) Regards
Hi @tandreasr, I have implemented option 2. Hopefully this now works for you. Please open a new issue if there are more or other issues coming up with the parallel version. Cheers, Martijn
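With option 2 in place (the per-model columns duplicated into the per-rank CSVs), a per-model convergence overview could be assembled along these lines; this is only a sketch assuming the "MODELNAME"_inner_dvmax column pattern quoted in the original report, not the definitive output format:

```python
# Sketch: summarise the worst inner dvmax per model across all per-rank CSVs,
# assuming per-model columns named "<MODELNAME>_inner_dvmax" (pattern quoted in
# the original issue text; the real header may differ).
import glob

import pandas as pd

worst = {}
for path in glob.glob("zzz.inner.p*.csv"):
    df = pd.read_csv(path)
    for col in df.columns:
        if col.endswith("_inner_dvmax") and not col.startswith("solution_"):
            model = col[: -len("_inner_dvmax")]
            value = df[col].abs().max()
            worst[model] = max(worst.get(model, 0.0), value)

# Models with the largest inner dvmax first, i.e. the likeliest convergence trouble spots
for model, dvmax in sorted(worst.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model}: worst |inner dvmax| = {dvmax}")
```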
Hi Martijn,
Hi again,
while testing the new parallel capabilities I realized that the CSV file(s) created by using the above-mentioned option in parallel lack(s) all columns related to single models ("MODELNAME"_inner_dvmax etc.).
Instead, they only contain the convergence summary for all those models processed by the corresponding MPI rank.
Do you see any chance to implement that output the very same way as IMS does, within a "foreseeable" future?
Meaning: add those columns to the CSV belonging to the MPI rank, for all models processed by that rank.
I think that sometimes it's quite important to obtain an overview of which models have convergence issues
(not just the worst one listed in solution_inner_d?max_model).
And anyway, keeping the CSV format identical between IMS and PETSc seems a logical choice?
Regards
Andreas
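As an illustration of the request above, a small sketch that compares the header of a serial IMS inner CSV against one per-rank parallel CSV and lists the per-model columns that are missing; the file names are placeholders, not MODFLOW 6 defaults:

```python
# Sketch: list columns present in the serial (IMS) inner CSV but absent from a
# per-rank parallel (PETSc) CSV. File names below are placeholders.
import pandas as pd

serial_cols = set(pd.read_csv("serial.inner.csv", nrows=0).columns)
parallel_cols = set(pd.read_csv("zzz.inner.p0.csv", nrows=0).columns)

for col in sorted(serial_cols - parallel_cols):
    print("missing in parallel CSV:", col)
```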