Loading a cube giving different AuxCoord behaviour? #4820
-
I’m still quite new to using iris so this might not even be the right question to ask, apologies. I am working with data which spans six hours, with three sets of data for each hour. I have some working code which reads in a given number of raw .pp files as:
Where the filename_list spans data from the entire six-hour period. The callback simply adds a realization coord to the cube if it is not present in the input data. After adding the cubes for rain and snow together, it constructs a cube which looks something like this:
The time AuxCoord spans both the realization and the unnamed dimension, as desired. I am trying to do the same for a sample of a much larger dataset, where keeping the full .pp files around would be impractical. So instead, after retrieving the data for the individual times, I have extracted the relevant fields using code similar to the above and saved the resulting cube as smaller .nc files. Now, however, when trying to load data for multiple times into a single cube using iris.load_cube(fnms), iris gives an error:
After trying to load just the three sets of data for a single time, I find that the cube looks something like this:
where, now, it appears that the time coord cannot be expanded along the dimension which has 126 values. I have also tried to load individual cubes at different times into a single cube using CubeList.merge_cube(), but I run into the same issue. Is there any way that I can recreate the same behaviour during both loads? Any help or suggestions would be very much appreciated. Many thanks.
Replies: 3 comments 3 replies
-
You should be able to avoid the "failed to merge into a single cube" message if you use "iris.load" instead of "iris.load_cube". I think what is going on is that the individual netCDF files contain multiple timepoints, so that they have a time dimension -- like your example immediately above.
So, the key problem is that to combine several such cubes into a single one, you need to "concatenate" them, but iris loading can only "merge" cubes. (By contrast, PP files don't themselves contain a time dimension: each 2D field loads as a separate "raw cube", and the time dimension is constructed by merging these during loading.) To be frank, this is an annoying and ultimately unnecessary distinction, but it is where we are at present with our legacy code. In the meantime, you may be able to use something like … Does that help at all?
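To make the merge/concatenate distinction concrete, here is a pure-Python analogy (plain nested lists standing in for cubes, not the iris API): merging builds a *new* leading time dimension out of slices that each carry only a scalar time, while concatenating joins cubes along a time dimension that *already exists* -- which is why PP loading (many 2D fields) merges happily, but the saved netCDF cubes (which already have a time axis) need concatenation instead.

```python
# Toy analogy of iris "merge" vs "concatenate", using nested lists
# in place of cubes. This is NOT iris code, just an illustration.

def merge(fields):
    """Each 'field' is a 2D slice carrying a scalar time coordinate.
    Merging stacks the slices, creating a NEW leading time dimension."""
    return [f["data"] for f in sorted(fields, key=lambda f: f["time"])]

def concatenate(cubes):
    """Each 'cube' already HAS a leading time dimension (a list of
    2D slices). Concatenating joins them along that existing axis."""
    joined = []
    for cube in cubes:
        joined.extend(cube)
    return joined

# Three PP-style fields, one per time -> merge creates the time axis.
pp_fields = [{"time": t, "data": [[t, t]]} for t in (2, 0, 1)]
merged = merge(pp_fields)        # shape (3, 1, 2): time axis is new

# Two netCDF-style cubes that each already span several times
# -> they must be concatenated, not merged.
nc_cubes = [[[[0, 0]], [[1, 1]]], [[[2, 2]], [[3, 3]]]]
joined = concatenate(nc_cubes)   # shape (4, 1, 2): time axis extended
```

The shapes in the comments mirror the thread's situation: `iris.load_cube` only attempts the merge path, so cubes that already have a time dimension fail to combine, whereas `iris.load` returns a CubeList that can then be concatenated.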
-
One difference between your two examples is that only the second has a "forecast_period" coordinate. This seems odd to me if they come from the same data set. I think Iris gets the "forecast_period" from the
-
Archiving "answered" Q+As
This is because "merge" works to create a new dimension from scalar coordinates, whereas "concatenate" combines cubes along an existing dimension -- see the distinction described in the Iris documentation.