NetcdfFile.openInMemory() fails in 4.3 #32
In NetCDF 4.2, I do the following, which works: I read the raw bytes of a GRIB file and hand them to NetcdfFile.openInMemory(). However, in NetCDF 4.3.19 this now fails, because opening the in-memory "file" seems to force the creation of an index on disk, and something goes haywire in that process. See stack trace below.

Is there any way to disable index creation, or a better strategy to use? My use case is one where I'm handed the bytes of a GRIB file, which doesn't necessarily exist anywhere on disk.

Thanks.
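A minimal sketch of that call pattern, assuming NetcdfFile.openInMemory(String, byte[]); the file name and the local read are illustrative stand-ins for bytes that arrive from elsewhere:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import ucar.nc2.NetcdfFile;

public class GribInMemoryExample {
  public static void main(String[] args) throws Exception {
    // Obtain the raw bytes of a GRIB file; a local read stands in here for
    // bytes handed over by some other system (e.g. a Hadoop input split).
    byte[] raw = Files.readAllBytes(Paths.get("example.grib2")); // illustrative path

    // Open the bytes as a NetcdfFile; the GRIB file need not exist on disk.
    NetcdfFile ncfile = NetcdfFile.openInMemory("example.grib2", raw);
    try {
      // Read variables, attributes, etc. as needed.
      System.out.println(ncfile.getVariables());
    } finally {
      ncfile.close();
    }
  }
}
```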
Comments

ping
Hi Ken: Thanks for the ping, I didn't see this; not sure why. I'll have to think about this use case, I hadn't considered it. How big is the GRIB file? John
They can get pretty big. I'm doing this in a Hadoop context, so I want to keep my files big (see https://blog.cloudera.com/blog/2009/02/the-small-files-problem/). Right now they're typically about 1 or 2 GB. Currently I read the entire thing at once, but on my to-do list is to read smaller chunks at a time.
I'm not sure why 4.2 worked, as we have always written indexes for GRIB. The problem is that there's a lot of processing one needs to do to convert a GRIB file to a netCDF file. We write both a .gbx9 index and an .ncx index. You might be able to use a lower-level interface. What exactly are you trying to do, e.g., what do you do once you have a NetcdfFile?
I'm actually just using the NetCDF libraries to read data from a GRIB file; I don't need to convert from one file format to another. Once I've extracted the relevant data, I'm done using the file.
Sorry, I didn't mean write to a netCDF file, I meant create a NetcdfFile object. Do you read all of the data, or are you using netCDF index subsetting to extract parts of it? Do you reuse the GRIB files, or is it one-time use?
You can just scan the GRIB file and create GRIB records in memory, without creating an index. See https://www.unidata.ucar.edu/software/thredds/current/netcdf-java/reference/formats/GribFiles.html, at the end, under "lower level interface to GRIB". A sketch of that approach follows.
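A sketch of that lower-level scan for a GRIB-2 file, assuming the Grib2RecordScanner interface described on that page plus InMemoryRandomAccessFile for bytes already in memory (record accessors like readData are assumptions; exact names may differ by version):

```java
import java.io.IOException;
import ucar.nc2.grib.grib2.Grib2Record;
import ucar.nc2.grib.grib2.Grib2RecordScanner;
import ucar.unidata.io.InMemoryRandomAccessFile;
import ucar.unidata.io.RandomAccessFile;

public class GribScanExample {

  // Iterate GRIB-2 records straight from in-memory bytes; no NetcdfFile is
  // created, so no .gbx9/.ncx index files are written to disk.
  static void scan(byte[] raw) throws IOException {
    RandomAccessFile raf = new InMemoryRandomAccessFile("inmemory.grib2", raw);
    try {
      Grib2RecordScanner scanner = new Grib2RecordScanner(raf);
      while (scanner.hasNext()) {
        Grib2Record gr = scanner.next();
        // Pull whatever is needed from each record, e.g. the unpacked data.
        float[] data = gr.readData(raf);
        System.out.println("record with " + data.length + " points");
      }
    } finally {
      raf.close();
    }
  }
}
```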