Disable NetcdfFile.openInMemory() writing index file to disk #93
John

On Thu, Jan 8, 2015 at 8:28 AM, Harsha Veeramachaneni < wrote:

> Thanks, John. Actually I am trying to process all the records in parallel using Spark. The disk reads and writes add a lot of overhead when processing the small records. Thanks for the suggestion about Grib2RecordScanner. Are there any helper classes you can recommend that make it easy to parse the Grib2Record objects and extract variables, x, y, time, and values?
We don't officially support those low-level APIs, so you have to look at the source. Just look for other examples of use of the RecordScanners, e.g. GribIndex.

On Fri, Jan 9, 2015 at 2:19 PM, Harsha Veeramachaneni < wrote:
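Based on the Grib2RecordScanner suggestion above, a minimal sketch of scanning a GRIB-2 file record by record might look like the following. This is an illustration only: the class and method names (`Grib2RecordScanner`, `hasNext()`, `next()`, `readData()`) are taken from the netcdf-java GRIB module of that era and may differ between versions, so check them against the GribIndex source as recommended; the file name `big.grib2` is a placeholder.

```java
import ucar.nc2.grib.grib2.Grib2Record;
import ucar.nc2.grib.grib2.Grib2RecordScanner;
import ucar.unidata.io.RandomAccessFile;

public class ScanGrib2 {
  public static void main(String[] args) throws Exception {
    // Open the GRIB-2 file directly; no NetcdfFile wrapper, so no
    // on-disk index file should be created for each record.
    RandomAccessFile raf = new RandomAccessFile("big.grib2", "r");
    try {
      Grib2RecordScanner scanner = new Grib2RecordScanner(raf);
      while (scanner.hasNext()) {
        Grib2Record gr = scanner.next();
        // The PDS/GDS sections carry variable, level, time, and grid
        // metadata; readData() decodes the packed values for this record.
        float[] data = gr.readData(raf);
        System.out.println("record with " + data.length + " points");
      }
    } finally {
      raf.close();
    }
  }
}
```

Since this bypasses the CDM layer entirely, mapping parameter numbers to variable names and grid points to x/y coordinates is left to the caller, which is the trade-off of using the unsupported low-level API.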
I would like to sequentially read records from a large GRIB-2 file (too large to read all at once) and extract data for a particular variable. I use NetcdfFile.openInMemory(String, byte[]) to do that. However, because the call writes out an index file to disk even for small records, reading the whole file takes hours.
Is there any way to disable the index file creation? Or any way to circumvent openInMemory() by using some low-level methods?
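For reference, the per-record approach described above looks roughly like this sketch (the file names and variable names are illustrative; `NetcdfFile.openInMemory(String, byte[])` is a real netcdf-java entry point, and the index file written as a side effect of opening GRIB data this way is exactly what this issue reports):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import ucar.nc2.NetcdfFile;

public class OpenRecordInMemory {
  public static void main(String[] args) throws Exception {
    // One GRIB-2 record's bytes, e.g. sliced out of the large file
    // by the caller ("record.grib2" is a placeholder name).
    byte[] recordBytes = Files.readAllBytes(Paths.get("record.grib2"));

    // openInMemory treats the byte[] as a complete file; for GRIB
    // data this triggers creation of an index file on disk, which is
    // the overhead being reported here.
    try (NetcdfFile ncf = NetcdfFile.openInMemory("record.grib2", recordBytes)) {
      System.out.println(ncf.getVariables());
    }
  }
}
```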