Blosc Decompression Fails when chunk size is greater than the shape #228
Comments
Thanks for reporting. Unfortunately I don't think there is much we can do about this, since the error is raised in blosc.
I have looked into it in more detail and this particular code block seems to be the issue: Line 255 in f118d95.
Is there any reason for this particular modification?
I have a dataset where the chunk size is [1,1,1,1024,1024] but the actual shape of the data is [1,1,1,256,256]. I read the zarr spec and could not find anything that makes this an invalid setup; my .zarray file declares exactly this shape and chunk size. When trying to read this file using the z5py package, the read fails during decompression.
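For reference, a minimal reproduction sketch of this setup, assuming float64 data (consistent with the 8 bytes per element below) and the zarr-python v2-style API; the path "repro.zarr" and the dataset name "data" are illustrative, not taken from the original report:

```python
# Hypothetical reproduction: write a zarr array whose chunk shape is larger
# than the array shape with zarr-python, then try to read it with z5py.
import numpy as np
import zarr
import z5py
from numcodecs import Blosc

root = zarr.open_group("repro.zarr", mode="w")
ds = root.create_dataset(
    "data",
    shape=(1, 1, 1, 256, 256),      # logical array shape
    chunks=(1, 1, 1, 1024, 1024),   # chunk shape larger than the shape
    dtype="<f8",
    compressor=Blosc(),
)
ds[:] = np.random.rand(1, 1, 1, 256, 256)

f = z5py.File("repro.zarr")         # zarr format assumed to be inferred from the extension
print(f["data"][:])                 # expected to hit the blosc decompression error
```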
I debugged the code and found that the call to blosc_decompress_ctx in z5 returns -1. Further investigation inside blosc.c tells me that it fails due to a destination-size check: even though the actual destsize is 256*256*8 bytes, blosc expects it to be 1024*1024*8 bytes (the chunk shape being larger than the actual shape). At this point, I am not sure if there is a way to trick blosc into accepting this and doing the decompression.
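The size mismatch itself can be illustrated with the blosc codec from numcodecs (this only demonstrates the numbers involved, not the code path z5 takes):

```python
# A chunk compressed at its full chunk shape decompresses to the full chunk's
# byte count, regardless of how much of it the array shape actually covers.
import numpy as np
from numcodecs import Blosc

codec = Blosc()
full_chunk = np.zeros((1, 1, 1, 1024, 1024), dtype="<f8")  # padded chunk, as a writer stores it
compressed = codec.encode(full_chunk)

print(len(codec.decode(compressed)))  # 8388608 bytes == 1024 * 1024 * 8
print(256 * 256 * 8)                  # 524288 bytes: a buffer sized from the array shape
```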
Just to add some context, zarr-python and tensorstore seem to deal with this without an error and decompress correctly. I am not sure if there is an easy solution, but I wanted to report this behavior.
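As a sketch of one way a reader can cope (not necessarily what zarr-python or tensorstore do internally): decode into a buffer sized for the full chunk and slice out the region covered by the array shape.

```python
# "Decode the full chunk, then slice" strategy for chunks that extend past
# the array shape. Purely illustrative, using numcodecs' blosc codec.
import numpy as np
from numcodecs import Blosc

chunk_shape = (1, 1, 1, 1024, 1024)
array_shape = (1, 1, 1, 256, 256)
codec = Blosc()

# Simulate a stored chunk: the writer pads it to the full chunk shape.
stored = codec.encode(np.zeros(chunk_shape, dtype="<f8"))

# Decode at full chunk size, then keep only the part inside the array shape.
full = np.frombuffer(codec.decode(stored), dtype="<f8").reshape(chunk_shape)
valid = full[tuple(slice(0, s) for s in array_shape)]
print(valid.shape)  # (1, 1, 1, 256, 256)
```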