
cat_file with start and end of gzipped file does not work. #512

Open
racinmat opened this issue Dec 8, 2022 · 9 comments

@racinmat
Contributor

racinmat commented Dec 8, 2022

Reading a gzipped file using transcoding works when you use fs.open, but not when using fs.cat_file.
Here is an example that uploads two files, one plaintext and one gzipped; both are then read using open and then using cat_file:

This part works:

fs = gcsfs.GCSFileSystem(project='a')
a_file = 'same_path/a_test'
a_file_gz = 'same_path/a_test.gz'
with fs.open(a_file, 'wb') as f:
    f.write(b'abcd')
with fs.open(a_file_gz, 'wb', compression='gzip', fixed_key_metadata={'content_encoding': 'gzip'}) as f:
    f.write(b'abcd')
with fs.open(a_file, 'rb') as f:
    assert f.read() == b'abcd'
with fs.open(a_file_gz, 'rb') as f:
    assert f.read() == b'abcd'
assert bytes(fs.cat_file(a_file, 1, 3)) == b'bc'

This errors out:

assert bytes(fs.cat_file(a_file_gz, 1, 3, fixed_key_metadata={'content_encoding': 'gzip'})) == b'bc'
assert bytes(fs.cat_file(a_file_gz, 1, 3)) == b'bc'

throwing this error:

self = <StreamReader e=ClientPayloadError("400, message='Can not decode content-encoding: gzip'")>
n = -1

    async def read(self, n: int = -1) -> bytes:
        if self._exception is not None:
>           raise self._exception
E           aiohttp.client_exceptions.ClientPayloadError: 400, message='Can not decode content-encoding: gzip'

C:\tools\miniconda3\envs\filesystem-py39\lib\site-packages\aiohttp\streams.py:349: ClientPayloadError

My guess is that this happens because the header is not being passed in https://github.com/fsspec/gcsfs/blob/main/gcsfs/core.py#L859-L863

@martindurant
Member

with fs.open(a_file_gz, 'wb', compression='gzip', fixed_key_metadata={'content_encoding': 'gzip'}) as f:
    f.write(b'abcd')

This is not correct. The content encoding is not the same as the MIME type, which would be "application/gzip". If you want to use content encoding like this, then the appropriate compression is actually none.

I don't exactly follow what your code snippet is trying to achieve: what behaviour are you after?

@racinmat
Contributor Author

racinmat commented Dec 8, 2022

I am trying to use gzip transcoding: https://cloud.google.com/storage/docs/transcoding
The documentation there literally says Content-Encoding: gzip.

The code I used properly encodes the data into gzip format: on Google Cloud it is stored gzip-compressed, and when I download it, it is automatically decompressed, as the documentation states. So I'm not sure why it's incorrect when it's doing what it should.
When I look at the object in the bucket browser, it shows the correct encoding.
[screenshot: object metadata in the bucket browser showing Content-Encoding: gzip]

@martindurant
Member

Yes, but in that case, fsspec must not attempt to decompress it, because the transport library (aiohttp) should have done it already. Also note that the size reported for the file might be wrong.

@racinmat
Contributor Author

racinmat commented Dec 8, 2022

I know, but AFAIK fsspec does not attempt to decompress it: the compression='gzip' is only on the 'wb' open, because GCP needs to receive the data already compressed and does not compress it on its own. During 'rb' there is no compression; I am just adding the metadata, because without it the code throws an error. And apparently it works, because fsspec correctly obtains the decompressed data.

@racinmat racinmat changed the title cat_file of gzipped file does not work. cat_file with start and end of gzipped file does not work. Dec 9, 2022
@racinmat
Contributor Author

racinmat commented Dec 9, 2022

I found out that if I read the whole file, it works. The size of the gzipped file is 24 bytes.

assert fs.read_range(a_file_gz, 0, 23) == b'abcd'

But when I read only part of it, it does not work, and it looks like the server tried to decode the partial stream.
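That size can be checked locally with Python's gzip module (a sketch, independent of GCS: a gzip stream for b'abcd' is a 10-byte header, a 6-byte deflate payload, and an 8-byte trailer, matching the 24 bytes reported above):

```python
import gzip

# Reproduce locally what the upload snippet stored: gzip-compress b"abcd".
# mtime=0 just makes the header deterministic; it does not change the size.
compressed = gzip.compress(b"abcd", mtime=0)
print(len(compressed))  # 24 bytes, matching the object size noted above

# Decoding the *entire* compressed stream works, just like
# fs.read_range(a_file_gz, 0, 23) does.
assert gzip.decompress(compressed) == b"abcd"
```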

@racinmat
Contributor Author

racinmat commented Dec 9, 2022

I found the problem: it's in the headers. I can replicate the error with curl.

curl --location --request GET 'https://storage.googleapis.com/download/storage/v1/b/our-temp/o/tmp_bong%2Fa_test.gz?alt=media' \
--header '... \
--header 'Range: bytes=1-5' \
--header 'Accept-Encoding: gzip, deflate, br'

errors out, but when using only

curl --location --request GET 'https://storage.googleapis.com/download/storage/v1/b/our-temp/o/tmp_bong%2Fa_test.gz?alt=media' \
--header '... \
--header 'Range: bytes=1-5' \
--header 'Accept-Encoding: deflate, br'

without the gzip, it works and returns the whole contents, as described in the docs.
And there is no way to pass a custom Accept-Encoding header to the underlying GET call.
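For illustration only (this helper is hypothetical, not a gcsfs or aiohttp API): the working curl invocation amounts to building headers like these, assuming "identity" is an acceptable way to opt out of gzip transfer decoding, as omitting gzip from Accept-Encoding did above:

```python
def range_request_headers(start: int, end: int) -> dict:
    """Headers matching the *working* curl call: a byte range plus an
    Accept-Encoding that omits gzip, so the server does not try to
    decode the partial compressed stream.  Hypothetical helper, shown
    only to make the header combination explicit."""
    return {
        "Range": f"bytes={start}-{end}",
        # "identity" explicitly requests no content-coding on the wire
        "Accept-Encoding": "identity",
    }
```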

@martindurant
Member

This is not unexpected. You can only get specific offsets within the bytestream after decompression; this is a limitation of gzip. I expect the server is actually returning the byte range you request out of the original compressed data, but that range is no longer a valid gzip stream, and so causes the error.
If you save your data as gzip, you cannot expect random access to the uncompressed data.
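This limitation is easy to reproduce locally with Python's gzip module (a sketch, independent of GCS):

```python
import gzip

compressed = gzip.compress(b"abcd")

# The full stream decodes fine.
assert gzip.decompress(compressed) == b"abcd"

# A byte range cut out of the middle of the compressed stream is not
# itself a valid gzip stream (the magic bytes are gone), which mirrors
# the server-side failure on the ranged request.
try:
    gzip.decompress(compressed[1:5])
except gzip.BadGzipFile:
    print("bytes 1-5 of the compressed stream are not a valid gzip stream")
```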

@racinmat
Contributor Author

racinmat commented Dec 9, 2022

The server decompresses the data and returns the whole range. The GCP documentation I linked states that the whole file's contents are returned, decoded.

@martindurant
Member

Well, in the first place the documentation says that you shouldn't ever do this; in the second, that the header key will be ignored; and thirdly, we have found that the documentation is incorrect. I don't think there's anything gcsfs can do about this.
