.tar.gz files downloaded with Chrome get gunzipped #22
Comments
Some good advice is not to use Google's spyware as a browser. Other than that, I've been using wget to download things for ages (and a download manager on Windows before moving to Linux), because browsers' download functionality tends to be broken. It is annoying, though, when a download site doesn't give you a direct link to copy and paste into wget. Anyway, you're right about the site's configuration, though downloading it works fine in my browser.
Noting #215 is a duplicate of this issue.
I guess I'll leave this here, since this is the open issue, although there's somewhat more useful data in the report at #215.

Possibly not the issue: Content-Type/Content-Encoding

I don't think the issue is necessarily the Content-Type or Content-Encoding headers themselves.

The problem: no Content-Disposition

The real problem, I think, is the lack of a Content-Disposition header indicating that the file is to be downloaded as-is. The GitHub server's response (see above) includes one, which makes all the difference.
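The effect being described can be sketched in Python (the bytes here are a made-up stand-in for the real tarball): a client that honors "Content-Encoding: gzip" treats the gzip layer as transport compression and inflates the body before saving, so the file on disk no longer matches the .tar.gz the signature was made over.

```python
import gzip
import hashlib

# Hypothetical stand-in bytes; the real file would be e.g. less-639.tar.gz.
tar_bytes = b"ustar-style fake tar payload"
targz_bytes = gzip.compress(tar_bytes)   # what actually sits on the server

# A client honoring "Content-Encoding: gzip" inflates the body before
# writing it to disk:
saved_by_browser = gzip.decompress(targz_bytes)

assert saved_by_browser == tar_bytes     # the inflated tar, not the .tar.gz
assert hashlib.sha256(saved_by_browser).digest() != hashlib.sha256(targz_bytes).digest()
print("saved file differs from the signed .tar.gz")
```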
Setting Content-Disposition

This StackOverflow answer indicates you can use Apache's mod_headers module to force an attachment disposition (their example targets .mp3 files):

<IfModule mod_headers.c>
<FilesMatch "\.(mp3|MP3)$">
ForceType audio/mpeg
Header set Content-Disposition "attachment"
Allow from all
</FilesMatch>
</IfModule>

Including the filename

The Apache docs don't make this obvious, but from Apache 2.4.8 onward, named capture groups in a FilesMatch pattern are written to the environment as MATCH_* variables, so the filename can be included in the header:

<IfModule mod_headers.c>
<FilesMatch "^(?<file>[^/]+\.tar\.gz)$">
Header set Content-Disposition "attachment; filename=%{env:MATCH_FILE}"
Allow from all
</FilesMatch>
</IfModule>

(Edit: fixed my example above.)

(Edit 2: Also, if the issue is the Content-Encoding header, it can be unset in the same block:)

<IfModule mod_headers.c>
<FilesMatch "^(?<file>[^/]+\.tar\.gz)$">
Header set Content-Disposition "attachment; filename=%{env:MATCH_FILE}"
Header unset Content-Encoding
Allow from all
</FilesMatch>
</IfModule>
I tried this but had a couple of problems. Using your script verbatim, when I try to download a tar.gz file, Apache gives a 500 error. I changed the first Header line to just

Header set Content-Disposition "attachment"

and that seems to work. Possibly my server is using an old version of Apache. With the above change I can see the Content-Disposition header in the HTTP response. The unset line doesn't seem to work, however; I still see the Content-Encoding header in the response.

I can't really tell whether this has any effect on the issue, because I don't have Chrome. Edge does not seem to uncompress the file regardless of whether the Content-Disposition header is present or not. Perhaps someone with Chrome who does see this issue can confirm whether it's still happening with the Content-Disposition header present.
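For checking which headers a client actually sees without touching the real server, here is a minimal self-contained sketch using only the Python standard library (the filename and payload are made up; this is a local stand-in, not the greenwoodsoftware.com configuration):

```python
import http.server
import threading
import urllib.request

# Local stand-in server that sends the Content-Disposition header
# the thread is discussing.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"\x1f\x8b" + b"\x00" * 8          # fake gzip-ish payload
        self.send_response(200)
        self.send_header("Content-Type", "application/x-gzip")
        self.send_header("Content-Disposition",
                         'attachment; filename="less-639.tar.gz"')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                 # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/less-639.tar.gz" % server.server_port
with urllib.request.urlopen(url) as resp:
    cd = resp.headers.get("Content-Disposition")
    ce = resp.headers.get("Content-Encoding")
server.shutdown()

print(cd)   # attachment; filename="less-639.tar.gz"
print(ce)   # None -- no Content-Encoding in this response
```

The same inspection against the live site can be done with any HTTP client that shows response headers.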
I just tried to download https://www.greenwoodsoftware.com/less/less-639-beta.tar.gz with the latest version of Chrome and still got the uncompressed version. These were the response headers:
I think @ferdnyc may have the right approach of either adding the filename in the Content-Disposition header or removing the Content-Encoding header.
Unfortunately I have not found a way to remove the Content-Encoding header. I've spent several hours trying various suggestions but nothing seems to work. I don't know much about the internals of Apache, but since the Content-Encoding header appears after the Content-Disposition header that I recently added with mod_headers, I suspect it's being added by Apache after the .htaccess file is processed.
[I realise this is not an issue with the less software, but it is somewhat related]
Downloading .tar.gz files from https://www.greenwoodsoftware.com/less/download.html with Chrome causes them to be saved uncompressed, i.e. the file written to disk is the plain tar archive even though it keeps the .tar.gz name.
This then obviously causes issues when trying to verify the .tar.gz files with the GnuPG signature.
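On the receiving end, the damage can at least be detected before running gpg --verify: gzip data starts with the magic bytes 0x1f 0x8b, while a silently inflated file starts with the tar header instead. A small Python check (the file names here are made up for the demo):

```python
import gzip
import tempfile
from pathlib import Path

def is_gzip(path):
    """Return True if the file starts with the gzip magic bytes."""
    with open(path, "rb") as f:
        return f.read(2) == b"\x1f\x8b"

# Demo with throwaway files standing in for a downloaded less tarball:
with tempfile.TemporaryDirectory() as d:
    good = Path(d) / "less-639.tar.gz"
    good.write_bytes(gzip.compress(b"fake tar data"))   # intact download
    bad = Path(d) / "inflated.tar.gz"
    bad.write_bytes(b"fake tar data")                   # what Chrome saved
    print(is_gzip(good), is_gzip(bad))                  # True False
```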
It seems that the www.greenwoodsoftware.com webserver might be misconfigured with regard to the Content-Encoding header:
https://superuser.com/questions/940605/chromium-prevent-unpacking-tar-gz
You might want to fix that.
Using wget or curl to download the files works as expected.