
Add support for true in-memory zip handling #85

Open
oppiansteve opened this issue Jan 31, 2022 · 3 comments

Comments

@oppiansteve

Description
I'm using php-zip in an AWS Lambda with a zip in S3 via the seekable stream wrapper - which works great!

However, the use of php://temp means the Lambda's small storage area fills up quickly and causes problems.

As I'm using a Lambda with 10 GB of RAM (storage is <500 MB), it would be great if we could optionally support php://memory instead.

I'm likely to fork and have a go at this tomorrow, but I've no idea which way of implementing it would be acceptable for a PR (if any).

Example
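
Roughly what my setup looks like (a sketch only: the bucket/key names are placeholders, and it assumes the AWS SDK's `registerStreamWrapper()` and php-zip's `openFromStream()`):

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use PhpZip\ZipFile;

// Register the AWS SDK's s3:// stream wrapper.
$s3 = new S3Client(['region' => 'eu-west-1', 'version' => 'latest']);
$s3->registerStreamWrapper();

// Open the S3 object with the seekable option so php-zip can seek
// to the central directory without downloading the whole object.
$context = stream_context_create(['s3' => ['seekable' => true]]);
$handle  = fopen('s3://my-bucket/huge-archive.zip', 'rb', false, $context);

$zipFile = new ZipFile();
$zipFile->openFromStream($handle);
foreach ($zipFile->getListFiles() as $name) {
    echo $name, PHP_EOL;
}
$zipFile->close();
```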

@oppiansteve
Author

oppiansteve commented Feb 1, 2022

Perhaps a better fix is to allow setting the maxmemory for the temp streams, e.g. php://temp/maxmemory:<bytes>
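
For instance (a minimal runnable sketch; the limit value is illustrative):

```php
<?php
// php://temp keeps data in memory until it exceeds the maxmemory
// threshold (default 2 MiB), then spills over to a temporary file.
// The threshold is set per-stream in the URI, in bytes:
$limit  = 512 * 1024 * 1024; // stay in memory up to 512 MiB
$stream = fopen('php://temp/maxmemory:' . $limit, 'r+b');

fwrite($stream, str_repeat('a', 1024));
rewind($stream);
$data = stream_get_contents($stream); // read back what was written
fclose($stream);
```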

@oppiansteve
Author

I did try this and it seemed to work in itself, but it showed that the AWS S3 (seeking) stream wrapper isn't written very well - instead of keeping a page-cache LRU, it buffers the whole file up to the point seeked to - so it's useless for my purposes.

Still, having control over php-zip's temp-memory cache level is quite nice.
(And if I ever get round to rewriting the S3 stream wrapper, it would be useful.)
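
The page-cache LRU idea would look roughly like this (an illustrative sketch only, not the AWS SDK's wrapper; the class and method names are made up):

```php
<?php
// Keep at most $maxPages fixed-size pages of a remote file in memory,
// evicting the least recently used page instead of buffering everything
// read so far (which is what the current S3 wrapper does).
final class LruPageCache
{
    /** @var array<int, string> pageIndex => page bytes, oldest first */
    private array $pages = [];
    private int $pageSize;
    private int $maxPages;

    public function __construct(int $pageSize, int $maxPages)
    {
        $this->pageSize = $pageSize;
        $this->maxPages = $maxPages;
    }

    /** @param callable(int $offset, int $length): string $fetch remote read */
    public function read(int $offset, int $length, callable $fetch): string
    {
        $out = '';
        $end = $offset + $length;
        for ($pos = $offset; $pos < $end;) {
            $page = intdiv($pos, $this->pageSize);
            if (!isset($this->pages[$page])) {
                if (count($this->pages) >= $this->maxPages) {
                    // Evict the least recently used page (first key).
                    unset($this->pages[array_key_first($this->pages)]);
                }
                $this->pages[$page] = $fetch($page * $this->pageSize, $this->pageSize);
            } else {
                // Move the hit page to the most-recently-used position.
                $data = $this->pages[$page];
                unset($this->pages[$page]);
                $this->pages[$page] = $data;
            }
            $inPage = $pos - $page * $this->pageSize;
            $take   = min($this->pageSize - $inPage, $end - $pos);
            $out   .= substr($this->pages[$page], $inPage, $take);
            $pos   += $take;
        }
        return $out;
    }
}
```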

For reference, my experiment is available here - kaldor@0dc0e8c

@oppiansteve
Author

I was initially going to use php://memory, but php://temp with an adjustable memory limit was more flexible and achieved the same results for me. I'm happy with my php-zip changes (for my purpose); I was just scuppered by the AWS S3 stream wrapper in getting end-to-end in-memory-only access to huge zip files in S3.

(I did use php://memory for my output writing to S3 with multipart upload, and I think that kept it all in memory - if strings are passed, Guzzle also uses php://temp, so it will use the filesystem too for bigger content.)
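
The output side looked roughly like this (a sketch; it assumes the AWS SDK's `MultipartUploader` and php-zip's `saveAsStream()`, with placeholder bucket/key names):

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use PhpZip\ZipFile;

$s3 = new S3Client(['region' => 'eu-west-1', 'version' => 'latest']);

// Write the archive to a pure in-memory stream (never spills to disk)...
$out = fopen('php://memory', 'r+b');
$zipFile = new ZipFile();
$zipFile->addFromString('hello.txt', 'hello');
$zipFile->saveAsStream($out);
rewind($out);

// ...then hand the stream to a multipart upload so it is streamed to S3.
$uploader = new MultipartUploader($s3, $out, [
    'bucket' => 'my-bucket',
    'key'    => 'output.zip',
]);
$uploader->upload();
```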

Perhaps, instead of passing a maxmemory value, I could just pass a stream URI and use it throughout php-zip - which means I could pass in a URI with my own custom protocol and have more control. However, maxmemory did what I needed for the experiment.
