After #14707 I observe increased visual jitter in singleplayer mode #14737
Comments
I guess the limit really is too low, but I think 1GB is too high, because it means that in the worst case exactly that much memory is wasted before a cleanup is attempted. I propose:
To be pedantic... that memory is not wasted. It will be used for future allocations; it's just not returned to the OS. After a while (days?) the memory might be truly wasted if it is too fragmented to be used again. A GB of deallocated memory is thus not a problem at all. Also, we're not measuring memory used or wasted, but memory deallocated - and that memory is soon used again. That all said, as long as it is rate-limited it's fine. (In my case it deallocates 1GB every second - and that's an empty game, just with a large viewing_range.)
@sfan5 The time-gated approach (with about 64 MiB of headroom after trim) might already suffice, without manual allocation counters. Wouldn't the trim function be kind of a no-op in case the unused pool is already < 64 MiB?
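For illustration, a minimal sketch of what such a time-gated trim could look like on glibc, where malloc_trim(pad) keeps pad bytes of free heap as headroom. The interval, the 64 MiB pad, and the function name are assumptions for this sketch, not the actual Minetest code:

```cpp
// Hypothetical sketch of a time-gated trim (not the actual Minetest
// code). malloc_trim() is glibc-specific; the pad argument keeps that
// many bytes of free heap as headroom, so in principle the call does
// little work when less than ~64 MiB is actually unused.
#include <malloc.h> // glibc: malloc_trim()
#include <chrono>

void maybe_trim_heap()
{
	using namespace std::chrono;
	constexpr auto TRIM_INTERVAL = minutes(1);    // illustrative
	constexpr size_t TRIM_PAD = 64 * 1024 * 1024; // 64 MiB headroom

	static steady_clock::time_point last_trim = steady_clock::now();
	const auto now = steady_clock::now();
	if (now - last_trim < TRIM_INTERVAL)
		return;
	last_trim = now;
	malloc_trim(TRIM_PAD); // return free heap pages above the pad to the OS
}
```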
I went with the deallocation counter solution because I hoped it would be sort of "self-regulating" and not require extra handholding (like the rate limit we're now talking about). My only concern with a pure timed solution is that while
It is wasted in the sense that Minetest is claiming the memory and it cannot be used by other applications all while being completely unused. In the worst case there could even be an entirely avoidable OOM situation.
Not sure if fragmentation is relevant here at all.
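To make the combination concrete, here is a minimal sketch of a deallocation counter plus a rate limit, assuming a glibc target. The name note_deallocated, the thresholds, and the interval are hypothetical; this is not the code from #14707:

```cpp
// Hypothetical sketch combining a deallocation counter with a rate
// limit (not the actual #14707 implementation): trim only after a
// threshold of freed bytes has accumulated, and at most once per
// MIN_TRIM_INTERVAL, which bounds how often a frame can stall.
#include <malloc.h> // glibc: malloc_trim()
#include <atomic>
#include <chrono>
#include <cstddef>

static std::atomic<std::size_t> g_deallocated_bytes{0};

// Call sites that free large buffers (e.g. mesh data) would report
// the freed size here; the name is made up for this sketch.
void note_deallocated(std::size_t bytes)
{
	g_deallocated_bytes.fetch_add(bytes, std::memory_order_relaxed);
}

// Called once per frame/step from the main loop.
void maybe_trim_heap()
{
	using namespace std::chrono;
	constexpr std::size_t TRIM_THRESHOLD = 256 * 1024 * 1024; // illustrative
	constexpr auto MIN_TRIM_INTERVAL = seconds(5);            // illustrative

	if (g_deallocated_bytes.load(std::memory_order_relaxed) < TRIM_THRESHOLD)
		return;

	static steady_clock::time_point last_trim;
	const auto now = steady_clock::now();
	if (now - last_trim < MIN_TRIM_INTERVAL)
		return; // rate limit: trim at most once per interval, even under heavy churn

	last_trim = now;
	g_deallocated_bytes.store(0, std::memory_order_relaxed);
	malloc_trim(0); // return unused heap pages to the OS
}
```

With the counter alone, the trim frequency scales with allocation churn (hence the 8-10 calls per second reported in the summary); the added time gate caps it regardless of churn.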
Minetest version
Irrlicht device
No response
Operating system and version
Linux, Fedora 39
CPU model
No response
GPU model
NVidia GeForce GTX 1650 Mobile / Max-Q
Active renderer
No response
Summary
I noticed visual jitter. My viewing_range is 1000 and client_mesh_chunk is 8, so this is expected to produce quite a lot of garbage as new blocks are loaded and meshes are rebuilt.
When I add logging before malloc_trim I see that it is called about 8-10x per second, corresponding with the jitter.
When I set MEMORY_TRIM_THRESHOLD to 1GB, this is reduced to about once per second and the jitter mostly goes away (i.e. there is just one hitch per second).
128MB is probably way too small, or we should rate-limit this to only once every few seconds.
The point of #14707 was not to return freed memory in realtime but to prevent infinite accumulation, so returning memory to the OS once per minute should absolutely be sufficient. And the accumulation only becomes a problem when the memory is so fragmented that it cannot be reused.
This feels a little like an optimization in search of a problem. We could just as well call malloc_trim every 1-5 minutes or so (or once an hour).
Steps to reproduce
Set viewing_range to 1000 and client_mesh_chunk to 8, then move around. Notice the jitter.
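For reference, the reproduction settings as a minetest.conf snippet (the values come straight from the report above):

```
viewing_range = 1000
client_mesh_chunk = 8
```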