
After #14707 I observe increased visual jitter, in singleplayer mode #14737

Closed
lhofhansl opened this issue Jun 8, 2024 · 5 comments · Fixed by #14744

Labels
Bug, @ Client / Audiovisuals, Performance, Regression
Milestone
5.9.0

Comments

lhofhansl commented Jun 8, 2024

Minetest version

Latest from main - after #14707

Irrlicht device

No response

Operating system and version

Linux, Fedora 39

CPU model

No response

GPU model

NVidia GeForce GTX 1650 Mobile / Max-Q

Active renderer

No response

Summary

I noticed visual jitter. My viewing_range is 1000 and client_mesh_chunk is 8, so this is expected to produce quite a lot of garbage as new blocks are loaded and meshes are rebuilt.

When I add logging before malloc_trim I see that it is called about 8-10 times per second, corresponding with the jitter.
When I raise MEMORY_TRIM_THRESHOLD to 1 GB, this drops to about once per second and the jitter goes away.

128 MB is probably way too small, or we should rate-limit this to only once every few seconds.
The point of #14707 was not to return freed memory in real time, but to prevent unbounded accumulation, so returning memory to the OS once per minute should absolutely be sufficient. The accumulation only becomes a problem once the memory is so fragmented that it cannot be reused.

This feels a little like an optimization in search of a problem. We could just as well call malloc_trim every 1-5 minutes or so (or once an hour).
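For reference, the mechanism in question is roughly a deallocation counter that triggers malloc_trim once a threshold is exceeded. A minimal sketch of that pattern, assuming glibc; the identifiers and hook point below are illustrative, not the actual code from #14707:

```cpp
// Sketch of a threshold-triggered trim (illustrative names, not Minetest's code).
#include <atomic>
#include <cstddef>
#if defined(__linux__) && defined(__GLIBC__)
#include <malloc.h>
#endif

static constexpr std::size_t MEMORY_TRIM_THRESHOLD = 128 * 1024 * 1024; // 128 MB
static std::atomic<std::size_t> g_freed_bytes{0};

// Called wherever large buffers (e.g. mesh data) are released.
void note_deallocation(std::size_t bytes)
{
	if (g_freed_bytes.fetch_add(bytes) + bytes < MEMORY_TRIM_THRESHOLD)
		return;
	g_freed_bytes = 0;
#if defined(__linux__) && defined(__GLIBC__)
	// Return free heap pages to the OS; with heavy mesh rebuilds this is the
	// call that ends up firing 8-10 times per second.
	malloc_trim(0);
#endif
}
```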

Steps to reproduce

Set viewing_range to 1000 and client_mesh_chunk to 8, then move around. Notice the jitter.

lhofhansl added the Unconfirmed bug, Bug, @ Client / Audiovisuals and Performance labels and removed the Unconfirmed bug label on Jun 8, 2024

lhofhansl (author) commented

@sfan5 FYI.

I propose either reverting #14707 or significantly reducing the frequency of the malloc_trim calls; after all, the original issue talks about servers being run for days.

lhofhansl added the Regression label on Jun 8, 2024
sfan5 commented Jun 8, 2024

I guess the limit really is too low, but I think 1 GB is too high, because it means that in the worst case exactly that much memory is wasted before a cleanup is even attempted.

I propose:

  1. raising the limit to 256M
  2. rate-limiting malloc_trim calls to once per minute
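
A rough sketch of how the two could be combined, assuming glibc's malloc_trim; the function name and counter handling are placeholders, only the 256 MB threshold and the one-minute interval come from the proposal above:

```cpp
// Combined threshold + rate limit: trim only when at least 256 MB has been
// freed AND at least one minute has passed since the last trim.
#include <chrono>
#include <cstddef>
#if defined(__linux__) && defined(__GLIBC__)
#include <malloc.h>
#endif

void maybe_trim(std::size_t freed_since_last_trim)
{
	using namespace std::chrono;
	static steady_clock::time_point last_trim = steady_clock::now();
	constexpr std::size_t threshold = 256 * 1024 * 1024; // proposal (1)
	constexpr auto min_interval = minutes(1);            // proposal (2)

	if (freed_since_last_trim < threshold)
		return;
	const auto now = steady_clock::now();
	if (now - last_trim < min_interval)
		return;
	last_trim = now;
#if defined(__linux__) && defined(__GLIBC__)
	malloc_trim(0);
#endif
}
```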

Zughy added this to the 5.9.0 milestone on Jun 8, 2024

lhofhansl commented Jun 8, 2024

To be pedantic... That memory is not wasted. It will be used for future allocations; it's just not returned to the OS. After a while (days?) the memory might be truly wasted if it is too fragmented to be reused.

A GB of deallocated memory is thus not a problem at all.

Also, we're not measuring memory used or wasted, but memory deallocated, and that memory is soon used again.
What we want to avoid is a large peak of unusable, fragmented memory, or a short-lived giant allocation.

That all said, as long as it is rate-limited it's fine.
I expect that with a limit of 256 MB we will call malloc_trim exactly once per minute.

(In my case it deallocates 1 GB every second, and that's an empty game, just with a large viewing_range.)

SmallJoker commented Jun 8, 2024

I propose:

@sfan5 The time-gated approach (with about 64 MiB of headroom after the trim) might already suffice, without manual allocation counters. Wouldn't the trim function be more or less a no-op if the unused pool is already < 64 MiB?
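
A minimal sketch of such a purely time-gated trim; the once-per-minute interval and the function name are placeholders, and the 64 MiB headroom is passed via malloc_trim's pad argument:

```cpp
// Purely time-gated variant: trim periodically, keeping ~64 MiB of headroom
// so the allocator does not immediately have to ask the OS for memory again.
#include <chrono>
#if defined(__linux__) && defined(__GLIBC__)
#include <malloc.h>
#endif

void periodic_trim()
{
	using namespace std::chrono;
	static steady_clock::time_point last_trim = steady_clock::now();
	const auto now = steady_clock::now();
	if (now - last_trim < minutes(1))
		return;
	last_trim = now;
#if defined(__linux__) && defined(__GLIBC__)
	// The pad argument keeps this much free space at the top of the heap
	// untrimmed; if little more than that is unused, the call is close to a no-op.
	malloc_trim(64 * 1024 * 1024);
#endif
}
```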

sfan5 commented Jun 9, 2024

I went with the deallocation counter solution because I hoped it would be sort of "self-regulating" and not require extra handholding (like the rate-limit we're now talking about).
But it's indeed imperfect.

My only concern with a purely timed solution is that, while malloc_trim should just be a no-op in that case, there is still a risk that calling it unnecessarily introduces jitter.

To be pedantic... That memory is not wasted. It will be used for future allocations. It's just not returned to OS.

It is wasted in the sense that Minetest is claiming the memory and it cannot be used by other applications, all while being completely unused. In the worst case there could even be an entirely avoidable OOM situation.

After a while (days?) the memory might be truly wasted if it is too fragmented to be used again. [...]
What we want to avoid is large peak of unusable, fragmented memory or a short giant allocation.

Not sure if fragmentation is relevant here at all.
As far as my testing goes, the memory usage just stays at its peak (unless the game session is ended). In "normal" server usage this might only become a problem after days, but you can reproduce it in 5 minutes just fine if you do the right operations.
