Dragonfly not rejecting inserts after reaching maxmemory #2984
it looks like
I can confirm I have had the same issue. Dragonfly behaves very strangely when reaching maxmemory. It always accepts writes, and in many cases a subsequent read returns the data, so the write was actually stored. But in many other cases the write is not stored at all. So my health check, which wrote a key, read the value back, and compared it with what was written, passed in most cases, yet for our users things still did not work. It might depend on the size of the data written: if the store is very close to the limit, some writes are stored and some are not, but neither case returns an error on SET. I don't know if this information helps, but I hope so.
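A minimal sketch of why that kind of round-trip health check is unreliable here (the `FlakyStore` class and its drop rate are hypothetical, purely to model the reported symptom of some writes being silently lost near maxmemory):

```python
import random

class FlakyStore:
    """Toy model of a store near maxmemory: some SETs are silently
    dropped (the caller still sees success), others are stored."""
    def __init__(self, drop_rate=0.5, seed=42):
        self._data = {}
        self._rng = random.Random(seed)
        self.drop_rate = drop_rate

    def set(self, key, value):
        # Near the limit: the write may be dropped, but no error is raised.
        if self._rng.random() >= self.drop_rate:
            self._data[key] = value
        return "OK"

    def get(self, key):
        return self._data.get(key)

def health_check(store):
    # Write a key, read it back, compare -- the check from the comment above.
    store.set("healthcheck", "ping")
    return store.get("healthcheck") == "ping"

store = FlakyStore()
passed = any(health_check(store) for _ in range(5))

# Meanwhile, a noticeable fraction of ordinary user writes is lost:
lost = sum(1 for i in range(100)
           if store.set(f"k{i}", "v") and store.get(f"k{i}") is None)
print(passed, lost)
```

Because the health-check key itself usually lands, the check reports healthy even while a large share of other writes vanish without any error.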
@cyppe if you use
Yeah, thanks, but I don't use it only as a cache, so auto-eviction is not an option here. It would be better if Dragonfly rejected write operations once maxmemory is reached, the way Redis does with noeviction.
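For comparison, a rough sketch of the Redis-style noeviction contract being asked for (the `NoEvictionStore` class and its byte accounting are hypothetical, not Dragonfly code): once used memory would exceed maxmemory, writes fail loudly with an OOM error while reads keep working.

```python
class NoEvictionStore:
    """Toy model of maxmemory + noeviction: reject writes with an
    OOM error instead of silently dropping them."""
    def __init__(self, maxmemory):
        self.maxmemory = maxmemory
        self.used = 0
        self._data = {}

    def set(self, key, value):
        cost = len(key) + len(value)  # crude size estimate
        if self.used + cost > self.maxmemory:
            # Redis under noeviction answers writes with an OOM error
            # rather than accepting and dropping them.
            raise MemoryError(
                "OOM command not allowed when used memory > 'maxmemory'.")
        self._data[key] = value
        self.used += cost

    def get(self, key):
        return self._data.get(key)

store = NoEvictionStore(maxmemory=32)
store.set("a", "x" * 10)       # fits: 11 bytes used
store.set("b", "y" * 10)       # fits: 22 bytes used
try:
    store.set("c", "z" * 20)   # would need 43 > 32 bytes -> rejected
    rejected = False
except MemoryError:
    rejected = True
print(rejected, store.get("a"))
```

The key point is that the client gets a hard error on SET, so a health check or application can react, instead of the write silently disappearing.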
How to reproduce:

Run Dragonfly:

```shell
./dragonfly --alsologtostderr --dbfilename= --port=6379 --maxmemory=350MB --proactor_threads=1
```

Run memtier:

```shell
memtier_benchmark -c 2 -t 4 --pipeline=30 --hide-histogram --test-time=3000 --distinct-client-seed --expiry-range=100-10000 --data-size-range=3000-4000
```

Expected: Dragonfly rejects inserts after reaching 350MB. Actual: memtier continues to run, inserting more entries.