
Subscriber OOM crash | suggestion to implement broker incoming message size limit #3053

Closed
domasgim opened this issue May 17, 2024 · 2 comments



domasgim commented May 17, 2024

I have noticed that an MQTT 3.1.1 client receiving a large payload (~200 MB) triggers the OOM killer for its process if the device has limited memory. With even larger messages (~240 MB) the client handles the situation and returns MOSQ_ERR_NOMEM as expected.

My setup is an OpenWrt-based ARM device with roughly 100 MB of RAM. The client is a custom application written in C that uses the mosquitto library, and it is connected to an external MQTT broker which relays these messages.

Client mosquitto version v2.0.17

I understand that the broker can be configured to limit the maximum payload size, but I would still expect the mosquitto library not to trigger the OOM killer on the client and instead to report a MOSQ_ERR_NOMEM error. I suppose I could also use MQTT 5 and set the Maximum Packet Size property (https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901049), but it would be great to solve this with an MQTT 3.1.1 client.
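
For reference, this is roughly how the MQTT 5 route would look with libmosquitto, as far as I can tell from the 2.x headers; a minimal sketch only, with the broker address and the 1 MB limit as placeholders:

#include <mosquitto.h>
#include <mqtt_protocol.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    mosquitto_lib_init();

    struct mosquitto *mosq = mosquitto_new("size-limited-client", true, NULL);
    if(!mosq) return 1;

    /* Switch the client to MQTT v5 so CONNECT properties can be sent. */
    mosquitto_int_option(mosq, MOSQ_OPT_PROTOCOL_VERSION, MQTT_PROTOCOL_V5);

    /* Maximum Packet Size property: the broker must not send this client
     * packets larger than the advertised value (1 MB here). */
    mosquitto_property *props = NULL;
    mosquitto_property_add_int32(&props, MQTT_PROP_MAXIMUM_PACKET_SIZE, 1024*1024);

    int rc = mosquitto_connect_bind_v5(mosq, "broker.example.com", 1883, 60, NULL, props);
    mosquitto_property_free_all(&props);
    if(rc != MOSQ_ERR_SUCCESS){
        fprintf(stderr, "connect failed: %s\n", mosquitto_strerror(rc));
    }

    mosquitto_destroy(mosq);
    mosquitto_lib_cleanup();
    return rc;
}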

I have been digging around the source code and noticed the following lines; maybe they're related?

mosquitto/lib/packet_mosq.c, lines 508 to 510 in 15292b2:

#else
/* FIXME - client case for incoming message received from broker too large */
#endif

I am also willing to patch the library for my particular use case, since it will only run on my specific device, even if the patch is not portable for everyone. I would appreciate some guidelines on where the issue may occur; I'm guessing the mosquitto library tries to allocate too much memory when a packet arrives, though it's not easy to track down these places when reading through the source code.


domasgim commented May 17, 2024

This is most likely not a mosquitto library issue but a platform-dependent one; for some reason mosquitto__malloc does not return an error for certain payload sizes on my particular device:

mosq->in_packet.payload = mosquitto__malloc(mosq->in_packet.remaining_length*sizeof(uint8_t));

Afterwards the buffer is filled until the OOM killer is invoked during the following call:

read_length = net__read(mosq, &(mosq->in_packet.payload[mosq->in_packet.pos]), mosq->in_packet.to_process);

Still, it would be great to have a configurable value to limit the maximum size of incoming messages from the broker, even for non-MQTT 5 clients; a rough sketch of the kind of guard I mean is below.
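
This is only a hypothetical fragment sitting right before the allocation quoted above, not existing mosquitto code, and MOSQ_MAX_INCOMING_BYTES is a made-up compile-time constant rather than an existing option:

/* Hypothetical fragment: refuse to allocate for incoming packets larger
 * than a device-specific limit. */
#define MOSQ_MAX_INCOMING_BYTES (8UL*1024*1024) /* 8 MB, tune per device */

if(mosq->in_packet.remaining_length > MOSQ_MAX_INCOMING_BYTES){
    /* Behave as if the allocation had failed instead of letting the
     * OOM killer take down the process later on. */
    return MOSQ_ERR_NOMEM;
}
mosq->in_packet.payload = mosquitto__malloc(mosq->in_packet.remaining_length*sizeof(uint8_t));
if(!mosq->in_packet.payload){
    return MOSQ_ERR_NOMEM;
}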

@domasgim domasgim reopened this May 17, 2024
@domasgim domasgim changed the title Subscriber OOM crash Subscriber OOM crash, suggestion to implement client's incoming message limit May 17, 2024
@domasgim domasgim changed the title Subscriber OOM crash, suggestion to implement client's incoming message limit Subscriber OOM crash, suggestion to implement broker incoming message size limit May 17, 2024
@domasgim domasgim changed the title Subscriber OOM crash, suggestion to implement broker incoming message size limit Subscriber OOM crash | suggestion to implement broker incoming message size limit May 17, 2024

karlp commented May 17, 2024

I suspect (pretty sure, actually) you're seeing VM overcommit effects, which can be especially obvious on smaller platforms like OpenWrt. With big enough messages the malloc will fail and mosquitto can report the error as you desire; with messages that are "just right" (wrong), you'll be out of memory but still inside the VM overcommit ratio, so the allocation succeeds and the OOM killer only fires later. You can probably play with those tunings to confirm this. See https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
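
A minimal standalone demo of the effect (nothing mosquitto-specific; the size and the outcome depend on the device's RAM and the vm.overcommit_* settings):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = 200UL*1024*1024; /* ~200 MB, adjust to the device's RAM */
    unsigned char *buf = malloc(size);
    if(buf == NULL){
        /* The easy case: malloc fails and the error can be reported. */
        fprintf(stderr, "malloc failed\n");
        return 1;
    }
    /* With overcommit enabled, malloc can succeed even though the memory
     * is not really there; the OOM killer only fires once the pages are
     * actually touched, e.g. while net__read() fills the payload buffer. */
    memset(buf, 0xAA, size);
    puts("survived - the allocation was actually backed by memory");
    free(buf);
    return 0;
}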
