
[LI-HOTFIX] catch throwable instead of exception from user callback on completeFutureAndFireCallbacks #100

Open

wants to merge 1 commit into base: 2.4-li

Conversation


@xiowu0 xiowu0 commented Dec 3, 2020

…n completeFutureAndFireCallbacks

TICKET = KAFKA-10806
LI_DESCRIPTION = When the Kafka producer tries to complete or abort a batch, it invokes the user callback. However, "completeFutureAndFireCallbacks" only catches exceptions thrown from the user callback, not all throwables. An uncaught throwable can prevent the batch from being freed.

EXIT_CRITERIA = TICKET [KAFKA-10806]
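
For context, here is a minimal, self-contained sketch (illustrative only, not the actual Kafka source; the class and callback names are made up) of why a catch (Exception) handler does not stop an Error thrown from a user callback, so the cleanup that would free the batch never runs:

```java
// Minimal demonstration that catch (Exception) does not catch an Error
// thrown from a callback. Names here are illustrative, not Kafka's.
public class CatchDemo {
    interface UserCallback { void onCompletion(); }

    public static void main(String[] args) {
        UserCallback callback = () -> { throw new OutOfMemoryError("simulated"); };
        try {
            try {
                callback.onCompletion();                              // user code throws an Error
            } catch (Exception e) {
                System.out.println("caught as Exception: " + e);      // never reached for an Error
            }
            System.out.println("cleanup that would free the batch");  // also never reached
        } catch (Throwable t) {
            System.out.println("escaped the Exception handler: " + t);
        }
    }
}
```

Running this prints only the "escaped the Exception handler" line: neither the Exception handler nor the cleanup statement is reached, which mirrors how the batch is never freed in the producer.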

@@ -229,8 +229,8 @@ private void completeFutureAndFireCallbacks(long baseOffset, long logAppendTime,
                     if (thunk.callback != null)
                         thunk.callback.onCompletion(null, exception);
                 }
-            } catch (Exception e) {
-                log.error("Error executing user-provided callback on message for topic-partition '{}'", topicPartition, e);
+            } catch (Throwable t) {


It's generally a bad idea to catch throwable. Why is this needed?

Author


The Venice team hit an issue where their producer was stuck waiting for memory. The actual error happened several days earlier (the producer kept functioning after the initial error): a java.lang.OutOfMemoryError was thrown from the user callback and was not caught by the exception handlers. As a result, the batch was not cleaned up, leaving the Kafka producer in an unstable state. Since the Kafka client has no control over the user callback, and the callback is executed on the Kafka client thread, I think it is reasonable to isolate user errors from Kafka threads. What do you think?


The only safe action to take when an OOM happens is to exit.

Author


The best approach is for users to handle these errors in the callback, which the Venice team is trying to fix. On our side, I think catching throwables so we don't leave the Kafka client in an unstable state has some benefits. For example, if users eventually detect the error by other means, they can still gracefully flush pending records and shut down the Kafka producer. In addition, some errors may not require closing/restarting the producer.
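
For illustration only (not part of this patch; SafeCallback and its field names are hypothetical), this is roughly what handling the error on the user side could look like, so that nothing escapes into the producer's I/O thread:

```java
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical user-side wrapper: keeps any Throwable thrown by the real
// callback from escaping into the producer's I/O thread, and records it so
// the application can detect it by other means and react.
public class SafeCallback implements Callback {
    private final Callback delegate;
    private volatile Throwable firstError;   // surfaced to the application elsewhere

    public SafeCallback(Callback delegate) {
        this.delegate = delegate;
    }

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        try {
            delegate.onCompletion(metadata, exception);
        } catch (Throwable t) {
            if (firstError == null)
                firstError = t;
            // Swallowed here so batch cleanup in the client can proceed; whether
            // that is safe after an OutOfMemoryError is exactly the debate above.
        }
    }

    public Throwable firstError() {
        return firstError;
    }
}
```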


No, they can't gracefully shut down.

Anything they need to do probably requires memory allocation which is now likely to fail. We also can't make any guarantees about the behavior of our clients after OOM for the same reasons.


The JDK documentation itself says you should not be doing this. See:

https://docs.oracle.com/javase/7/docs/api/java/lang/Error.html

An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch. Most such errors are abnormal conditions. The ThreadDeath error, though a "normal" condition, is also a subclass of Error because most applications should not try to catch it.


@radai-rosenblatt radai-rosenblatt Dec 7, 2020


There's no way to recover from OutOfMemoryError; it's not guaranteed that the OOM will be thrown on the same code/thread that's consuming the memory.

The only thing you can do on OOM is die, hence I doubt it's worth catching. Worse, catching it would just mask the OOM.

Perhaps it's better if whatever needs to be cleaned up is cleaned up in a finally block, instead of catching throwables?
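
To make that concrete, a rough sketch of the finally-based variant (not a real patch; it reuses identifiers from the hunk above and omits the rest of the method):

```java
// Sketch only: the callback loop keeps its narrow catch (Exception), but the
// cleanup step (here, completing the produce future) moves into a finally
// block so it runs even if an Error escapes from user code.
try {
    for (Thunk thunk : thunks) {
        try {
            if (thunk.callback != null)
                thunk.callback.onCompletion(null, exception);  // user code
        } catch (Exception e) {
            log.error("Error executing user-provided callback on message for topic-partition '{}'",
                    topicPartition, e);
        }
    }
} finally {
    produceFuture.done();   // runs even if an Error propagated out of the loop
}
```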

Author

@xiowu0 xiowu0 Dec 7, 2020


OK, the issue here is that the error is invisible to users since the callback is executed on the Kafka client's thread. I am fine with not swallowing the error (throwable), but we might need a better way to propagate it to the user so that they can act appropriately. In addition, the error happens while executing user code, where any error could occur, and the Kafka client is not responsible for handling those errors.


Throw it out of the next poll() call?

Author

@xiowu0 xiowu0 Dec 7, 2020


This is the Kafka producer, not the consumer. A simple solution is to catch both exceptions and throwables: log the error if the producer sees an exception, but close the sender thread (unrecoverable) if it sees a throwable. However, the downside of this approach is that the user still doesn't see the throwable. Carrying the throwable over to the next send() call looks like a hack to me.
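
A rough sketch of that idea (purely illustrative; markSenderFailed is a made-up hook, the other identifiers are reused from the patch context, and this is not what this PR implements):

```java
// Sketch: log Exceptions from the user callback as before, but treat any other
// Throwable (e.g. OutOfMemoryError) as fatal to the sender thread, since the
// client's state can no longer be trusted afterwards.
try {
    if (thunk.callback != null)
        thunk.callback.onCompletion(null, exception);
} catch (Exception e) {
    // user bug, but recoverable from the client's point of view
    log.error("Error executing user-provided callback on message for topic-partition '{}'",
            topicPartition, e);
} catch (Throwable t) {
    log.error("Fatal error executing user-provided callback on message for topic-partition '{}'",
            topicPartition, t);
    markSenderFailed(t);   // hypothetical hook that closes the sender thread
}
```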
