vector protobuf encoding uses 3x CPU compared to JSON encoding #21863
Unanswered
nmiculinic asked this question in Q&A
Replies: 2 comments 7 replies
-
Fwiw, I had issues getting the timestamp into a format Vector can serialize by default, which is why I used a unix_ns uint64.
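The workaround above boils down to converting a wall-clock timestamp into whole nanoseconds since the Unix epoch so it fits a uint64 field. A minimal sketch of that arithmetic in Python (this is an illustration only, not code from the thread; in Vector itself this kind of conversion would typically live in a remap transform):

```python
# Hypothetical illustration: turn an RFC 3339 timestamp string into
# unix nanoseconds, suitable for a uint64 `unix_ns` protobuf field.
from datetime import datetime

def to_unix_ns(ts: str) -> int:
    # fromisoformat in older Pythons does not accept a trailing "Z",
    # so normalize it to an explicit UTC offset first.
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    # Combine whole seconds and sub-second microseconds with integer
    # math to avoid float rounding at nanosecond magnitudes.
    return int(dt.timestamp()) * 1_000_000_000 + dt.microsecond * 1_000

print(to_unix_ns("2024-01-01T00:00:00Z"))  # 1704067200000000000
```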
-
Thank you for providing all this so quickly. I think we will need a more fine-grained breakdown of what's happening on the encoding side to understand whether this is something we can easily improve. We rely on an internal framework for performance analysis, but we haven't added any protobuf experiments. If you are interested, the existing cases are here: https://github.com/vectordotdev/vector/tree/master/regression/cases
-
Image: timberio/vector:0.42.0-debian
I was very surprised when I switched the Kafka encoding from JSON to protobuf and saw a 4.5x increase in CPU usage. Config:
with proto being (simple string key/value fields):
So yeah, I'm not sure what's happening here, or how to get performance in line with plain schemaless JSON encoding.