ERROR Error when sending message to topic XXX with key: null, value: X bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback) #100
Hi, please see: https://kafka.apache.org/0100/quickstart.html#quickstart_send
@wurstmeister I'm getting the same error but I'm pretty much following the guide verbatim.
Then in another tab
My consumer looks like this:
My zookeeper logs look like this during the exchange:
And my docker-compose ps:
Hi,
I still encounter this problem, but I think this is a Kubernetes issue.
My OS is Mac OS X. The output of broker-list:
I've attached my broker [logs](url).
What's the output of that command? It might be worth starting with a clean environment.
It looks like it doesn't match:
➜ kafka-docker git:(master) ✗ ./start-kafka-shell.sh 192.168.99.104 192.168.99.104:2181
I did the docker-compose rm and recreated, and everything appears to be working as expected now. Thanks!
Not working for me.
@BlackRider97 the more information you provide, the more likely I will be able to help you.
@wurstmeister The issue got fixed when I deleted the data from Zookeeper.
@wurstmeister does this imply that I should use static (configured) broker IDs in production?
I'm having a similar problem here. I pretty much followed the tutorial steps for testing the running Kafka container, but I get this error the next time I restart the Docker container and try to run the producer:
When the Docker container gets a new container ID, Kafka seems to create a new broker ID, so the Kafka topic from before is no longer accessible. If you create a brand-new topic in the running container, that topic is accessible as long as the container lives, but no longer once the container is restarted. Therefore, I think this has something to do with the --no-recreate option in docker-compose: keeping the same container name/ID pair lets Kafka keep the same broker_id in its kafka-log meta.properties.
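The symptom described here can be sketched with the file Kafka actually uses to persist its identity. This is a minimal illustration with a throwaway directory standing in for Kafka's log dir; the path and the id value 1001 are assumptions for the demo, not taken from any comment in this thread:

```shell
# Kafka persists its broker id in meta.properties under the log dir.
# If the container's data dir is recreated, a fresh id is generated,
# orphaning the topics that were owned by the old id.
logdir=$(mktemp -d)                # stand-in for the real kafka-logs dir
printf 'version=0\nbroker.id=1001\n' > "$logdir/meta.properties"
# On startup Kafka reads the persisted id back, roughly like this:
grep '^broker.id=' "$logdir/meta.properties" | cut -d= -f2
# prints: 1001
rm -rf "$logdir"
```

Reusing the container (e.g. docker-compose up --no-recreate) keeps this file intact, which is why the old topics stay reachable in that case.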
+1
+1
@wurstmeister I am facing a similar issue.
When I describe the topic:
And the Docker logs show:
Debug steps followed:
+1. After everything is up, I try to produce a basic message and receive an error. When I run describe:
+1 |
+1 |
+1
It started working when I removed Zookeeper's data dir.
[cloudera@quickstart kafka_2.11-0.10.2.0]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
How do I resolve it?
[cloudera@quickstart kafka_2.11-0.10.2.0]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
I'm also getting the same error. @lckanth007 were you able to fix it? I'm just doing the quick start tutorial in the documentation, running on my local Windows 10 machine. Somebody help! 😕
@keerthivasan-r
I'm getting the same error using Windows 8.1 and docker-toolbox.
@hao5ang How is that possible? Zookeeper has to register the Kafka brokers, right? Doesn't that clear the registered broker list in Zookeeper? Please clarify.
Adding to that, I'm getting the problem even outside the Docker environment, i.e. a plain vanilla Kafka deployment.
@keerthivasan-r I'm so sorry, my exception cannot be reproduced. I am not familiar with Kafka.
kafka-console-producer.sh doesn't support setting a key (it sends key=null), so use the API to test instead.
You may have set log.cleanup.policy=compact. A compacted topic requires every message to have both a key and a value.
I am getting the same error in a plain vanilla download (without Docker) when I follow the quick start guide for the latest version of Kafka. I switched back to 2.11-0.9.0.1 since I wanted to get working quickly. If anybody has a resolution for this, please do reply...
I was able to overcome the issue by modifying: I don't have any idea what it does, but it works...
I had the exact same problem as @n1207n mentioned, but @Shabirmean's fix did not help me. I have Kafka deployed in Kubernetes: one Kafka broker and 3 Zookeeper nodes. I created a topic and was able to produce/consume messages. After a restart of the Kafka container, I could not post to existing topics anymore, but newly created topics worked fine.
I found out that the old topic was bound to broker 1001, but the new broker had the id 1006 (after some restarts). What I did was reset the broker.id back to 1001, and then I was able to produce messages on the old topic again. I set the broker.id back by setting the environment variable KAFKA_BROKER_ID to 1001. I'm not sure if this is the correct way to fix the issue, but it worked.
Edit: You'll have to set the environment variable KAFKA_RESERVED_BROKER_MAX_ID to 1001 to be allowed to set the broker id to 1001.
I have tried @MoJo2600's solution and it works! Just set KAFKA_BROKER_ID in docker-compose.yml.
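For reference, a sketch of what that pin might look like in docker-compose.yml. The service names, image, and Zookeeper address here are assumptions, not taken from this thread; KAFKA_RESERVED_BROKER_MAX_ID is only needed because the pinned id 1001 exceeds the default cap of 1000:

```yaml
services:
  kafka:
    image: wurstmeister/kafka                   # image discussed in this repo
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181   # assumed zookeeper service name
      KAFKA_BROKER_ID: 1001                     # pin the id across restarts
      KAFKA_RESERVED_BROKER_MAX_ID: 1001        # raise the default 1000 cap
```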
I have the same issue while trying to produce and consume in two different terminals.
@smehtaji try the public IP address instead of localhost.
@litongyu @MoJo2600 I'm also on OS X. I tried adding KAFKA_BROKER_ID=1001 and it barfed with...
This is interesting because the output of kafka-topics is....
Any advice on how to move forward? I am trying to solve this error.
Edit: also, the configuration value is reserved.broker.max.id = 1000.
@steverhoades The error message means that your broker id 1001 exceeds reserved.broker.max.id (default 1000), so just set KAFKA_BROKER_ID=999. Good luck!
@litongyu thanks, I banged my head on this one for hours and finally gave up. I used https://hub.docker.com/r/ches/kafka/ instead and was up and running in minutes. Still not entirely sure where I went wrong with wurstmeister's Docker image...
@steverhoades You can set the maximum allowed broker id via the KAFKA_RESERVED_BROKER_MAX_ID environment variable. I updated my answer above; maybe it will help someone else :)
Works for me.
I finally made it work in Kubernetes v1.6 with this image, but with minor modifications.
Found the issue in my case. I'm using:
$ kafka-topics --create --zookeeper "zookeeper-0.zookeeper" --partitions 1 --replication-factor 1 --topic topicA
Created topic "topicA".
$ kafka-topics --describe --zookeeper "zookeeper-0.zookeeper" --topic topicA
Topic:topicA PartitionCount:1 ReplicationFactor:1 Configs:
Topic: topicA Partition: 0 Leader: 1 Replicas: 1 Isr: 1
$ echo hello | kafka-console-producer --broker-list "localhost:9092" --topic "topicA"
$ kafka-topics --create --zookeeper "zookeeper-0.zookeeper" --partitions 1 --replication-factor 1 --topic topicB --config cleanup.policy=compact
Created topic "topicB".
$ kafka-topics --describe --zookeeper "zookeeper-0.zookeeper" --topic topicB
Topic:topicB PartitionCount:1 ReplicationFactor:1 Configs:cleanup.policy=compact
Topic: topicB Partition: 0 Leader: 1 Replicas: 1 Isr: 1
$ echo hello | kafka-console-producer --broker-list "localhost:9092" --topic "topicB"
[2018-01-13 10:46:39,295] WARN [Producer clientId=console-producer] Got error produce response with correlation id 3 on topic-partition topicB-0, retrying (2 attempts left). Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2018-01-13 10:46:39,397] WARN [Producer clientId=console-producer] Got error produce response with correlation id 4 on topic-partition topicB-0, retrying (1 attempts left). Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2018-01-13 10:46:39,503] WARN [Producer clientId=console-producer] Got error produce response with correlation id 5 on topic-partition topicB-0, retrying (0 attempts left). Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2018-01-13 10:46:39,611] ERROR Error when sending message to topic topicB with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, or is otherwise corrupt. |
Ah, stupid me: when compaction is turned on, a key is required, so the correct way to fill such a topic from bash is:
echo "hello:world" | kafka-console-producer \
--broker-list "localhost:9092" \
--topic "topicB" \
--property "parse.key=true" \
--property "key.separator=:"
Just found @irisrain's comment about that in this thread, thank you!
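What parse.key and key.separator do can be sketched in plain shell: each input line is split on the first separator into a record key and a value. The variable names below are illustrative only, not the console producer's actual internals:

```shell
line="hello:world"
sep=":"
key="${line%%"$sep"*}"     # everything before the first separator
value="${line#*"$sep"}"    # everything after it
echo "key=$key value=$value"
# prints: key=hello value=world
```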
I solved this problem after editing /etc/hosts.
Possible solutions for this problem:
Setting the following fixed this issue! Full working code at https://github.com/orefalo/docker-kafka-ssl
What does this mean? It seems like you are assuming people have a clue what you mean by "...so use api test...". Can you elaborate on this?
I received the same error as below. I'm connecting to the Kafka brokers from another VM.
Can confirm that updating /etc/hosts does appear to resolve the issue. Thanks for the recommendation!
We tried everything, but no luck.
Finally, in our case there was an issue with the Kafka cluster itself: between 2 servers we were unable to fetch metadata. When we changed the target Kafka cluster to our dev box, it worked fine.
Working fine for me after removing the compact policy.
I run a local zookeeper server and run the Docker image as:
docker run -e "KAFKA_ADVERTISED_PORT=9092" -e "KAFKA_ZOOKEEPER_CONNECT=localhost:2181" -p 9092:9092 --net=host -d wurstmeister/kafka
Then I tried
kafka logs:
Describe topic "test":
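Many of the fixes in this thread (public IP instead of localhost, /etc/hosts edits, the KAFKA_ADVERTISED_* variables) come down to the same thing: the broker must advertise an address that clients can actually reach. A hedged sketch of the relevant server.properties entries, reusing 192.168.99.104 from an earlier comment as a stand-in docker-machine IP:

```properties
# Address the broker binds to inside the container:
listeners=PLAINTEXT://0.0.0.0:9092
# Address the broker hands out to clients in metadata responses;
# it must be resolvable and reachable from where the producer runs:
advertised.listeners=PLAINTEXT://192.168.99.104:9092
```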