
ERROR Error when sending message to topic XXX with key: null, value: X bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback) #100

Closed
harryge00 opened this issue Jun 8, 2016 · 52 comments


@harryge00

harryge00 commented Jun 8, 2016

I run a local ZooKeeper server and run the Docker image as:
docker run -e "KAFKA_ADVERTISED_PORT=9092" -e "KAFKA_ZOOKEEPER_CONNECT=localhost:2181" -p 9092:9092 --net=host -d wurstmeister/kafka
Then I tried

# bin/kafka-console-producer.sh --broker-list localhost:2181 --topic test
# ERROR Error when sending message to topic test with key: null, value: 6 bytes with error: Batch Expired (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

kafka logs:

[2016-06-08 09:53:07,783] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:os.version=4.4.0-22-generic (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,783] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,784] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@1c93084c (org.apache.zookeeper.ZooKeeper)
[2016-06-08 09:53:07,793] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2016-06-08 09:53:07,795] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-06-08 09:53:07,834] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2016-06-08 09:53:07,851] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1552f558b520012, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2016-06-08 09:53:07,852] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2016-06-08 09:53:07,978] INFO Log directory '/kafka/kafka-logs-pao-H110M-TS' not found, creating it. (kafka.log.LogManager)
[2016-06-08 09:53:08,009] INFO Loading logs. (kafka.log.LogManager)
[2016-06-08 09:53:08,030] INFO Logs loading complete. (kafka.log.LogManager)
[2016-06-08 09:53:08,109] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2016-06-08 09:53:08,110] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2016-06-08 09:53:08,113] WARN No meta.properties file under dir /kafka/kafka-logs-pao-H110M-TS/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2016-06-08 09:53:08,205] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2016-06-08 09:53:08,207] INFO [Socket Server on Broker 1003], Started 1 acceptor threads (kafka.network.SocketServer)
[2016-06-08 09:53:08,251] INFO [ExpirationReaper-1003], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2016-06-08 09:53:08,252] INFO [ExpirationReaper-1003], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2016-06-08 09:53:08,277] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2016-06-08 09:53:08,293] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2016-06-08 09:53:08,294] INFO 1003 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2016-06-08 09:53:08,388] INFO New leader is 1003 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2016-06-08 09:53:08,389] INFO [ExpirationReaper-1003], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2016-06-08 09:53:08,390] INFO [ExpirationReaper-1003], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2016-06-08 09:53:08,398] INFO [GroupCoordinator 1003]: Starting up. (kafka.coordinator.GroupCoordinator)
[2016-06-08 09:53:08,399] INFO [GroupCoordinator 1003]: Startup complete. (kafka.coordinator.GroupCoordinator)
[2016-06-08 09:53:08,400] INFO [Group Metadata Manager on Broker 1003]: Removed 0 expired offsets in 5 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2016-06-08 09:53:08,410] INFO [ThrottledRequestReaper-Produce], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2016-06-08 09:53:08,410] INFO [ThrottledRequestReaper-Fetch], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2016-06-08 09:53:08,414] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2016-06-08 09:53:08,427] INFO Creating /brokers/ids/1003 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2016-06-08 09:53:08,450] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2016-06-08 09:53:08,451] INFO Registered broker 1003 at path /brokers/ids/1003 with addresses: PLAINTEXT -> EndPoint(pao-H110M-TS,9092,PLAINTEXT) (kafka.utils.ZkUtils)
[2016-06-08 09:53:08,451] WARN No meta.properties file under dir /kafka/kafka-logs-pao-H110M-TS/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2016-06-08 09:53:08,545] INFO Kafka version : 0.10.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2016-06-08 09:53:08,545] INFO Kafka commitId : b8642491e78c5a13 (org.apache.kafka.common.utils.AppInfoParser)
[2016-06-08 09:53:08,545] INFO [Kafka Server 1003], started (kafka.server.KafkaServer)
[2016-06-08 10:03:08,395] INFO [Group Metadata Manager on Broker 1003]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)

Describe topic "test":

Topic:test  PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: test Partition: 0    Leader: 0   Replicas: 0 Isr: 0
@wurstmeister
Owner

Hi,

your kafka-console-producer.sh command line does not look correct:
the --broker-list argument should point to a broker, not ZooKeeper as in your example.

please see: https://kafka.apache.org/0100/quickstart.html#quickstart_send
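For reference, the quickstart's producer invocation targets the broker port, not ZooKeeper. A minimal sketch, assuming a broker listening on localhost:9092 (adjust host and port to your setup):

```shell
# --broker-list must point at the Kafka broker (default port 9092),
# not at ZooKeeper (port 2181). localhost:9092 is an assumed local setup.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
```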

@ssherwood

@wurstmeister I'm getting the same error but I'm pretty much following the guide verbatim.

docker-compose up

Then in another tab

./start-kafka-shell.sh 192.168.99.104 192.168.99.104:2181
bash-4.3# $KAFKA_HOME/bin/kafka-console-producer.sh --topic=topic --broker-list=`broker-list.sh`
abc
[2016-06-10 14:08:56,280] ERROR Error when sending message to topic topic with key: null, value:   3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for topic-3

My consumer looks like this:

./start-kafka-shell.sh 192.168.99.104 192.168.99.104:2181
bash-4.3# $KAFKA_HOME/bin/kafka-console-consumer.sh --topic=topic --zookeeper=$ZK

My zookeeper logs look like this during the exchange:

zookeeper_1  | 2016-06-10 14:10:15,537 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] -     Established session 0x1553aa1d9950006 with negotiated timeout 6000 for client /172.18.0.1:51044
zookeeper_1  | 2016-06-10 14:10:15,598 [myid:] - INFO  [ProcessThread(sid:0   cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing   sessionid:0x1553aa1d9950006 type:create cxid:0x2 zxid:0x1c2 txntype:-1 reqpath:n/a Error   Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers
zookeeper_1  | 2016-06-10 14:10:15,877 [myid:] - INFO  [ProcessThread(sid:0   cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing   sessionid:0x1553aa1d9950006 type:create cxid:0x1c zxid:0x1c6 txntype:-1 reqpath:n/a Error   Path:/consumers/console-consumer-39205/owners/topic Error:KeeperErrorCode = NoNode for   /consumers/console-consumer-39205/owners/topic
zookeeper_1  | 2016-06-10 14:10:15,879 [myid:] - INFO  [ProcessThread(sid:0   cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing   sessionid:0x1553aa1d9950006 type:create cxid:0x1d zxid:0x1c7 txntype:-1 reqpath:n/a Error   Path:/consumers/console-consumer-39205/owners Error:KeeperErrorCode = NoNode for   /consumers/console-consumer-39205/owners

And my docker-compose ps:

docker-compose ps
         Name                        Command               State                         Ports
---------------------------------------------------------------------------------------------------------------------
kafkadocker_kafka_1       start-kafka.sh                   Up      0.0.0.0:32777->9092/tcp
kafkadocker_zookeeper_1   /bin/sh -c /usr/sbin/sshd  ...   Up      0.0.0.0:2181->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp

@wurstmeister
Owner

Hi,

  • what's the output of `broker-list.sh`?
  • do you have any output in the broker log?
  • which OS are you using?

@harryge00
Author

harryge00 commented Jun 11, 2016

I still encounter this problem, but I think this is a Kubernetes issue.

@ssherwood

ssherwood commented Jun 13, 2016

@wurstmeister

My OS is Mac OSX

The output of broker-list:

bash-4.3# broker-list.sh
192.168.99.104:32778

I've attached my broker logs: [kafka.txt](https://github.com/wurstmeister/kafka-docker/files/312014/kafka.txt)

@wurstmeister
Owner

@ssherwood

what's the output of $KAFKA_HOME/bin/kafka-topics.sh --describe --topic topic --zookeeper $ZK,
and does the leader match the ID of the broker (based on your log that's 1010)? If this is not the case you will see this behaviour.

It might be worth starting with a clean environment (docker-compose rm)

@ssherwood

It looks like it doesn't match:

➜ kafka-docker git:(master) ✗ ./start-kafka-shell.sh 192.168.99.104 192.168.99.104:2181
bash-4.3# $KAFKA_HOME/bin/kafka-topics.sh --describe --topic topic --zookeeper $ZK
Topic:topic PartitionCount:4 ReplicationFactor:2 Configs:
Topic: topic Partition: 0 Leader: 1003 Replicas: 1004,1003 Isr: 1003
Topic: topic Partition: 1 Leader: 1003 Replicas: 1003,1004 Isr: 1003
Topic: topic Partition: 2 Leader: 1003 Replicas: 1004,1003 Isr: 1003
Topic: topic Partition: 3 Leader: 1003 Replicas: 1003,1004 Isr: 1003

I did the docker-compose rm and recreated and everything appears to be working now as expected. Thanks!

@BlackRider97

BlackRider97 commented Jun 24, 2016

Not working for me.
Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

bash-4.3# bin/kafka-topics.sh --describe --zookeeper zookeeper:2181/kafka --topic test
Topic:test  PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: analytics-logs   Partition: 0    Leader: 1001    Replicas: 1001  Isr: 1001

docker-compose rm also did not work

@wurstmeister
Owner

@BlackRider97 the more information you provide the more likely I will be able to help you.
e.g. steps to reproduce, logs, environment, full error message, ...

@BlackRider97

@wurstmeister The issue got fixed when I deleted the data from ZooKeeper.
I had tried to upgrade from 0.9 to 0.10 by simply updating the Docker image.

@quodt

quodt commented Aug 17, 2016

@wurstmeister does this imply that I should use static (configured) broker IDs in production?

@n1207n

n1207n commented Sep 2, 2016

I'm having a similar problem here. I pretty much followed the tutorial steps for testing out the running Kafka container, but I get this error the next time I restart the Docker container and try to run the producer:

org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for <TOPIC_NAME>-3

When the Docker container gets a new container id, Kafka seems to create a new broker id, so the Kafka topic from the past is no longer accessible. If you create a brand new topic in the running container, that topic is accessible as long as the container lives, but no longer once the container is restarted.

Therefore, I think this has something to do with the --no-recreate option in docker-compose, which keeps the same container name/id pair so that Kafka keeps the same broker id in its kafka-logs meta.properties.
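One way to avoid the regenerated broker id entirely (rather than relying on --no-recreate) is to pin it in the compose file. A sketch of a docker-compose.yml fragment; the service name and values here are assumptions, not from a tested setup:

```yaml
# Hypothetical docker-compose.yml fragment: pinning the broker id so a
# recreated container re-registers as the same broker in ZooKeeper.
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092"
  environment:
    KAFKA_BROKER_ID: 1                      # fixed id instead of auto-generated
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```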

@Tolsi

Tolsi commented Sep 20, 2016

+1

1 similar comment
@onexdrk

onexdrk commented Oct 7, 2016

+1

@sudharshankakumanu

sudharshankakumanu commented Dec 1, 2016

@wurstmeister
Hi,

I am facing a similar issue:

[2016-12-01 20:27:44,849] WARN Error while fetching metadata with correlation id 0 : {topic02=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

When i describe the topic:

Topic:topic02	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: topic02	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001

And the Docker logs shows

Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2016-12-01 21:55:13,988] INFO 1001 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2016-12-01 21:55:14,154] INFO [ExpirationReaper-1001], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2016-12-01 21:55:14,160] INFO [ExpirationReaper-1001], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2016-12-01 21:55:14,162] INFO [ExpirationReaper-1001], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2016-12-01 21:55:14,220] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.GroupCoordinator)
[2016-12-01 21:55:14,270] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.GroupCoordinator)
[2016-12-01 21:55:14,275] INFO [Group Metadata Manager on Broker 1001]: Removed 0 expired offsets in 2 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2016-12-01 21:55:14,351] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2016-12-01 21:55:14,476] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2016-12-01 21:55:14,482] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2016-12-01 21:55:14,507] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: PLAINTEXT -> EndPoint(52.53.216.107,32771,PLAINTEXT) (kafka.utils.ZkUtils)
[2016-12-01 21:55:14,509] WARN No meta.properties file under dir /kafka/kafka-logs-ab0d02b4ca06/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2016-12-01 21:55:14,640] INFO New leader is 1001 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2016-12-01 21:55:14,735] INFO Kafka version : 0.10.1.0 (org.apache.kafka.common.utils.AppInfoParser)
[2016-12-01 21:55:14,736] INFO Kafka commitId : 3402a74efb23d1d4 (org.apache.kafka.common.utils.AppInfoParser)
[2016-12-01 21:55:14,738] INFO [Kafka Server 1001], started (kafka.server.KafkaServer)
bash-4.3# ./broker-list.sh 
:32771
bash-4.3# docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                                NAMES
ab0d02b4ca06        wurstmeister/kafka       "start-kafka.sh"         12 minutes ago      Up 12 minutes       0.0.0.0:32771->9092/tcp                              ec2user_kafka_1
e74c49b8003f        wurstmeister/zookeeper   "/bin/sh -c '/usr/sbi"   12 minutes ago      Up 12 minutes       22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp   ec2user_zookeeper_1

Debug steps followed:

  • Created a brand new topic after starting Docker.
  • Cleaned up the environment and even tried reinstalling everything from scratch on a different EC2 instance.
  • Weird thing is that the same setup was running just fine for the past 2 weeks.

@avolochenko

+1
Having the exact same error.
I'm running macOS 10.12.3 with Docker 1.13.1-rc2-beta41 (15300)
docker-compose template: https://github.com/confluentinc/cp-docker-images/blob/3.1.x/examples/cp-all-in-one/docker-compose.yml

After everything is up, I try to produce a basic message:
zk_host=localhost
zk_port=2181
broker_host=localhost
broker_port=9092
bin/kafka-avro-console-producer --broker-list ${broker_host}:${broker_port} --topic TestTopic1 --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
{"f1": "value1"}

I receive this error:
ERROR Error when sending message to topic TestTopic1 with key: null, value: 12 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback:47)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for TestTopic1-0 due to 1526 ms has passed since batch creation plus linger time

When I run describe:
bin/kafka-topics --describe --topic TestTopic1 --zookeeper ${zk_host}:${zk_port}
Topic:TestTopic1 PartitionCount:1 ReplicationFactor:1 Configs:
Topic: TestTopic1 Partition: 0 Leader: 1 Replicas: 1 Isr: 1

@karthik-ir

karthik-ir commented Feb 19, 2017

+1
And it works if I provide MACHINE_IP:PORT, comma separated, for the broker list.
Example: --broker-list 192.168.0.104:32770,192.168.0.104:32769 .......

@aiakym

aiakym commented Mar 6, 2017

+1

1 similar comment
@weisd

weisd commented Apr 10, 2017

+1

@kandeshvari

It started working when I removed ZooKeeper's data dir /var/lib/zookeeper/version-2.

@lckanth007

[cloudera@quickstart kafka_2.11-0.10.2.0]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
hi
[2017-04-13 18:13:17,793] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1504 ms has passed since batch creation plus linger time
^C[cloudera@quickstart kafka_2.11-0.10.2.0]$

@lckanth007

lckanth007 commented Apr 14, 2017

How do I resolve it?
I followed the steps in https://kafka.apache.org/quickstart on my Cloudera quickstart VM.

[cloudera@quickstart kafka_2.11-0.10.2.0]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
hi
[2017-04-13 18:13:17,793] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1504 ms has passed since batch creation plus linger time
^C[cloudera@quickstart kafka_2.11-0.10.2.0]$

@keerthivasan-r

I'm also getting the same error. @lckanth007 were you able to fix it? I'm just doing the quickstart tutorial in the documentation, running on my local Windows 10 machine. Somebody help! 😕

@hao5ang

hao5ang commented Apr 18, 2017

@keerthivasan-r
I use Kafka and ZooKeeper in Docker, and I got an org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms exception.
I just left Kafka alive and restarted ZooKeeper. After that, everything went well.

@javierholguera

I'm getting the same error using Windows 8.1 and docker-toolbox.

@keerthivasan-r

@hao5ang How is that possible? ZooKeeper has to register the Kafka brokers, right? Doesn't that clear the registered Kafka broker list in ZooKeeper? Please clarify.

@keerthivasan-r

Adding to that, I'm getting the problem even outside a Docker environment, i.e. a plain vanilla Kafka deployment.

@hao5ang

hao5ang commented Apr 21, 2017

@keerthivasan-r I'm so sorry, my exception cannot be reproduced. I am not familiar with Kafka.

@irisrain

kafka-console-producer.sh doesn't support key=null here, so test via the producer API instead.

@irisrain

You may have set log.cleanup.policy=compact. Compacted topics require both a key and a value, and kafka-console-producer.sh sends key=null by default, so test via the producer API instead.

@Shabirmean

I am getting the same error with a plain vanilla download (without Docker) when I follow the quickstart guide for the latest version of Kafka. I switched back to 2.11-0.9.0.1 since I wanted to get working quickly.

If anybody has a resolution for this please do reply...

@Shabirmean

I was able to overcome the issue by modifying the
listeners=PLAINTEXT://hostname:9092 property in the server.properties file to
listeners=PLAINTEXT://0.0.0.0:9092

I don't have any idea what it does, but it works...
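Roughly what that change does: `listeners` is the socket the broker binds to, while `advertised.listeners` (or the older advertised.host.name) is the address registered in ZooKeeper and handed to clients. A hedged server.properties sketch; the hostname below is a placeholder:

```properties
# Bind on all interfaces so connections from outside the container/VM are accepted.
listeners=PLAINTEXT://0.0.0.0:9092
# What clients are told to connect to; must be resolvable/reachable from the client side.
advertised.listeners=PLAINTEXT://your.reachable.hostname:9092
```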

@MoJo2600

MoJo2600 commented Jun 30, 2017

I had the exact same problem as @n1207n mentioned, but @Shabirmean's fix did not help me.

I have Kafka deployed in Kubernetes: one Kafka broker and 3 ZooKeeper nodes. I created a topic and was able to produce/consume messages. After a restart of the Kafka container, I could not post to existing topics anymore, but newly created topics worked fine. I found out that the old topic was bound to broker 1001, but the new broker had the id 1006 (after some restarts).

What I did was reset the broker.id back to 1001, and then I was able to produce messages on the old topic again. I set the broker id back by setting the environment variable KAFKA_BROKER_ID to 1001.

I'm not sure if this is the correct way to fix the issue, but it worked.

Edit: You'll also have to set the environment variable KAFKA_RESERVED_BROKER_MAX_ID to 1001 to be allowed to set the broker id to 1001.
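In docker run terms, the two variables from this workaround can be sketched as follows (untested sketch; 1001 must match whatever id your existing topic partitions are assigned to):

```shell
# Re-register the broker under its old id after a restart.
# KAFKA_RESERVED_BROKER_MAX_ID must be raised because auto-generated ids
# start above reserved.broker.max.id (default 1000).
docker run -d \
  -e KAFKA_BROKER_ID=1001 \
  -e KAFKA_RESERVED_BROKER_MAX_ID=1001 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  wurstmeister/kafka
```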

@litongyu

litongyu commented Jul 4, 2017

I have tried @MoJo2600 's solution and it works! Just set the KAFKA_BROKER_ID in docker-compose.yml.

@smehtaji

I have the same issue while trying to produce and consume in two different terminals:

[root@sandbox kafka-broker]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
sandee
[2017-07-11 12:23:15,929] ERROR Error when sending message to topic test with key: null, value: 6 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

@litongyu

@smehtaji try the public IP address instead of localhost

@steverhoades

steverhoades commented Aug 2, 2017

@litongyu @MoJo2600 I'm also on OSX. I tried adding KAFKA_BROKER_ID=1001 and it barfed with...

kafka_1      | [2017-08-02 21:17:44,141] FATAL  (kafka.Kafka$)
kafka_1      | java.lang.IllegalArgumentException: requirement failed: broker.id must be equal or greater than -1 and not greater than reserved.broker.max.id
kafka_1      | 	at scala.Predef$.require(Predef.scala:277)
kafka_1      | 	at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1159)
kafka_1      | 	at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1155)
kafka_1      | 	at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:867)
kafka_1      | 	at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:864)
kafka_1      | 	at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
kafka_1      | 	at kafka.Kafka$.main(Kafka.scala:58)
kafka_1      | 	at kafka.Kafka.main(Kafka.scala)
kafkadocker_kafka_1 exited with code 1

This is interesting because the output of kafka-topics is....

bash-4.3# $KAFKA_HOME/bin/kafka-topics.sh --describe --topic test --zookeeper $ZK
Topic:test	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: test	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001

Any advice on how to move forward? I am trying to solve this error

bash-4.3# $KAFKA_HOME/bin/kafka-console-producer.sh --topic=test --broker-list=`broker-list.sh` --property print.key=true --property key.separator=, 
>key 1, message foo
[2017-08-02 20:24:14,183] ERROR Error when sending message to topic test with key: null, value: 18 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>[2017-08-02 20:25:26,974] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

Edit: also the configuration value is reserved.broker.max.id = 1000

@litongyu

litongyu commented Aug 3, 2017

@steverhoades The error message means that your broker id 1001 exceeds reserved.broker.max.id (default 1000), so just set KAFKA_BROKER_ID=999. Good luck!

@steverhoades

@litongyu thanks, I banged my head on this one for hours and finally gave up. I used https://hub.docker.com/r/ches/kafka/ and was up and running in minutes. Still not entirely sure where I went wrong with wurstmeister's docker image...

@MoJo2600

MoJo2600 commented Aug 3, 2017

@steverhoades You are able to set the maximum allowed broker id by setting the KAFKA_RESERVED_BROKER_MAX_ID environment variable. I updated my answer above, maybe it will help someone else :)

@softwarevamp

softwarevamp commented Sep 2, 2017

-e KAFKA_ADVERTISED_HOST_NAME=kafka

works for me.

@ERPChina

ERPChina commented Nov 9, 2017

I finally made it work in Kubernetes v1.6 with this image, with minor modifications.
My lessons learnt are below:

  1. advertised.listeners should provide 2 listeners: one with the internal IP and port 9093 for the controller to connect to the broker in the same pod, and one with the external service IP for clients to connect to.
  2. Check the configuration and make sure ZooKeeper and Kafka run successfully, then redeploy them to have a clean environment; a dirty environment might be the reason for the timeout exception during message producing.
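Lesson 1 maps onto Kafka's named-listener support (Kafka 0.10.2+). A hedged server.properties sketch; the listener names, IP, and hostname below are placeholders, not values from this deployment:

```properties
# Two named listeners: one for traffic inside the cluster/pod, one for external clients.
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
listeners=INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://10.0.0.12:9093,EXTERNAL://kafka.example.com:9092
inter.broker.listener.name=INTERNAL
```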

@artemyarulin

Found the issue in my case - I'm using the cleanup.policy=compact config on one channel, and for some reason kafka-console-producer doesn't work with such channels. Am I doing something wrong? Normal channels work just fine.

$ kafka-topics --create --zookeeper "zookeeper-0.zookeeper" --partitions 1 --replication-factor 1 --topic topicA
Created topic "topicA".
$ kafka-topics --describe --zookeeper "zookeeper-0.zookeeper" --topic topicA
Topic:topicA	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: topicA	Partition: 0	Leader: 1	Replicas: 1	Isr: 1
$ echo hello | kafka-console-producer --broker-list "localhost:9092" --topic "topicA" 
$ kafka-topics --create --zookeeper "zookeeper-0.zookeeper" --partitions 1 --replication-factor 1 --topic topicB --config cleanup.policy=compact
Created topic "topicB".
$ kafka-topics --describe --zookeeper "zookeeper-0.zookeeper" --topic topicB
Topic:topicB	PartitionCount:1	ReplicationFactor:1	Configs:cleanup.policy=compact
	Topic: topicB	Partition: 0	Leader: 1	Replicas: 1	Isr: 1
$ echo hello | kafka-console-producer --broker-list "localhost:9092" --topic "topicB" 
[2018-01-13 10:46:39,295] WARN [Producer clientId=console-producer] Got error produce response with correlation id 3 on topic-partition topicB-0, retrying (2 attempts left). Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2018-01-13 10:46:39,397] WARN [Producer clientId=console-producer] Got error produce response with correlation id 4 on topic-partition topicB-0, retrying (1 attempts left). Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2018-01-13 10:46:39,503] WARN [Producer clientId=console-producer] Got error produce response with correlation id 5 on topic-partition topicB-0, retrying (0 attempts left). Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2018-01-13 10:46:39,611] ERROR Error when sending message to topic topicB with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, or is otherwise corrupt.

@artemyarulin

Ah, stupid me: when compaction is turned on a key is required, so the correct way to fill such a channel from bash is:

echo "hello:world" | kafka-console-producer \
  --broker-list "localhost:9092" \
  --topic "topicB" \
  --property "parse.key=true" \
  --property "key.separator=:"

Just found @irisrain comment about that in this thread, thank you!

@bat9r

bat9r commented Jan 25, 2018

I solved this problem after editing /etc/hosts:
<ip address current machine> localhost localhost.localdomain

@pavankjadda

pavankjadda commented Jan 29, 2018

Setting KAFKA_BROKER_ID in docker-compose.yml along with kafka.yml worked for me.

Possible solutions for this problem:

  1. Add a host entry to the /etc/hosts file (<IP Address> hostname) on the Docker host.
  2. If that doesn't work, log in to the Docker container, move to the folder opt/kafka, and run vi server.properties. Edit the file and manually set advertised.host.name=<hostname given in the KAFKA_ADVERTISED_HOST_NAME field>.
  3. If that does not work, set KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml (app.yml if docker-compose is not used) and KAFKA_BROKER_ID.
  4. I recommend this step only if steps 2 and 3 did not work: execute docker system prune -a to remove all existing containers and images on the host (be careful with this), then repeat steps 2 and 3.
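Step 1 above can be sketched as below; the IP and hostname are placeholders for your Docker host's address and the value you gave KAFKA_ADVERTISED_HOST_NAME. The sketch writes to a copy first so you can review it before overwriting /etc/hosts:

```shell
# Placeholder values: 192.0.2.10 stands in for the Docker host's IP,
# kafka-host for the advertised hostname. Append the mapping to a
# reviewable copy of /etc/hosts first.
cp /etc/hosts /tmp/hosts.new
echo "192.0.2.10 kafka-host" >> /tmp/hosts.new
# After reviewing: sudo cp /tmp/hosts.new /etc/hosts
```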

@orefalo

orefalo commented Feb 26, 2018

Setting

    KAFKA_ADVERTISED_HOST_NAME: 'kafka.docker.ssl'
    KAFKA_BROKER_ID: 999

fixed this issue!!!

Full working code at https://github.com/orefalo/docker-kafka-ssl

@jkilgrow

You may be setting log.cleanup.policy=compact . This must use key and value
kafka-console-producer.sh nonsupport key=null .. so use api test

What does this mean? It seems like you are making some assumptions that people have a clue what you mean by "...so use api test.." Can you elaborate on this?

@FanJunling

I received the same error, as below:
[2018-12-03 13:44:24,392] ERROR Error when sending message to topic fan with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for fan-2: 1544 ms has passed since batch creation plus linger time
[2018-12-03 13:44:25,561] ERROR Error when sending message to topic fan with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for fan-1: 1521 ms has passed since batch creation plus linger time
[2018-12-03 13:44:26,829] ERROR Error when sending message to topic fan with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for fan-0: 1540 ms has passed since batch creation plus linger time

I'm connecting to the Kafka brokers from another VM.
I modified the /etc/hosts file to add the same host entries on both the brokers and the test VM, and it works.

@doakinyemi

Can confirm that updating /etc/hosts does appear to resolve the issue. Thanks for the recommendation!

@kumatrx

kumatrx commented Nov 27, 2019

We tried everything, but no luck.

  1. Decreased the producer batch size and increased request.timeout.ms.
  2. Restarted the target Kafka cluster, still no luck.
  3. Checked replication on the target Kafka cluster; that was working fine as well.
  4. Added retries and retry.backoff.ms to the producer properties.
  5. Added linger.ms as well to the Kafka producer properties.

Finally, in our case there was an issue with the Kafka cluster itself: between 2 servers we were unable to fetch metadata.

When we changed the target Kafka cluster to our dev box, it worked fine.

@Fakhruddin-Kararawala

(quoting @artemyarulin's cleanup.policy=compact example above)

Working fine for me after removing the compact policy.
