
Eventbroker worker node down after reboot
Hi All,
We have an Event Broker deployment with one master and three worker nodes.
One of our worker nodes stopped, so we restarted it. After the reboot, the worker node did not come back up: when we look at the Event Broker, it only shows 2 brokers instead of the usual 3.
I am not very experienced with the installation and setup of Event Broker, so any input will be appreciated.
The logs show the following issues:
In the kubelet.ERROR log:
E0107 13:43:00.110784 2212 pod_workers.go:182] Error syncing pod 02c0b6a1-3150-11ea-a1bc-0a861c008ba0 ("eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"), skipping: failed to "StartContainer" for "atlas-kafka" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=atlas-kafka pod=eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"
In the kubelet.INFO log:
E0107 13:44:28.110982 2212 pod_workers.go:182] Error syncing pod 02c0b6a1-3150-11ea-a1bc-0a861c008ba0 ("eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"), skipping: failed to "StartContainer" for "atlas-kafka" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=atlas-kafka pod=eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"
And in the kubelet.WARNING log:
W0107 13:47:36.286325 2212 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0107 13:47:45.111099 2212 kubelet_pods.go:800] Unable to retrieve pull secret eventbroker3/hub.docker.secret for eventbroker3/eb-kafka-2 due to secrets "hub.docker.secret" not found. The image pull may not succeed.
E0107 13:47:45.111403 2212 pod_workers.go:182] Error syncing pod 02c0b6a1-3150-11ea-a1bc-0a861c008ba0 ("eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"), skipping: failed to "StartContainer" for "atlas-kafka" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=atlas-kafka pod=eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"
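The CrashLoopBackOff in these messages means the kubelet keeps restarting the atlas-kafka container and it keeps exiting. A sketch of commands that usually narrow down the cause (assuming the namespace eventbroker3, pod eb-kafka-2, and container atlas-kafka from the log lines above, and that kubectl is pointed at this cluster):

```shell
# Show the pod's state, restart count, and recent events
kubectl describe pod eb-kafka-2 -n eventbroker3

# Print the log of the previous (crashed) run of the atlas-kafka container
kubectl logs eb-kafka-2 -n eventbroker3 -c atlas-kafka --previous

# The warning above also mentions a missing image pull secret; check it exists
kubectl get secret hub.docker.secret -n eventbroker3
```

The `--previous` flag matters here: the current container instance may have just restarted, so its log is empty, while the crashed instance's log usually contains the actual error.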

Okay, I think I know what the issue is: one of the event broker pods is not running.
I can see this when I run kubectl get pods --all-namespaces. Now to figure out how to get it running.
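To see only the unhealthy pods rather than scanning the full list, a sketch (assuming kubectl is configured for this cluster):

```shell
# List pods in every namespace and keep only those not in the Running phase
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

# Note: a CrashLoopBackOff pod can still be in phase Running, so also check
# the READY and STATUS columns of the plain listing
kubectl get pods --all-namespaces
```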

Hi @eonl
I am not quite sure if it will resolve your issue, but give it a try.
- Log in to the Event Broker master node with root privileges.
- Run the command below:
kubectl delete pods -n eventbroker3 eb-kafka-2
This will terminate the existing instance, and the controller will create a new one.
- If that doesn't resolve your issue, delete the schema registry pod associated with this worker node: same command as above, but with the schema registry pod name instead of eb-kafka-2.
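Putting those steps together, a minimal sketch (run on the master node; assumes the names from the logs above and that the pods are managed by a controller, e.g. a StatefulSet, so they are recreated after deletion):

```shell
# Delete the stuck Kafka pod; its controller should schedule a replacement
kubectl delete pods -n eventbroker3 eb-kafka-2

# Watch the namespace until the replacement pod reaches Running
kubectl get pods -n eventbroker3 -w

# If the broker still does not rejoin, delete the schema registry pod for
# this worker node the same way (substitute the actual pod name):
# kubectl delete pods -n eventbroker3 <schema-registry-pod-name>
```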
Hope this helps
Regards
Ajith K S