Lieutenant Commander

Eventbroker worker node down after reboot

Hi All,

So we have a one-master, three-worker-node Event Broker deployment.

One of our worker nodes stopped, so we restarted it. After it came back up, the worker node did not rejoin: if we look at the Event Broker it only shows 2 brokers instead of the 3 it used to have.

I am also not too experienced with the install and setup of Event Broker, so any input will be appreciated.

The logs show the following issues:

In the kubelet.ERROR log:

E0107 13:43:00.110784 2212 pod_workers.go:182] Error syncing pod 02c0b6a1-3150-11ea-a1bc-0a861c008ba0 ("eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"), skipping: failed to "StartContainer" for "atlas-kafka" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=atlas-kafka pod=eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"

In the kubelet.INFO log:

E0107 13:44:28.110982 2212 pod_workers.go:182] Error syncing pod 02c0b6a1-3150-11ea-a1bc-0a861c008ba0 ("eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"), skipping: failed to "StartContainer" for "atlas-kafka" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=atlas-kafka pod=eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"

And in the kubelet.WARNING log:

 

W0107 13:47:36.286325 2212 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0107 13:47:45.111099 2212 kubelet_pods.go:800] Unable to retrieve pull secret eventbroker3/hub.docker.secret for eventbroker3/eb-kafka-2 due to secrets "hub.docker.secret" not found. The image pull may not succeed.
E0107 13:47:45.111403 2212 pod_workers.go:182] Error syncing pod 02c0b6a1-3150-11ea-a1bc-0a861c008ba0 ("eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"), skipping: failed to "StartContainer" for "atlas-kafka" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=atlas-kafka pod=eb-kafka-2_eventbroker3(02c0b6a1-3150-11ea-a1bc-0a861c008ba0)"
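
In case it helps, these are the commands I have been trying in order to dig into the failing pod (the pod, container, and namespace names are taken from the log lines above, so adjust them if yours differ):

   # show recent events and the reason the container keeps restarting
   kubectl describe pod eb-kafka-2 -n eventbroker3

   # logs from the previous (crashed) atlas-kafka container instance
   kubectl logs eb-kafka-2 -c atlas-kafka -n eventbroker3 --previous

   # the warning above also mentions a missing image pull secret, so check whether it exists
   kubectl get secret hub.docker.secret -n eventbroker3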

 

Lieutenant Commander

Okay, I think I know what the issue is: one of the Event Broker pods is not running.

(screenshot: output of kubectl get pods --all-namespaces, showing one eventbroker3 pod not in Running state)

This is what I see when I run kubectl get pods --all-namespaces. Now to figure out how to get it running.
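
For reference, a quick way to narrow it down to the pods that are not running (plain kubectl, nothing Event Broker specific):

   # list only the Event Broker pods, with node placement
   kubectl get pods -n eventbroker3 -o wide

   # or filter out everything that is healthy
   kubectl get pods --all-namespaces | grep -v Running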

Commodore

Hi @eonl 

I am not quite sure if it will resolve your issue, but give it a try.

- Log in to the Event Broker master node with root privileges.

- Run the command below:

   kubectl delete pods -n eventbroker3 eb-kafka-2

This will terminate the existing instance and create a new one.

 - If that doesn't resolve your issue, delete the schema registry pod associated with this worker node (see the example commands after these steps).

   Use the same command as above, but with the schema registry pod name instead of eb-kafka-2.
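
Putting both steps together, a rough sketch (the schema registry pod name below is only an example; check the real one with the first command):

   # list the pods in the Event Broker namespace to get the exact names
   kubectl get pods -n eventbroker3

   # delete the failing Kafka pod; its controller will recreate it
   kubectl delete pods -n eventbroker3 eb-kafka-2

   # if that is not enough, delete the schema registry pod for the same worker
   # (eb-schemaregistry-2 is an example name, use the one from the listing above)
   kubectl delete pods -n eventbroker3 eb-schemaregistry-2

   # watch the pods come back up
   kubectl get pods -n eventbroker3 -w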

 

Hope this helps

Regards

Ajith K S
