Micro Focus Expert

An interesting scenario in Docker networking discovery

Recently I have noticed a lot of reports about Docker "super nodes" that get created after multiple discovery runs against a Docker node over a longer period of time.

Context:

When a worker node spawns a Docker pod, that pod needs the bare minimum to run and emulate 'an OS' on which the application inside the pod will function. Among other things, it needs a network interface, so the Docker node/worker creates an ad-hoc virtual interface named veth{some_random_numbers}@ifX. This means we have a virtual Ethernet interface that is unique (the random numbers) @ ifX (it is associated with the worker/node interface ifX, through which it has access to the TCP/UDP stack). The MAC address of such a vEth interface is also unique, as it is generated randomly each time Docker creates such an interface.

When the pod is shut down, the veth* interface is deleted and no longer exists. When the same pod is started again, another veth* interface is created following the rules above. The important detail is that the new veth* will have a different random number and a different MAC address, so it is not the same interface and cannot be merged with the old one. If we start and stop the same pod 100 times, then 100 different veth* interfaces are created and deleted. If we discover that Docker node during each of those 100 life spans of the pod, we add a new vEth interface to the Docker node each time, so in the end we have a super node (the Docker node which runs that pod) with 4-5 native interfaces plus 100 vEth interfaces (each one created ad hoc for a single pod lifetime).
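To see why those interfaces can never merge, here is a toy Python sketch (illustrative only, not Docker code) of the naming and MAC behavior described above:

import random

def fake_veth():
    # each pod start yields a fresh random name suffix...
    suffix = ''.join(random.choice('0123456789abcdef') for _ in range(8))
    # ...and a fresh locally-administered unicast MAC (bit 1 set, bit 0 clear)
    first = (random.randint(0, 255) & 0xFE) | 0x02
    rest = [random.randint(0, 255) for _ in range(5)]
    mac = ':'.join('%02x' % b for b in [first] + rest)
    return 'veth%s@if3' % suffix, mac

# three restarts of the 'same' pod -> three interfaces sharing neither name nor MAC
for restart in range(3):
    print(restart, fake_veth())

Nothing ties the new interface to the old one, which is exactly why reconciliation keeps adding interfaces instead of merging them.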


In other words, depending on when we run discovery, we may report those virtual Docker interfaces and model them as interfaces of the Docker node. The next time we run discovery we will find new interfaces, because in the meantime some pods were shut down and started again, so the old vEth interfaces were deleted and new ones were created. Over time we can end up with false super nodes: Docker nodes with hundreds of vEth interfaces. Until aging manages to delete them, we accumulate so many vEth interfaces that reconciliation is no longer able to properly identify or merge the nodes. For example, the Host Connection by Shell adapter has the Interface CIT defined as Candidate for Deletion, so it will take 20 days until the old interfaces are deleted. Until then, those interfaces will influence reconciliation/identification.

Ways to fix this problem:

  • add a new rule to GlobalFiltering to exclude such vEth interfaces on the probe side based on their name. This is not a good approach, as the results are not consistent and we will still have those CIs in the result vector from Discovery
  • add a new GlobalSettings.xml parameter to control how these vEth interfaces and their IPs are modeled in the Jython scripts (a rough sketch follows below)
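To make the second option more concrete, here is a minimal Jython-style sketch. The parameter name reportDockerVethInterfaces, the filterDockerInterfaces helper, and the iface.name attribute are invented for illustration; the real change would live inside the discovery scripts and read the flag from GlobalSettings.xml:

# Hypothetical sketch only -- not the actual Host Connection by Shell code.
import re

# name patterns of the Docker-created virtual interfaces discussed above
DOCKER_VIRTUAL_IFACE = re.compile(r'^(veth|docker\d+|virbr\d+|cni\d+)')

def filterDockerInterfaces(interfaces, reportDockerVethInterfaces=False):
    """Drop Docker virtual interfaces unless the flag says to keep them."""
    if reportDockerVethInterfaces:
        return interfaces
    return [iface for iface in interfaces
            if not DOCKER_VIRTUAL_IFACE.match(iface.name)]

With the flag off (the default in this sketch), the result vector sent to UCMDB would contain only the physical interfaces, so reconciliation would no longer be influenced by the short-lived vEth CIs.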

In the end, such vEth interfaces are toxic most of the time, and they should be deleted if there is no real business value behind them.

Example of vEth interfaces in UCMDB. Note the condition on the interface name (veth, docker).



Kind regards,
Bogdan Mureșan
EMEA CMS Technical Success
Micro Focus Expert

@MIHAI APOSTOL 

Some more details on the vEth Docker interfaces.



Kind regards,
Bogdan Mureșan
EMEA CMS Technical Success
Commodore

So how would you work around the primary IP address and primary DNS issue? Will the GlobalFiltering approach help with this issue?

Commodore

Actually, I am seeing "docker0" interfaces that are causing my duplicates, along with wrong primary DNS and IP addresses. The veth interfaces only show a connection with the server itself. I am assuming I just need to create an enrichment to remove both of those interface types.

Micro Focus Expert

Actually, the Docker environment will create three different types of interfaces on that Linux host:

- docker0 - main Docker bridge interface

- virbr0 - a virtual bridge interface

- veth - virtual interfaces assigned to the pods

This is how they show up in the Host Connection by Shell CommLog:

<CMD>[CDATA: /sbin/ip addr show ; echo ERROR_CODE:$?]</CMD>
<RESULT>[CDATA: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:a0:7b:d7 brd ff:ff:ff:ff:ff:ff
inet 192.168.128.220/22 brd 192.168.131.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fea0:7bd7/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:b1:11:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:b1:11:45 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:ce:52:0a:53 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether ba:bc:c3:38:23:1a brd ff:ff:ff:ff:ff:ff
inet 172.16.40.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::b8bc:c3ff:fe38:231a/64 scope link
valid_lft forever preferred_lft forever
7: veth38b99e4c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP
link/ether 42:4c:cc:4a:cc:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::404c:ccff:fe4a:cce2/64 scope link
valid_lft forever preferred_lft forever
8: veth83a59fc8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP
link/ether b2:6d:b2:48:56:95 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::b06d:b2ff:fe48:5695/64 scope link
valid_lft forever preferred_lft forever
9: vethd5af438f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP
link/ether 82:9d:46:05:3b:0e brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::809d:46ff:fe05:3b0e/64 scope link
valid_lft forever preferred_lft forever
ERROR_CODE:0]</RESULT>

 

Based on this output you will end up with the interfaces shown in the attached screenshot (docker.PNG).

Did you notice the interface_role shown in the lower right side for docker0? Yes, all of them are modeled as physical interfaces.

I mentioned veth interfaces because they represent the big chunk of virtual interfaces that lead to such super nodes and duplicates.

The primary IP address and DNS are not related to this problem. At least I didn't find a link between them.
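For completeness, here is a small plain-Python sketch (illustrative only, not the probe code) showing how the interface names in the ip addr show output above can be classified into those three types:

import re

# matches lines like '7: veth38b99e4c@if3: <BROADCAST,...>'
IFACE_LINE = re.compile(r'^\d+: ([^:@\s]+)(@\S+)?:')

def classify(name):
    if name.startswith('veth'):
        return 'virtual interface assigned to a pod'
    if name.startswith('docker'):
        return 'main Docker bridge interface'
    if name.startswith('virbr') or name.startswith('cni'):
        return 'virtual bridge interface'
    return 'physical or loopback'

def classify_output(ip_addr_output):
    for line in ip_addr_output.splitlines():
        match = IFACE_LINE.match(line)
        if match:
            yield match.group(1), classify(match.group(1))

Running it over the CommLog excerpt would flag veth38b99e4c, veth83a59fc8, vethd5af438f, docker0, virbr0, virbr0-nic and cni0 as virtual, leaving only lo and ens32.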

 



Kind regards,
Bogdan Mureșan
EMEA CMS Technical Success