Anatomy of a Linux DNS Lookup – Part IV
In Anatomy of a Linux DNS Lookup – Part I, Part II, and Part III I covered:

- nsswitch
- /etc/hosts
- /etc/resolv.conf
- ping vs host style lookups
- systemd and its networking service
- ifup and ifdown
- dhclient
- resolvconf
- NetworkManager
- dnsmasq
In Part IV I’ll cover how containers do DNS. Yes, that’s not simple either…
1) Docker and DNS
In Part III we looked at DNSMasq, and learned that it works by directing DNS queries to the localhost address 127.0.0.1, where a process listening on port 53 will accept the request.
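As a quick sanity check on a host set up this way (assuming the dnsmasq setup from Part III is in place), you can confirm that /etc/resolv.conf points at loopback and see what is bound to port 53 there; the process column of the ss output is where you'd expect dnsmasq to appear:

$ cat /etc/resolv.conf
nameserver 127.0.0.1
$ sudo ss -lntup 'sport = :53'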
So when you run up a Docker container on a host set up like this, what do you expect to see in its /etc/resolv.conf?
Have a think, and try and guess what it will be.
Here’s the default output if you run a default Docker setup:
$ docker run ubuntu cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
search home
nameserver 8.8.8.8
nameserver 8.8.4.4
Hmmm.
Where did the addresses 8.8.8.8 and 8.8.4.4 come from?
When I pondered this question, my first thought was that the container would inherit the /etc/resolv.conf settings from the host. But a little thought shows that won't always work.
If you have DNSMasq set up on the host, the /etc/resolv.conf file will point at the 127.0.0.1 loopback address. If this were passed through to the container, the container would try to look up DNS addresses within its own networking context, and since there's no DNS server listening there, the lookups would fail.
‘A-ha!’ you might think: we can always use the host’s DNS server by using the host’s IP address, available from within the container as the default route:
root@79a95170e679:/# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
Use the host?
From that we can work out that the 'host' is on the IP address 172.17.0.1, so we could try manually pointing DNS at that using dig. (You could also update /etc/resolv.conf and then run ping; this just seems like a good time to introduce dig and its @ flag, which points the request at the IP address you specify.)
root@79a95170e679:/# dig @172.17.0.1 google.com | grep -A1 ANSWER.SECTION
;; ANSWER SECTION:
google.com. 112 IN A 172.217.23.14
However: that will only work if you have DNSMasq running on the host; if you don't, there's no DNS server there to answer the query.
So Docker's solution to this quandary is to bypass all that complexity and point your DNS lookups to Google's DNS servers at 8.8.8.8 and 8.8.4.4, ignoring whatever the host context is.
Anecdote: This was the source of my first problem with Docker back in 2013. Our corporate network blocked access to those IP addresses, so my containers couldn't resolve hostnames.
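If those defaults are blocked, as they were for me, Docker lets you point DNS elsewhere, either per container or daemon-wide. A sketch, where 10.0.0.2 stands in for a reachable DNS server on your network:

# Per container: --dns writes this server into the container's /etc/resolv.conf
$ docker run --dns 10.0.0.2 ubuntu cat /etc/resolv.conf

# Daemon-wide: set a default for all containers in /etc/docker/daemon.json
# (then restart the Docker daemon):
{
  "dns": ["10.0.0.2"]
}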
So that’s Docker containers, but container orchestrators such as Kubernetes can do different things again…
2) Kubernetes and DNS
The unit of container deployment in Kubernetes is a Pod. A pod is a set of co-located containers that (among other things) share the same IP address.
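To make that concrete, here's a minimal (hypothetical) pod spec with two containers; because they share one network namespace, they share one IP address, and the pod's DNS settings apply to both:

apiVersion: v1
kind: Pod
metadata:
  name: shared-ip-pod
spec:
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]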
An extra challenge with Kubernetes is forwarding requests for Kubernetes services (eg myservice.kubernetes.io) to the right resolver for the private network allocated to those service addresses. These addresses are said to be on the 'cluster domain'. This cluster domain is configurable by the administrator, so it might be cluster.local or myorg.badger depending on the configuration you set up.
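With the default cluster.local domain, a service's fully-qualified name follows the pattern <service>.<namespace>.svc.<cluster-domain>, which you can resolve from inside any pod (myservice and default here are placeholder names):

$ nslookup myservice.default.svc.cluster.local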
In Kubernetes you have four options for configuring how DNS lookup works within your pod.
- Default
This (misleadingly-named) option takes the same DNS resolution path as the host the pod runs on, as in the ‘naive’ DNS lookup described earlier. It’s misleadingly named because it’s not the default! ClusterFirst is.
If you want to override the /etc/resolv.conf entries, you can do so in your kubelet configuration (for example, via the kubelet's --resolv-conf flag).
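As a sketch, opting a pod into this behaviour looks like the following (the pod name and image are mine):

apiVersion: v1
kind: Pod
metadata:
  name: host-dns-pod
spec:
  dnsPolicy: Default    # inherit DNS resolution from the node
  containers:
  - name: app
    image: ubuntu
    command: ["sleep", "3600"]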
- ClusterFirst
ClusterFirst does selective forwarding on the DNS request. This is achieved in one of two ways based on the configuration.
In the first, older and simpler setup, the rule was: if the cluster domain was not found in the request, the request was forwarded to the host.
In the second, newer approach, you can configure selective forwarding on an internal DNS server.
Here's what the config looks like (the Kubernetes docs also have a diagram showing the flow):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
The stubDomains entry defines specific DNS servers to use for specific domains. The upstream servers are the servers we defer to when nothing else has picked up the DNS request.
This is achieved with our old friend DNSMasq running in a pod.
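Conceptually, the config above boils down to dnsmasq's --server flags. A rough hand-written equivalent, ignoring the cluster-domain handling that kube-dns adds on top, would be:

# acme.local queries go to 1.2.3.4; everything unmatched goes upstream
$ dnsmasq --no-resolv \
    --server=/acme.local/1.2.3.4 \
    --server=8.8.8.8 \
    --server=8.8.4.4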
The other two options are more niche:
- ClusterFirstWithHostNet
This applies if you use host network for your pods, ie you bypass the Docker networking setup to use the same network as you would directly on the host the pod is running on.
- None
None does nothing to DNS, but forces you to specify the DNS settings in the dnsConfig field of the pod specification (a minimal sketch follows this list).
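A minimal sketch of the None option, where the nameserver address and search domain are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  dnsPolicy: None       # Kubernetes writes nothing of its own...
  dnsConfig:            # ...so you must supply everything here
    nameservers:
    - 10.0.0.2
    searches:
    - myorg.badger
  containers:
  - name: app
    image: ubuntu
    command: ["sleep", "3600"]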
CoreDNS Coming
And if that wasn't enough, this is set to change again as CoreDNS comes to Kubernetes, replacing kube-dns. CoreDNS will offer a few benefits over kube-dns, being more configurable and more efficient.
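For a flavour of the difference, the stub-domain setup above might look something like this in a CoreDNS Corefile (a sketch only; plugin names and details have shifted between versions):

.:53 {
    kubernetes cluster.local
    forward . 8.8.8.8 8.8.4.4
}
acme.local:53 {
    forward . 1.2.3.4
}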
Find out more here.
If you’re interested in OpenShift networking, I wrote a post on that here. But that was for 3.6 so is likely out of date now.
End of Part IV
That's part IV done. In it we covered:

- Docker DNS lookups
- Kubernetes DNS lookups
- Selective forwarding (stub domains)
- kube-dns
via: https://zwischenzugs.com/2018/08/06/anatomy-of-a-linux-dns-lookup-part-iv/
author: zwischenzugs translator: 译者ID proofreader: 校对者ID