Merge remote-tracking branch 'LCTT/master'

Xingyu Wang 2020-09-11 21:16:51 +08:00
commit 598d0b068e
8 changed files with 542 additions and 42 deletions

View File

@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12602-1.html)
[#]: subject: (What is DNS and how does it work?)
[#]: via: (https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html)
[#]: author: (Keith Shaw, Josh Fruhlinger )
@@ -16,7 +16,7 @@
<ruby>域名系统<rt>Domain Name System</rt></ruby>(DNS)是互联网的基础之一,然而大多数不懂网络的人可能并不知道他们每天都在使用它来工作、查看电子邮件或在智能手机上浪费时间。
就其本质而言,DNS 是一个与数字匹配的名称目录。这些数字,在这种情况下是 IP 地址,计算机用 IP 地址来相互通信。大多数对 DNS 的描述都是用电话簿来比喻,这对于 30 岁以上的人来说是没有问题的,因为他们知道电话簿是什么。
就其本质而言,DNS 是一个与数字匹配的名称目录。这些数字,在这里指的是 IP 地址,计算机用 IP 地址来相互通信。大多数对 DNS 的描述都是用电话簿来比喻,这对于 30 岁以上的人来说是没有问题的,因为他们知道电话簿是什么。
如果你还不到 30 岁,可以把 DNS 想象成你的智能手机的联系人名单,它将人们的名字与他们的电话号码及电子邮件地址进行匹配,然后这个联系人名单就像地球上的人一样多。
@@ -24,25 +24,25 @@
当互联网还非常、非常小的时候,人们很容易将特定的 IP 地址与特定的计算机对应起来,但随着越来越多的设备和人加入到不断发展的网络中,这种简单的情况就没法持续多久了。现在仍然可以在浏览器中输入一个特定的 IP 地址来到达一个网站,但当时和现在一样,人们希望得到一个由容易记忆的单词组成的地址,也就是我们今天所认识的那种域名(比如 linux.cn)。在 20 世纪 70 年代和 80 年代早期,这些名称和地址是由一个人指定的,她是[斯坦福大学的 Elizabeth Feinler][2],她在一个名为 [HOSTS.TXT][3] 的文本文件中维护着一个主列表,记录了每一台连接互联网的计算机。
随着互联网的发展,这种局面显然无法维持下去,尤其是因为 Feinler 只处理加州时间下午 6 点之前的请求,而且圣诞节也要请假。1983 年,南加州大学的研究人员 Paul Mockapetris 受命在处理这个问题的多种建议中提出一个折中方案。他基本上无视了所有提出的建议,并开发了自己的系统,他将其称为 DNS。虽然从那时起,现今的它显然发生了很大的变化,但在基本层面上,它的工作方式仍然与将近 40 年前相同。
随着互联网的发展,这种局面显然无法维持下去,尤其是因为 Feinler 只处理加州时间下午 6 点之前的请求,而且圣诞节也要请假。1983 年,南加州大学的研究人员 Paul Mockapetris 受命在处理这个问题的多种建议中提出一个折中方案。但他基本上无视了所有提出的建议,而是开发了自己的系统,他将其称为 DNS。虽然从那时起,现今的它显然发生了很大的变化,但在基本层面上,它的工作方式仍然与将近 40 年前相同。
### DNS 服务器是如何工作的
将名字与数字相匹配的 DNS 目录并不是整个藏在互联网的某个黑暗角落的一个地方。截至 2017 年底,[它记录了超过 3.32 亿个域名][4],如果作为一个目录确实会非常庞大。就像互联网本身一样,该目录分布在世界各地,存储在域名服务器(一般简称 DNS 服务器)上,这些服务器都会非常有规律地相互沟通,以提供更新和冗余。
将名字与数字相匹配的 DNS 目录并不是整个藏在互联网的某个黑暗角落。截至 2017 年底,[它记录了超过 3.32 亿个域名][4],如果作为一个目录确实会非常庞大。就像互联网本身一样,该目录分布在世界各地,存储在域名服务器(一般简称 DNS 服务器)上,这些服务器都会非常有规律地相互沟通,以提供更新和冗余。
### 权威 DNS 服务器与递归 DNS 服务器的比较
当你的计算机想要找到与域名相关联的 IP 地址时,它首先会向<ruby>递归<rt>recursive</rt></ruby> DNS 服务器提出请求,也称为递归解析器。递归解析器是一个通常由 ISP 或其他第三方提供商运营的服务器,它知道需要向其他哪些 DNS 服务器请求解析一个网站的名称与其 IP 地址。实际拥有所需信息的服务器称为<ruby>权威<rt>authoritative</rt></ruby> DNS 服务器。
当你的计算机想要找到与域名相关联的 IP 地址时,它首先会向<ruby>递归<rt>recursive</rt></ruby> DNS 服务器(也称为递归解析器)提出请求。递归解析器是一个通常由 ISP 或其他第三方提供商运营的服务器,它知道需要向其他哪些 DNS 服务器请求解析一个网站的名称与其 IP 地址。实际拥有所需信息的服务器称为<ruby>权威<rt>authoritative</rt></ruby> DNS 服务器。
### DNS 服务器和 IP 地址
每个域名可以对应一个以上的 IP 地址。事实上,有些网站有数百个更多的 IP 地址与一个域名相对应。例如,你的计算机访问 [www.google.com][5] 所到达的服务器,很可能与其他国家的人在浏览器中输入相同的网站名称所到达的服务器完全不同。
每个域名可以对应一个以上的 IP 地址。事实上,有些网站有数百个甚至更多的 IP 地址与一个域名相对应。例如,你的计算机访问 [www.google.com][5] 所到达的服务器,很可能与其他国家的人在浏览器中输入相同的网站名称所到达的服务器完全不同。
该目录的分布式性质的另一个原因是,如果这个目录只有一个位置,在数百万,可能是数十亿同样在同一时间寻找信息的人中共享,那么当你在寻找一个网站时,你需要花费多少时间才能得到响应 —— 这就像是排着长队使用电话簿一样。
该目录的分布式性质的另一个原因是,如果这个目录只在一个位置,在数百万,可能是数十亿在同一时间寻找信息的人中共享,那么当你在寻找一个网站时,你需要花费多少时间才能得到响应 —— 这就像是排着长队使用电话簿一样。
### 什么是 DNS 缓存?
为了解决这个问题,DNS 信息在许多服务器之间共享。但最近访问过的网站的信息也会在客户端计算机本地缓存。你有可能每天使用 google.com 好几次。你的计算机不是每次都向 DNS 名称服务器查询 google.com 的 IP 地址,而是将这些信息保存在你的计算机上,这样它就不必访问 DNS 服务器来解析这个带有 IP 地址的名称。额外的缓存可能出现在用于将客户端连接到互联网的路由器上,以及用户的互联网服务提供商(ISP)的服务器上。有了这么多的缓存,实际上对 DNS 名称服务器的查询数量比看起来要少很多。
为了解决这个问题,DNS 信息在许多服务器之间共享。但最近访问过的网站的信息也会在客户端计算机本地缓存。你有可能每天使用 google.com 好几次。你的计算机不是每次都向 DNS 名称服务器查询 google.com 的 IP 地址,而是将这些信息保存在你的计算机上,这样它就不必访问 DNS 服务器来解析这个名称的 IP 地址。额外的缓存也可能出现在用于将客户端连接到互联网的路由器上,以及用户的互联网服务提供商(ISP)的服务器上。有了这么多的缓存,实际上对 DNS 名称服务器的查询数量比看起来要少很多。
### 如何找到我的 DNS 服务器?
@@ -52,7 +52,7 @@
但要记住,虽然你的 ISP 会设置一个默认的 DNS 服务器,但你没有义务使用它。有些用户可能有理由避开他们 ISP 的 DNS —— 例如,有些 ISP 使用他们的 DNS 服务器将不存在的地址的请求重定向到[带有广告的网页][7]。
如果你想要一个替代方案,你可以将你的计算机指向一个公共 DNS 服务器,以它作为一个递归解析器。最著名的公共 DNS 服务器之一是谷歌的,它的 IP 地址是 8.8.8.8 和 8.8.4.4。Google 的 DNS 服务往往是[快速的][8],虽然对 [Google 提供免费服务的别有用心的动机][9]有一定的质疑,但他们无法真正从你那里获得比他们从 Chrome 中获得的更多信息。Google 有一个页面,详细说明了如何[配置你的电脑或路由器][10]连接到 Google 的 DNS。
如果你想要一个替代方案,你可以将你的计算机指向一个公共 DNS 服务器,以它作为一个递归解析器。最著名的公共 DNS 服务器之一是谷歌的,它的 IP 地址是 8.8.8.8 和 8.8.4.4。Google 的 DNS 服务往往是[快速的][8],虽然对 [Google 提供免费服务的别有用心的动机][9]有一定的质疑,但他们无法真正从你那里获得比他们从 Chrome 浏览器中获得的更多信息。Google 有一个页面,详细说明了如何[配置你的电脑或路由器][10]连接到 Google 的 DNS。
### DNS 如何提高效率
@@ -60,13 +60,13 @@ DNS 的组织结构有助于保持事情的快速和顺利运行。为了说明
如上所述,对 IP 地址的初始请求是向递归解析器提出的。递归解析器知道它需要请求哪些其他 DNS 服务器来解析一个网站(linux.cn)的名称与其 IP 地址。这种搜索会传递至根服务器,它知道所有顶级域名的信息,如 .com、.net、.org 以及所有国家域名,如 .cn(中国)和 .uk(英国)。根服务器位于世界各地,所以系统通常会将你引导到地理上最近的一个服务器。
一旦请求到达正确的根服务器,它就会进入一个顶级域名(TLD)名称服务器,该服务器存储二级域名的信息,即在你到达 .com、.org、.net 之前所使用的单词,例如,linux.cn 的信息是 “linux”。然后,请求进入域名服务器,域名服务器掌握着网站的信息和 IP 地址。一旦 IP 地址被发现,它就会被发回给客户端,客户端现在可以用它来访问网站。所有这一切都只需要几毫秒的时间。
一旦请求到达正确的根服务器,它就会进入一个顶级域名(TLD)名称服务器,该服务器存储二级域名的信息,即在你写在 .com、.org、.net 之前的单词,例如,linux.cn 的信息是 “linux”。然后,请求进入域名服务器,域名服务器掌握着网站的信息和 IP 地址。一旦 IP 地址被找到,它就会被发回给客户端,客户端现在可以用它来访问网站。所有这一切都只需要几毫秒的时间。
因为 DNS 在过去的 30 多年里一直在工作,所以大多数人都认为它是理所当然的。在构建系统的时候也没有考虑到安全问题,所以[黑客们充分利用了这一点][11],制造了各种各样的攻击。
### DNS 反射攻击
DNS 反射攻击可以用 DNS 解析器服务器的大量信息淹没受害者。攻击者使用受害者的欺骗 IP 地址来向他们能找到的所有开放的 DNS 解析器请求大量的 DNS 数据。当解析器响应时,受害者会收到大量未请求的 DNS 数据,使其机器不堪重负。
DNS 反射攻击可以用 DNS 解析器服务器的大量信息淹没受害者。攻击者使用伪装成受害者的 IP 地址来向他们能找到的所有开放的 DNS 解析器请求大量的 DNS 数据。当解析器响应时,受害者会收到大量未请求的 DNS 数据,使其不堪重负。
### DNS 缓存投毒
@@ -74,7 +74,7 @@ DNS 反射攻击可以用 DNS 解析器服务器的大量信息淹没受害者
### DNS 资源耗尽
[DNS 资源耗尽][13]攻击可以堵塞 ISP 的 DNS 基础设施,阻止 ISP 的客户访问互联网上的网站。攻击者注册一个域名,并通过将受害者的名称服务器作为域名的权威服务器来实现。因此,如果递归解析器不能提供与网站名称相关的 IP 地址,就会询问受害者的名称服务器。攻击者会对自己注册的域名产生大量的请求,并查询不存在的子域名,这就会导致大量的解析请求发送到受害者的名称服务器,使其不堪重负。
[DNS 资源耗尽][13]攻击可以堵塞 ISP 的 DNS 基础设施,阻止 ISP 的客户访问互联网上的网站。攻击者注册一个域名,并通过将受害者的名称服务器作为域名的权威服务器来实现这种攻击。因此,如果递归解析器不能提供与网站名称相关的 IP 地址,就会询问受害者的名称服务器。攻击者会对自己注册的域名产生大量的请求,并查询不存在的子域名,这就会导致大量的解析请求发送到受害者的名称服务器,使其不堪重负。
### 什么是 DNSSec?
@@ -90,13 +90,13 @@ DNSSec 将通过让每一级 DNS 服务器对其请求进行数字签名来解
### SIGRed: 蠕虫病毒 DNS 漏洞再次出现
最近,随着 Windows DNS 服务器缺陷的发现,全世界都看到了 DNS 中的弱点可能造成的混乱。这个潜在的安全漏洞被称为 SIGRed,[它需要一个复杂的攻击链][16],但利用未打补丁的 Windows DNS 服务器,有可能在客户端安装和执行任意恶意代码。而且该漏洞是“可蠕虫”的,这意味着它可以在没有人为干预的情况下从计算机传播到计算机。该漏洞被认为足够令人震惊,以至于美国联邦机构[被要求在几天时间内安装补丁][17]。
最近,随着 Windows DNS 服务器缺陷的发现,全世界都看到了 DNS 中的弱点可能造成的混乱。这个潜在的安全漏洞被称为 SIGRed,[它需要一个复杂的攻击链][16],但利用未打补丁的 Windows DNS 服务器,有可能在客户端安装和执行任意恶意代码。而且该漏洞是“可蠕虫传播”的,这意味着它可以在没有人为干预的情况下从计算机传播到计算机。该漏洞被认为足够令人震惊,以至于美国联邦机构[被要求在几天时间内安装补丁][17]。
### DNS over HTTPS:新的隐私格局
截至本报告撰写之时,DNS 正处于其历史上最大的一次转变的边缘。谷歌和 Mozilla 共同控制着浏览器市场的大部分份额,他们正在鼓励向 [DNS over HTTPS][18](DoH)的方向发展,在这种情况下,DNS 请求将被已经保护了大多数 Web 流量的 HTTPS 协议加密。在 Chrome 的实现中,浏览器会检查 DNS 服务器是否支持 DoH,如果不支持,则会将 DNS 请求重新路由到谷歌的 8.8.8.8。
这是一个并非没有争议的举动。早在上世纪 80 年代就在 DNS 协议上做了大量早期工作的 Paul Vixie 称此举对安全来说是“[灾难][19]”:例如,企业 IT 部门将更难监控或引导穿越其网络的 DoH 流量。不过,Chrome 浏览器是无所不在的,DoH 不久就会被默认打开,所以我们会看到未来会发生什么。
这是一个并非没有争议的举动。早在上世纪 80 年代就在 DNS 协议上做了大量早期工作的 Paul Vixie 称此举对安全来说是“[灾难][19]”:例如,企业 IT 部门将更难监控或引导穿越其网络的 DoH 流量。不过,Chrome 浏览器是无所不在的,DoH 不久就会被默认打开,所以让我们拭目以待。
--------------------------------------------------------------------------------
@@ -105,7 +105,7 @@ via: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-wo
作者:[Keith Shaw][a], [Josh Fruhlinger][c]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (hongcha8385)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (jlztan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -0,0 +1,266 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate your container orchestration with Ansible modules for Kubernetes)
[#]: via: (https://opensource.com/article/20/9/ansible-modules-kubernetes)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Automate your container orchestration with Ansible modules for Kubernetes
======
Combine Ansible with Kubernetes for cloud automation. Plus, get our
cheat sheet for using the Ansible k8s module.
![Ship captain sailing the Kubernetes seas][1]
[Ansible][2] is one of the best tools for automating your work. [Kubernetes][3] is one of the best tools for orchestrating containers. What happens when you combine the two? As you might expect, Ansible combined with Kubernetes lets you automate your container orchestration. 
### Ansible modules
On its own, Ansible is basically just a framework for interpreting YAML files. Its true power comes from its [many modules][4]. Modules are what enable you to invoke external applications with just a few simple configuration settings in a playbook.
There are a few modules that deal directly with Kubernetes, and a few that handle related technology like [Docker][5] and [Podman][6]. Learning a new module is often similar to learning a new terminal command or a new API. You get familiar with a module from its documentation, you learn what arguments it accepts, and you equate its options to how you might use the application it interfaces with.
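One quick way to get familiar with a module before using it in a playbook is the `ansible-doc` command, which prints a module's parameters and example tasks. This sketch assumes a 2020-era Ansible release, where the `k8s` and `podman_image` modules used later in this article ship with the default installation:
```
$ ansible-doc k8s
$ ansible-doc podman_image
```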
### Access a Kubernetes cluster
To try out Kubernetes modules in Ansible, you must have access to a Kubernetes cluster. If you don't have that, then you might try to open a trial account online, but most of those are short term. Instead, you can install [Minikube][7], as described on the Kubernetes website or in Bryant Son's excellent article on [getting started with Minikube][8]. Minikube provides a local instance of a single-node Kubernetes install, allowing you to configure and interact with it as you would a full cluster.
**[Download the [Ansible k8s cheat sheet][9]]**
Before installing Minikube, you must ensure that your environment is ready to serve as a virtualization backend. You may need to install `libvirt` and grant yourself permission to the `libvirt` group:
```
$ sudo dnf install libvirt
$ sudo systemctl start libvirtd
$ sudo usermod --append --groups libvirt `whoami`
$ newgrp libvirt
```
#### Install Python modules
To prepare for using Kubernetes-related Ansible modules, you should also install a few helper Python modules:
```
$ pip3.6 install kubernetes --user
$ pip3.6 install openshift --user
```
#### Start Kubernetes
If you're using Minikube instead of a Kubernetes cluster, use the `minikube` command to start up a local, miniaturized Kubernetes instance on your computer:
```
$ minikube start --driver=kvm2 --kvm-network default
```
Wait for Minikube to initialize. Depending on your internet connection, this could take several minutes.
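Before moving on, it's worth a quick sanity check that the node really is up. The exact output varies by Minikube and Kubernetes version, but both of these commands should report a running cluster:
```
$ minikube status
$ kubectl get nodes
```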
### Get information about your cluster
Once you've started your cluster successfully, you can get information about it with the `cluster-info` option:
```
$ kubectl cluster-info
Kubernetes master is running at https://192.168.39.190:8443
KubeDNS is running at https://192.168.39.190:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
### Use the k8s module
The entry point for using Kubernetes through Ansible is the `k8s` module, which enables you to manage Kubernetes objects from your playbooks. This module describes states resulting from `kubectl` instructions. For instance, here's how you would create a new [namespace][10] with `kubectl`:
```
$ kubectl create namespace my-namespace
```
It's a simple action, and the YAML representation of the same result is similarly terse:
```
- hosts: localhost
  tasks:
    - name: create namespace
      k8s:
        name: my-namespace
        api_version: v1
        kind: Namespace
        state: present
```
In this case, the host is defined as `localhost`, under the assumption that you're running this against Minikube. Notice that the module in use defines the syntax of the parameters available (such as `api_version` and `kind`).
Before using this playbook, verify it with `yamllint`:
```
$ yamllint example.yaml
```
Correct any errors, and then run the playbook:
```
$ ansible-playbook ./example.yaml
```
Verify that the new namespace has been created:
```
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   37h
kube-node-lease   Active   37h
kube-public       Active   37h
kube-system       Active   37h
demo              Active   11h
my-namespace      Active   3s
```
### Pull a container image with Podman
Containers are Linux systems, almost impossibly minimal in scope, that can be managed by Kubernetes. Much of the container specification has been defined by the [LXC project][11] and Docker. A recent addition to the container toolset is Podman, which is popular because it runs without requiring a daemon.
With Podman, you can pull a container image from a repository, such as Docker Hub or Quay.io. The Ansible syntax for this is simple, and all you need to know is the location of the container, which is available from the repository's website:
```
    - name: pull an image
      podman_image:
        name: quay.io/jitesoft/nginx
```
Verify it with `yamllint`:
```
$ yamllint example.yaml
```
And then run the playbook:
```
$ ansible-playbook ./example.yaml
[WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all'
PLAY [localhost] ************************
TASK [Gathering Facts] ************************
ok: [localhost]
TASK [create k8s namespace] ************************
ok: [localhost]
TASK [pull an image] ************************
changed: [localhost]
PLAY RECAP ************************
localhost: ok=3 changed=1 unreachable=0 failed=0
           skipped=0 rescued=0 ignored=0
```
### Deploy with Ansible
You're not limited to small maintenance tasks with Ansible. Your playbook can interact with Ansible in much the same way a configuration file does with `kubectl`. In fact, in many ways, the YAML you know by using Kubernetes translates to your Ansible plays. Here's a configuration you might pass directly to `kubectl` to deploy an image (in this example, a web server):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webserver
spec:
  selector:
    matchLabels:
      run: my-webserver
  replicas: 1
  template:
    metadata:
      labels:
        run: my-webserver
    spec:
      containers:
      - name: my-webserver
        image: nginx
        ports:
        - containerPort: 80
```
If you know these parameters, then you mostly know the parameters required to accomplish the same with Ansible. You can, with very little modification, move that YAML into a `definition` element in your Ansible playbook:
```
    - name: deploy a web server
      k8s:
        api_version: apps/v1
        namespace: my-namespace
        definition:
          kind: Deployment
          metadata:
            labels:
              app: nginx
            name: nginx-deploy
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                  - name: my-webserver
                    image: quay.io/jitesoft/nginx
                    ports:
                      - containerPort: 80
                        protocol: TCP
```
After running this, you can see the deployment with `kubectl`, as usual:
```
$ kubectl -n my-namespace get pods
NAME                      READY  STATUS
nginx-deploy-7fdc9-t9wc2  1/1    Running
```
### Modules for the cloud
As more development and deployments move to the cloud, it's important to understand how to automate the important aspects of your cloud. The `k8s` and `podman_image` modules are only two examples of modules related to Kubernetes and a mere fraction of modules developed for the cloud. Take a look at your workflow, find the tasks you want to track and automate, and see how Ansible can help you do more by doing less.
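As a final sketch, reading state back out of the cluster works much the same way as writing it. This example uses the `k8s_info` module (named `k8s_facts` before Ansible 2.9) and assumes the same Python prerequisites as the `k8s` module; `pod_list` is just an illustrative variable name:
```
- hosts: localhost
  tasks:
    - name: list pods in my-namespace
      k8s_info:
        kind: Pod
        namespace: my-namespace
      register: pod_list

    - name: show pod names
      debug:
        msg: "{{ pod_list.resources | map(attribute='metadata.name') | list }}"
```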
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/ansible-modules-kubernetes
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://opensource.com/resources/what-ansible
[3]: https://opensource.com/resources/what-is-kubernetes
[4]: https://docs.ansible.com/ansible/latest/modules/modules_by_category.html
[5]: https://opensource.com/resources/what-docker
[6]: http://podman.io
[7]: https://kubernetes.io/docs/tasks/tools/install-minikube
[8]: https://opensource.com/article/18/10/getting-started-minikube
[9]: https://opensource.com/downloads/ansible-k8s-cheat-sheet
[10]: https://opensource.com/article/19/10/namespaces-and-containers-linux
[11]: https://www.redhat.com/sysadmin/exploring-containers-lxc

View File

@@ -0,0 +1,156 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How this open source test framework evolves with .NET)
[#]: via: (https://opensource.com/article/20/9/testing-net-fixie)
[#]: author: (Patrick Lioi https://opensource.com/users/patricklioi)
How this open source test framework evolves with .NET
======
Re-evaluating and overhauling a software project's design are crucial
steps to keep up as circumstances change.
![magnifying glass on computer screen, finding a bug in the code][1]
A software project's design is a consequence of the time it was written. As circumstances change, it's wise to take a step back and consider whether old ideas still make for a good design. If not, you risk missing out on enhancements, simplifications, new degrees of freedom, or even a project's very survival.
This is relevant advice for [.NET][2] developers whose dependencies are subject to constant updates or are preparing for .NET 5. The [Fixie][3] project confronted this reality as we flexed to outside circumstances during the early adoption phase of .NET Core. Fixie is an open source .NET test framework similar to NUnit and xUnit with an emphasis on [developer ergonomics][4] and customization. It was developed before .NET Core and has gone through a few major design overhauls in response to platform updates.
### The problem: Reliable assembly loading
A .NET test project tends to feel a lot like a library: a bunch of classes with no visible entry point. The assumption is that a test runner, like Fixie's Visual Studio Test Explorer plugin, will load your test assembly, use reflection to find all the tests within it, and invoke the tests to collect results. Unlike a regular library, test projects share some similarities with regular console applications:
1. The test project's dependencies should be naturally loadable, as with any executable, from their own build output folder.
2. When running multiple test projects, the loaded assemblies for test project A should be separate from the loaded assemblies for test project B.
3. When the system under test relies on an App.config file, it should use the one local to the test project while the tests are running.
I'll call these behaviors the Big Three. The Big Three are so natural that you rarely find a need to even _say_ them. A test project should resemble a console executable: It should be able to have dependencies, it should not conflict with the assemblies loaded for another project, and each project should respect its own dedicated config file. We take this all for granted. The sky is blue, water is wet, and the Big Three must be honored as tests run.
### Fixie v1: Designing for the Big Three
The Big Three pose a _huge_ problem for .NET test frameworks: the primary running process, such as Visual Studio Test Explorer, is nowhere near the test project's build output folder. The most natural attempt to load a test project and run it will fail all of the Big Three.
Early alpha builds of Fixie were naive about assembly loading: The test runner .exe would load a test project and run simple tests, but it would fail as soon as a test tried to run code in another assembly—like the application being tested. By default, it searched for assemblies _near the test runner_, nowhere near the test project's build output folder.
Once we resolved that, using that test runner to run more than one test project would result in conflicts at runtime, such as when each test project referenced different versions of the same library.
And when we resolved that, the test runner would fail to look in the right config files, mistakenly thinking the test runner's config file was the one to use.
In the days of the regular old .NET Framework, the solution to the Big Three came in the form of AppDomains. AppDomains are a fairly old and now-deprecated technology. Fixie v1 was developed when this was the primary solution, with no deprecation in sight, to the Big Three. _Under those circumstances_, using AppDomains to solve the Big Three was the ideal design, though it was a bit frustrating to work with them. In short, they let a single test runner carve out little pockets of loaded assemblies with rigid communication boundaries between them.
![Fixie version 1][5]
The Test Explorer plugin and its own dependencies (like [Mono.Cecil][6]) live in one AppDomain. The test assembly and its dependencies live in a second AppDomain. A painful serialization boundary allows requests to cross the chasm with no risk of mixing the loaded assemblies.
AppDomains let you identify each test project's build output folder as the home of that test project's config file and dependencies. You could successfully load a test project's folder into the test runner process, call into it, and get test results while meeting the Big Three requirements.
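As a rough illustration, here is the shape of that approach in .NET Framework-era C#. The framework calls (`AppDomainSetup`, `AppDomain.CreateDomain`, `CreateInstanceAndUnwrap`) are real; `TestRunnerProxy` is a hypothetical stand-in for a runner's `MarshalByRefObject`-derived entry point, not Fixie's actual code:
```
// .NET Framework only: carve out an isolated load area for one test project.
var testAssemblyPath = @"C:\src\MyTests\bin\Debug\MyTests.dll"; // hypothetical path

var setup = new AppDomainSetup
{
    ApplicationBase = Path.GetDirectoryName(testAssemblyPath), // dependencies resolve here
    ConfigurationFile = testAssemblyPath + ".config"           // the project's own config
};
var domain = AppDomain.CreateDomain("TestDomain", null, setup);

// The proxy derives from MarshalByRefObject, so calls serialize
// across the AppDomain boundary instead of leaking loaded types.
var proxy = (TestRunnerProxy)domain.CreateInstanceAndUnwrap(
    typeof(TestRunnerProxy).Assembly.FullName,
    typeof(TestRunnerProxy).FullName);

proxy.RunTests(testAssemblyPath); // runs inside the isolated domain
AppDomain.Unload(domain);
```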
And then .NET Core came along. Suddenly, AppDomains were an old and deprecated concept that simply would not continue to exist in the .NET Core world.
Circumstances had changed with a vengeance.
### Fixie v2: Adapting to the .NET Core crisis
At first, this seemed like the end of the Fixie project. The entire design depended on AppDomains, and if this newfangled .NET Core thing survived, Fixie would have no answer to the Big Three. Despair. Close up shop. Delete the repository.
In these moments of despair, we were making a classic software development mistake: confusing the _solution_ with the _requirements_. The _actual requirements_ (the Big Three) had not changed. The circumstances _around_ the design had changed: AppDomains were no longer available. When people make the mistake of confusing their solution with their requirements, they may double down, grip their steering wheel tighter, and just flail around while they try to force their solution to continue working.
Instead, we needed to recognize the plain truth: we had familiar requirements, but new circumstances, and it was time to throw out the old design for something new that met the same requirements under the new circumstances. Once we gave ourselves permission to go back to the drawing board, the solution was clear:
The Big Three let your "library" test project feel like a console application. So, what if your test project _was_ a console application?
A console application already has meaningful notions of loading dependencies from the right folder, distinct from the dependencies of another application, while respecting its own config file. The test runner is no longer the only process in the mix. Instead, the test runner's job is to _start_ the test project as a process of its own and communicate with it to collect results. We traded away AppDomains for interprocess communication, resulting in a new design that met all the original requirements while also working in the context of .NET Framework _and_ .NET Core projects.
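In sketch form, the v2 runner boiled down to process management. This is simplified (Fixie's real interprocess protocol is richer than reading standard output, and the paths are illustrative), but it shows the trade:
```
// Launch the test project as its own console process, so assembly loading,
// config resolution, and isolation all come along for free.
var startInfo = new ProcessStartInfo
{
    FileName = "dotnet",
    Arguments = "exec MyTests.dll",                 // hypothetical test assembly
    WorkingDirectory = @"C:\src\MyTests\bin\Debug", // its own build output folder
    RedirectStandardOutput = true,
    UseShellExecute = false
};

using (var process = Process.Start(startInfo))
{
    var results = process.StandardOutput.ReadToEnd(); // collect reported results
    process.WaitForExit();
}
```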
![Fixie version 2][7]
This design kept the project alive and allowed us to serve both platforms during those shaky years when it wasn't certain which platform would survive in the long run. However, maintaining support for two worlds became increasingly painful, especially in keeping the Visual Studio Test Explorer plugin alive through every minor Visual Studio release. Every minor Fixie release involved a huge matrix of use cases to do regression testing, and every new little bump in the road brought innovation to a halt.
On top of that, Microsoft was starting to show clear signs that it was abandoning the .NET Framework: the old Framework no longer kept up with advances in .NET Standard, ASP.NET, or C#. The .NET Framework would exist but would quickly fall by the wayside.
Circumstances had changed again.
### Fixie v3: Embracing One .NET
Fixie v3 is a work in progress that we intend to release shortly after .NET 5 arrives. .NET 5 is the resolution to the .NET Framework vs. .NET Core development lines, arriving at [One .NET][8]. Instead of fighting it, we're following Microsoft's evolution: Fixie v3 will no longer run on the .NET Framework. Removing .NET Framework support allowed us to remove a lot of old, slow implementation details and dramatically simplified the regression testing scenarios we had to consider for each release. It also allowed us to reconsider our design.
The Big Three requirements changed only slightly: .NET Core does away with the notion of an App.config file closely tied to your executable, instead relying on a more convention-based configuration. All of Fixie's assembly-loading requirements remained. More importantly, the circumstances _around_ the design changed in a fundamental way: we were no longer limited to using types available in both .NET Framework and .NET Core.
By promising _less_ with the removal of .NET Framework support, we _gained_ new degrees of freedom to modernize the system.
.NET's [AssemblyLoadContext][9] is a distant cousin of AppDomains. It's not available to the .NET Framework, so it hadn't been an option for us before. AssemblyLoadContext lets you set up a dedicated loading area for an assembly and its own dependencies without polluting the surrounding process and without being limited to the original process's own folder of assemblies. In other words, it gives AppDomains' "load this folder of assemblies off to the side" behavior without the frustrating AppDomains quirks.
We defined the concept of a **TestAssemblyLoadContext**, the little pocket of assembly-loading necessary for one test assembly folder:
```
class TestAssemblyLoadContext : AssemblyLoadContext
{
    readonly AssemblyDependencyResolver resolver;
    public TestAssemblyLoadContext(string testAssemblyPath)
        => resolver = new AssemblyDependencyResolver(testAssemblyPath);
    protected override Assembly? Load(AssemblyName assemblyName)
    {
        // Reuse the Fixie.dll already loaded in the containing process.
        if (assemblyName.Name == "Fixie")
            return null;
        var assemblyPath = resolver.ResolveAssemblyToPath(assemblyName);
        if (assemblyPath != null)
            return LoadFromAssemblyPath(assemblyPath);
        return null;
    }
    ...
}
```
Armed with this class, we can successfully load a test assembly and all its dependencies in a safe way and from the right folder. The test runner can work with the loaded Assembly directly, knowing that the loading effort won't pollute the test runner's own dependencies:
```
var assemblyName = new AssemblyName(Path.GetFileNameWithoutExtension(assemblyPath));
var testAssemblyLoadContext = new TestAssemblyLoadContext(assemblyPath);
var assembly = testAssemblyLoadContext.LoadFromAssemblyName(assemblyName);
// Use System.Reflection.* against `assembly` to find and run test methods...
```
We've come full circle: The Fixie v3 Visual Studio plugin uses TestAssemblyLoadContext to load test assemblies in process, similar to the way the Fixie v1 plugin did with AppDomains. The core Fixie.dll assembly need only be loaded once. Most importantly, we got to eliminate all the interprocess communication while taking advantage of the best that the new circumstances allowed.
![Fixie version 3][11]
### Always be designing
When you work with any long-lived system, some of your maintenance pains are really clues that outside circumstances have changed. If your circumstances are changing, take a step back and reconsider your design. Are you mistaking your _solution_ for your _requirements_? Articulate your requirements separate from your solution, and see whether your circumstances suggest a new and perhaps even exciting direction.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/testing-net-fixie
作者:[Patrick Lioi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/patricklioi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://en.wikipedia.org/wiki/.NET_Framework
[3]: https://github.com/fixie/fixie
[4]: https://headspring.com/2020/04/01/fixie-test-framework-developer-ergonomics/
[5]: https://opensource.com/sites/default/files/fixie-design-diagram-v1-cropped.jpg (Fixie version 1)
[6]: https://www.mono-project.com/docs/tools+libraries/libraries/Mono.Cecil/
[7]: https://opensource.com/sites/default/files/fixie-design-diagram-v2-cropped_0.jpg (Fixie version 2)
[8]: https://channel9.msdn.com/Events/Build/2020/BOD106
[9]: https://docs.microsoft.com/en-us/dotnet/core/tutorials/creating-app-with-plugin-support
[11]: https://opensource.com/sites/default/files/fixie-design-diagram-v3-cropped_0.jpg (Fixie version 3)

View File

@@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The New YubiKey 5C NFC Security Key Lets You Use NFC to Easily Authenticate Your Secure Devices)
[#]: via: (https://itsfoss.com/yubikey-5c-nfc/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
The New YubiKey 5C NFC Security Key Lets You Use NFC to Easily Authenticate Your Secure Devices
======
If you are extra cautious about securing your online accounts with the best possible authentication method, you probably know about [Yubico][1]. They make hardware authentication security keys to replace [two-factor authentication][2] and get rid of the password authentication system for your online accounts.
Basically, you just plug the security key into your computer or tap it against your smartphone's NFC reader to unlock access to your accounts. In this way, your authentication method stays completely offline.
![][3]
Of course, you can always use one of the [good password managers for Linux][4] out there. But if you own or work for a business, or are just extra cautious about your privacy and security and want an extra layer of protection, these hardware security keys could be worth a try. These devices have gained some popularity lately.
Yubico's latest product, the [YubiKey 5C NFC][5], is particularly impressive because it can be used both as a USB Type-C key and over NFC (just touch your device with the key).
Here, let's take a look at this security key.
_Please note that It's FOSS is an affiliate partner of Yubico. Please read our [affiliate policy][6]._
### YubiKey 5C NFC: Overview
![][7]
YubiKey 5C NFC is the latest offering that uses both USB-C and NFC. So, you can easily plug it in on Windows, macOS, and Linux computers. In addition to the computers, you can also use it with your Android or iOS smartphones or tablets.
It isn't just limited to USB-C and NFC support (which is a great thing in itself); it also happens to be the world's first multi-protocol security key with smart card support.
Hardware security keys aren't that common because of their cost for an average consumer. But, amidst the pandemic, with the rise of remote work, a safer authentication system will definitely come in handy.
Here's what Yubico mentioned in their press release:
> “The way that people work and go online is vastly different today than it was a few years ago, and especially within the last several months. Users are no longer tied to just one device or service, nor do they want to be. That's why the YubiKey 5C NFC is one of our most sought-after security keys — it's compatible with a majority of modern-day computers and mobile phones and works well across a range of legacy and modern applications. At the end of the day, our customers crave security that just works no matter what,” said Guido Appenzeller, Chief Product Officer, Yubico.
The protocols that YubiKey 5C NFC supports are FIDO2, WebAuthn, FIDO U2F, PIV (smart card), OATH-HOTP and OATH-TOTP (hash-based and time-based one-time passwords), [OpenPGP][8], YubiOTP, and challenge-response.
Considering all those protocols, you can easily secure any online account that supports hardware authentication while also having the ability to access identity access management (IAM) solutions. So, it's a great option for both individual users and enterprises.
### Pricing & Availability
The YubiKey 5C NFC costs $55. You can order it directly from their [online store][5] or get it from any authorized reseller in your country. The cost might also vary depending on the shipping charges, but $55 seems to be a sweet spot for serious users who want the best level of security for their online accounts.
It's also worth noting that you get volume discounts if you order more than two YubiKeys.
[Order YubiKey 5C NFC][5]
### Wrapping Up
No matter whether you want to secure your cloud storage account or any other online account, Yubico's latest offering is something that's worth taking a look at if you don't mind spending some money to secure your data.
Have you ever used a YubiKey or some other security key like the Librem Key? What has your experience been like? Do you think these devices are worth spending the extra money on?
--------------------------------------------------------------------------------
via: https://itsfoss.com/yubikey-5c-nfc/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/yubikey/
[2]: https://ssd.eff.org/en/glossary/two-factor-authentication
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/yubikey-5c-nfc-desktop.jpg?resize=800%2C671&ssl=1
[4]: https://itsfoss.com/password-managers-linux/
[5]: https://itsfoss.com/recommends/yubico-5c-nfc/
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/yubico-5c-nfc.jpg?resize=800%2C671&ssl=1
[8]: https://www.openpgp.org/

View File

@@ -7,17 +7,16 @@
[#]: via: (https://opensource.com/article/20/8/usb-id-repository)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Recognize more devices on Linux with this USB ID Repository
利用这个 USB ID 仓库识别更多 Linux 上的设备
======
An open source project contains a public repository of all known IDs
used in USB devices.
一个包含了所有已知 USB 设备 ID 的开源项目。
![Multiple USB plugs in different colors][1]
There are thousands of USB devices on the market—keyboards, scanners, printers, mouses, and countless others that all work on Linux. Their vendor details are stored in the USB ID Repository.
市场上有成千上万的 USB 设备:键盘、扫描仪、打印机、鼠标和其他无数的设备都能在 Linux 上工作。它们的供应商详情都存储在 USB ID 仓库中。
### lsusb
The Linux `lsusb` command lists information about the USB devices connected to a system, but sometimes the information is incomplete. For example, I recently noticed that the brand of one of my USB devices was not recognized. the device was functional, but listing the details of my connected USB devices provided no identification information. Here is the output from my `lsusb` command:
Linux `lsusb` 命令列出了连接到系统的 USB 设备的信息,但有时信息不完整。例如,我最近注意到我的一个 USB 设备的品牌没有被识别。设备是可以使用的,但是在列出我所连接的 USB 设备的详情中没有提供任何识别信息。以下是我的 `lsusb` 命令的输出:
```
@@ -30,11 +29,11 @@ Bus 001 Device 005: ID 051d:0002 American Power Conversion Uninterruptible Power
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```
As you can see in the last column, there is one device with no manufacturers description. To determine what the device is, I would have to do a deeper inspection of my USB device tree. Fortunately, the `lsusb` command has more options. One is `-D device`, to elicit per-device details, as the man page explains:
正如你在最后一栏中看到的,有一个设备没有制造商描述。要确定这个设备是什么,我必须对我的 USB 设备树进行更深入的检查。幸运的是,`lsusb` 命令有更多的选项。其中一个选项是 `-D device`,来获取每个设备的详细信息,正如手册页面所解释的那样。
> "Do not scan the /dev/bus/usb directory, instead display only information about the device whose device file is given. The device file should be something like /dev/bus/usb/001/001. This option displays detailed information like the **v** option; you must be root to do this."
> “不会扫描 /dev/bus/usb 目录,而只显示给定设备文件所属设备的信息。设备文件应该是类似 /dev/bus/usb/001/001 这样的文件。这个选项会像 **v** 选项一样显示详细信息,但你必须是 root 用户才行。”
I didn't think it was easily apparent how to pass the device path to the lsusb command, but after carefully reading the man page and the initial output I was able to determine how to construct it. USB devices reside in the UDEV filesystem. Their device path begins in the USB device directory `/dev/bus/usb/`. The rest of the path is made up of the device's Bus ID and Device ID. My non-descript device is Bus 001, Device 002, which translates to 001/002, and completes the path `/dev/bus/usb/001/002`. Now I can pass this path to `lsusb`. I'll also pipe to `more` since there is often quite a lot of information there:
我认为如何将设备路径传递给 lsusb 命令这一点并不是显而易见的,但在仔细阅读手册页和初始输出后,我能够确定如何构造它。USB 设备驻留在 UDEV 文件系统中。它们的设备路径始于 USB 设备目录 `/dev/bus/usb/`,路径的其余部分由设备的总线 ID 和设备 ID 组成。我这个没有描述信息的设备是 Bus 001、Device 002,对应的就是 001/002,完整路径即 `/dev/bus/usb/001/002`。现在我可以把这个路径传给 `lsusb`。我还会用管道传给 `more`,因为这里往往有很多信息:
```
@@ -79,30 +78,30 @@ Device Descriptor:
        HID Device Descriptor:
```
Unfortunately, this didn't provide the detail I was hoping to find. The two fields that appear in the initial output, `idVendor` and `idProduct`, are both empty. There is some help, as scanning down a bit reveals the word **Mouse**. A-HA! So, this device is my mouse.
不幸的是,这里并没有提供我希望找到的细节。初始输出中出现的两个字段 `idVendor``idProduct` 都是空的。这里有些帮助,因为往下扫描一下,就会发现 **Mouse** 这个词。所以,这个设备就是我的鼠标。
## The USB ID Repository
## USB ID 仓库
This made me wonder how I could populate these fields, not only for myself but also for other Linux users. It turns out there is already an open source project for this: the [USB ID Repository][2]. It is a public repository of all known IDs used in USB devices. It is also used in various programs, including the [USB Utilities][3], to display human-readable device names.
这让我不禁想知道如何才能填充这些字段,不仅是为了自己,也是为了其他 Linux 用户。原来已经有了一个这样的开源项目:[USB ID Repository][2]。它是一个公共仓库,包含了 USB 设备中使用的所有已知 ID。它也被用于各种程序中,包括 [USB Utilities][3],用于显示人类可读的设备名称。
![The USB ID Repository Site][4]
(Alan Formy-Duval, [CC BY-SA 4.0][5])
You can browse the repository for particular devices either from the website or by downloading the database. Users are also welcome to submit new data. This is what I did for my mouse, which was absent.
你可以从网站上或通过下载数据库来浏览特定设备的仓库。也欢迎用户提交新的数据。这就是我为我的鼠标所做的,因为仓库里还没有收录它。
### Update your USB IDs
### 更新你的 USB ID
The USB ID database is stored in a file called `usb.ids`. This location may vary depending on the Linux distribution.
USB ID 数据库存储在一个名为 `usb.ids` 的文件中。这个文件的位置可能会因 Linux 发行版的不同而不同。
On Ubuntu 18.04, this file is located in `/var/lib/usbutils`. To update the database, use the command `update-usbids`, which you need to run with root privileges or with `sudo`:
在 Ubuntu 18.04 中,这个文件位于 `/var/lib/usbutils`。要更新数据库,使用命令 `update-usbids`,你需要用 root 权限或 `sudo` 来运行:
```
$ sudo update-usbids
```
If a new file is available, it will be downloaded. The current file will be backed up and replaced by the new one:
如果有新文件,它就会被下载。当前的文件将被备份,并被替换为新文件:
```
@@ -114,7 +113,7 @@ drwxr-xr-x 85 root root   4096 Nov  7 08:05 ..
-rw-r--r--  1 root root 551472 Jan 15 00:34 usb.ids.old
```
Recent versions of Fedora Linux store the database file in `/usr/share/hwdata`. Also, there is no update script. Instead, the database is maintained in a package named `hwdata`.
最新版本的 Fedora Linux 将数据库文件保存在 `/usr/share/hwdata` 中。而且,没有更新脚本。相反,数据库被维护在一个名为 `hwdata` 的包中。
```
@@ -136,7 +135,7 @@ Description  : hwdata contains various hardware identification and configuratio
             : such as the pci.ids and usb.ids databases.
```
Now my USB device list shows a name next to this previously unnamed device. Compare this to the output above:
现在我的 USB 设备列表在这个之前未命名的设备旁边显示了一个名字。比较一下上面的输出:
```
@@ -149,13 +148,13 @@ Bus 001 Device 005: ID 051d:0002 American Power Conversion Uninterruptible Power
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```
You may notice that other device descriptions change as the repository is regularly updated with new devices and details about existing ones.
你可能会注意到,随着仓库定期更新新设备和现有设备的详细信息,其他设备的描述也会发生变化。
### Submit new data
### 提交新数据
There are two ways to submit new data: by using the web interface or by emailing a specially formatted patch file. Before I began, I read through the submission guidelines. First, I had to register an account, and then I needed to use the project's submission system to provide my mouse's ID and name. The process is the same for adding any USB device.
提交新数据有两种方式:使用网站或通过电子邮件发送特殊格式的补丁文件。在开始之前,我阅读了提交指南。首先,我必须注册一个账户,然后我需要使用项目的提交系统提供我鼠标的 ID 和名称。添加任何 USB 设备的过程都是一样的。
Have you used the USB ID Repository? If so, please share your reaction in the comments.
你使用过 USB ID 仓库么?如果有,请在评论中分享你的反馈。
--------------------------------------------------------------------------------