Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating
commit fe99054240
@ -0,0 +1,315 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12407-1.html)
[#]: subject: (Add nodes to your private cloud using Cloud-init)
[#]: via: (https://opensource.com/article/20/5/create-simple-cloud-init-service-your-homelab)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)

使用 Cloud-init 将节点添加到你的私有云中
======

> 像主流云提供商的处理方式一样,在家中添加机器到你的私有云。



[Cloud-init][2] 是一种广泛使用的行业标准方法,用于初始化云实例。云提供商使用 Cloud-init 来定制实例的网络配置、实例信息,甚至用户提供的配置指令。它也是一个可以在你的“家庭私有云”中使用的很好的工具,可以为你的家庭实验室的虚拟机和物理机的初始设置和配置添加一点自动化 —— 并了解更多关于大型云提供商是如何工作的信息。关于更多的细节和背景,请看我之前的文章《[在你的树莓派家庭实验室中使用 Cloud-init][3]》。

![A screen showing the boot process for a Linux server running Cloud-init ][4]

*运行 Cloud-init 的 Linux 服务器的启动过程(Chris Collins,[CC BY-SA 4.0][5])*

诚然,Cloud-init 对于为许多不同客户配置机器的云提供商来说,比对于由单个系统管理员运行的家庭实验室更有用,而且 Cloud-init 解决的许多问题对于家庭实验室来说可能有点多余。然而,设置它并了解它的工作原理是了解更多关于这种云技术的好方法,更不用说它是首次启动时配置设备的好方法。

本教程使用 Cloud-init 的 NoCloud 数据源,它允许 Cloud-init 在传统的云提供商环境之外使用。本文将向你展示如何在客户端设备上安装 Cloud-init,并设置一个运行 Web 服务的容器来响应客户端的请求。你还将学习如何审查客户端从 Web 服务中请求的内容,并修改 Web 服务的容器,以提供基本的、静态的 Cloud-init 服务。

### 在现有系统上设置 Cloud-init

Cloud-init 可能在新系统首次启动时最有用,它可以查询配置数据,并根据指令对系统进行定制。它可以[包含在树莓派和单板计算机的磁盘镜像中][6],也可以添加到用于<ruby>配给<rt>provision</rt></ruby>虚拟机的镜像中。对于测试用途来说,无论是在现有系统上安装并运行 Cloud-init,还是安装一个新系统,然后设置 Cloud-init,都是很容易的。

作为大多数云提供商使用的主要服务,大多数 Linux 发行版都支持 Cloud-init。在这个例子中,我将使用 Fedora 31 Server 来安装树莓派,但在 Raspbian、Ubuntu、CentOS 和大多数其他发行版上也可以用同样的方式来完成。

#### 安装并启用 cloud-init 服务

在你想作为 Cloud-init 客户端的系统上,安装 Cloud-init 包。如果你使用的是 Fedora:

```
# Install the cloud-init package
dnf install -y cloud-init
```
Cloud-init 实际上是四个不同的服务(至少在 systemd 下是这样),这些服务负责检索配置数据,并在启动过程的不同阶段进行配置更改,这使得可以做的事情更加灵活。虽然你不太可能直接与这些服务进行太多交互,但在你需要排除一些故障时,知道它们是什么还是很有用的。它们是:

* cloud-init-local.service
* cloud-init.service
* cloud-config.service
* cloud-final.service

启用所有四个服务:

```
# Enable the four cloud-init services
systemctl enable cloud-init-local.service
systemctl enable cloud-init.service
systemctl enable cloud-config.service
systemctl enable cloud-final.service
```
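
如果之后需要排查 Cloud-init 的问题,可以用 `journalctl` 查看这几个服务的日志(示意命令,假设系统使用 systemd 日志,非原文步骤):

```
# 查看本次启动中各个 cloud-init 服务的日志
journalctl -b -u cloud-init-local.service -u cloud-init.service \
  -u cloud-config.service -u cloud-final.service
```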
#### 配置要查询的数据源

启用服务后,请配置数据源,客户端将从该数据源查询配置数据。有[许多数据源类型][7],而且大多数都是为特定的云提供商配置的。对于你的家庭实验室,请使用 NoCloud 数据源,如上所述,它是为在没有云提供商的情况下使用 Cloud-init 而设计的。

NoCloud 允许以多种方式包含配置信息:以内核参数中键/值对的形式;通过在启动时挂载的 CD(或虚拟机中的虚拟 CD);通过包含在文件系统中的文件;或者像本例中一样,通过 HTTP 从指定的 URL(“NoCloud Net” 选项)获取配置信息。

数据源配置可以通过内核参数提供,也可以在 Cloud-init 配置文件 `/etc/cloud/cloud.cfg` 中进行设置。该配置文件对于使用自定义磁盘镜像设置 Cloud-init 或在现有主机上进行测试非常方便。

Cloud-init 还会合并在 `/etc/cloud/cloud.cfg.d/` 中找到的任何 `*.cfg` 文件中的配置数据,因此为了保持整洁,请在 `/etc/cloud/cloud.cfg.d/10_datasource.cfg` 中配置数据源。Cloud-init 可以通过使用以下语法从 `seedfrom` 键指向的 HTTP 数据源中读取数据:

```
seedfrom: http://ip_address:port/
```

IP 地址和端口是你将在本文后面创建的 Web 服务。我使用了我的笔记本电脑的 IP 和 8080 端口。这也可以是 DNS 名称。

创建 `/etc/cloud/cloud.cfg.d/10_datasource.cfg` 文件:

```
# Add the datasource:
# /etc/cloud/cloud.cfg.d/10_datasource.cfg

# NOTE THE TRAILING SLASH HERE!
datasource:
  NoCloud:
    seedfrom: http://ip_address:port/
```
客户端设置就是这样。现在,重新启动客户端后,它将尝试从你在 `seedfrom` 键中输入的 URL 检索配置数据,并进行任何必要的配置更改。
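
如果不想每次都通过重启来触发它,也可以清除 Cloud-init 的状态,让它在下次启动时重新运行(示意命令,非原文步骤):

```
# 清除 cloud-init 的状态和日志,下次启动时会重新运行
cloud-init clean --logs
reboot
```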

下一步是设置一个 Web 服务器来侦听客户端请求,以便你确定需要提供的服务。
### 设置 Web 服务器以审查客户端请求

你可以使用 [Podman][8] 或其他容器编排工具(如 Docker 或 Kubernetes)快速创建和运行 Web 服务器。这个例子使用的是 Podman,但同样的命令也适用于 Docker。

要开始,请使用 `fedora:31` 容器镜像并创建一个容器文件(对于 Docker 来说,这会是一个 Dockerfile)来安装和配置 Nginx。从该容器文件中,你可以构建一个自定义镜像,并在你希望提供 Cloud-init 服务的主机上运行它。

创建一个包含以下内容的容器文件:

```
FROM fedora:31

ENV NGINX_CONF_DIR "/etc/nginx/default.d"
ENV NGINX_LOG_DIR "/var/log/nginx"
ENV NGINX_CONF "/etc/nginx/nginx.conf"
ENV WWW_DIR "/usr/share/nginx/html"

# Install Nginx and clear the yum cache
RUN dnf install -y nginx \
    && dnf clean all \
    && rm -rf /var/cache/yum

# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout ${NGINX_LOG_DIR}/access.log \
    && ln -sf /dev/stderr ${NGINX_LOG_DIR}/error.log

# Listen on port 8080, so root privileges are not required for podman
RUN sed -i -E 's/(listen)([[:space:]]*)(\[\:\:\]\:)?80;$/\1\2\38080 default_server;/' $NGINX_CONF
EXPOSE 8080

# Allow Nginx PID to be managed by non-root user
RUN sed -i '/user nginx;/d' $NGINX_CONF
RUN sed -i 's/pid \/run\/nginx.pid;/pid \/tmp\/nginx.pid;/' $NGINX_CONF

# Run as an unprivileged user
USER 1001

CMD ["nginx", "-g", "daemon off;"]
```

注:本例中使用的容器文件和其他文件可以在本项目的 [GitHub 仓库][9]中找到。
上面容器文件中最重要的部分是改变日志存储方式的部分(写到 STDOUT 而不是文件),这样你就可以在容器日志中看到进入该服务器的请求。其他的一些改变使你可以在没有 root 权限的情况下使用 Podman 运行容器,也可以在没有 root 权限的情况下运行容器中的进程。

这个 Web 服务器的第一个版本并不提供任何 Cloud-init 数据,只是用它来查看 Cloud-init 客户端向它请求了什么。

创建容器文件后,使用 Podman 构建并运行 Web 服务器镜像:

```
# Build the container image
$ podman build -f Containerfile -t cloud-init:01 .

# Create a container from the new image, and run it
# It will listen on port 8080
$ podman run --rm -p 8080:8080 -it cloud-init:01
```

这会运行一个容器,让你的终端连接到一个伪 TTY。一开始看起来什么都没有发生,但是对主机 8080 端口的请求会被路由到容器内的 Nginx 服务器,并且在终端窗口中会出现一条日志信息。这一点可以用主机上的 `curl` 命令进行测试:

```
# Use curl to send an HTTP request to the Nginx container
$ curl http://localhost:8080
```

运行该 `curl` 命令后,你应该会在终端窗口中看到类似这样的日志信息:

```
127.0.0.1 - - [09/May/2020:19:25:10 +0000] "GET / HTTP/1.1" 200 5564 "-" "curl/7.66.0" "-"
```

现在,有趣的部分来了:重启 Cloud-init 客户端,并观察 Nginx 日志,看看当客户端启动时,Cloud-init 向 Web 服务器发出了什么请求。

当客户端完成其启动过程时,你应该会看到类似下面的日志消息:

```
2020/05/09 22:44:28 [error] 2#0: *4 open() "/usr/share/nginx/html/meta-data" failed (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET /meta-data HTTP/1.1", host: "instance-data:8080"
127.0.0.1 - - [09/May/2020:22:44:28 +0000] "GET /meta-data HTTP/1.1" 404 3650 "-" "Cloud-Init/17.1" "-"
```

注:使用 `Ctrl+C` 停止正在运行的容器。

你可以看到请求是针对 `/meta-data` 路径的,即 `http://ip_address_of_the_webserver:8080/meta-data`。这只是一个 GET 请求 —— Cloud-init 并没有向 Web 服务器发送任何数据。它只是盲目地从数据源 URL 中请求文件,所以要由数据源来识别主机在请求什么。这个简单的例子只是向任何客户端发送通用数据,但更大的家庭实验室就需要更复杂的服务了。
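
你也可以在运行 Web 服务的主机上用 `curl` 模拟客户端的这个请求(示意命令,地址和端口以你自己的环境为准):

```
# 模拟 Cloud-init 客户端对元数据的 GET 请求
$ curl http://localhost:8080/meta-data
```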

在这里,Cloud-init 请求的是[实例元数据][10]信息。这个文件可以包含很多关于实例本身的信息,例如实例 ID、分配给实例的主机名、云 ID,甚至网络信息。

创建一个包含实例 ID 和主机名的基本元数据文件,并尝试将其提供给 Cloud-init 客户端。

首先,创建一个可复制到容器镜像中的 `meta-data` 文件:

```
instance-id: iid-local01
local-hostname: raspberry
hostname: raspberry
```

实例 ID(`instance-id`)可以是任何东西。但是,如果你在 Cloud-init 运行后更改实例 ID,并且文件被送达客户端,就会触发 Cloud-init 再次运行。你可以使用这种机制来更新实例配置,但你应该知道它的工作方式就是如此。

`local-hostname` 和 `hostname` 键正如其名,它们会在 Cloud-init 运行时为客户端设置主机名信息。

在容器文件中添加以下行以将 `meta-data` 文件复制到新镜像中:

```
# Copy the meta-data file into the image for Nginx to serve it
COPY meta-data ${WWW_DIR}/meta-data
```

现在,用元数据文件重建镜像(使用一个新的标签以方便故障排除),并用 Podman 创建并运行一个新的容器:

```
# Build a new image named cloud-init:02
podman build -f Containerfile -t cloud-init:02 .

# Run a new container with this new meta-data file
podman run --rm -p 8080:8080 -it cloud-init:02
```

新容器运行后,重启 Cloud-init 客户端,再次观察 Nginx 日志:

```
127.0.0.1 - - [09/May/2020:22:54:32 +0000] "GET /meta-data HTTP/1.1" 200 63 "-" "Cloud-Init/17.1" "-"
2020/05/09 22:54:32 [error] 2#0: *2 open() "/usr/share/nginx/html/user-data" failed (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET /user-data HTTP/1.1", host: "instance-data:8080"
127.0.0.1 - - [09/May/2020:22:54:32 +0000] "GET /user-data HTTP/1.1" 404 3650 "-" "Cloud-Init/17.1" "-"
```

你看,这次 `/meta-data` 路径被提供给了客户端。成功了!

然而,客户端接着在 `/user-data` 路径上寻找第二个文件。该文件包含实例所有者提供的配置数据,而不是来自云提供商的数据。对于一个家庭实验室来说,这两个都是你自己提供的。

你可以使用[许多 user-data 模块][11]来配置你的实例。对于这个例子,只需使用 `write_files` 模块在客户端创建一些测试文件,并验证 Cloud-init 是否工作。

创建一个包含以下内容的用户数据文件:

```
#cloud-config

# Create two files with example content using the write_files module
write_files:
  - content: |
      "Does cloud-init work?"
    owner: root:root
    permissions: '0644'
    path: /srv/foo
  - content: |
      "IT SURE DOES!"
    owner: root:root
    permissions: '0644'
    path: /srv/bar
```

除了使用 Cloud-init 提供的 `user-data` 模块制作 YAML 文件外,你还可以将其制作成一个可执行脚本供 Cloud-init 运行。

创建 `user-data` 文件后,在容器文件中添加以下行,以便在重建镜像时将其复制到镜像中:

```
# Copy the user-data file into the container image
COPY user-data ${WWW_DIR}/user-data
```

重建镜像,并创建和运行一个新的容器,这次使用用户数据信息:

```
# Build a new image named cloud-init:03
podman build -f Containerfile -t cloud-init:03 .

# Run a new container with this new user-data file
podman run --rm -p 8080:8080 -it cloud-init:03
```

现在,重启 Cloud-init 客户端,观察 Web 服务器上的 Nginx 日志:

```
127.0.0.1 - - [09/May/2020:23:01:51 +0000] "GET /meta-data HTTP/1.1" 200 63 "-" "Cloud-Init/17.1" "-"
127.0.0.1 - - [09/May/2020:23:01:51 +0000] "GET /user-data HTTP/1.1" 200 298 "-" "Cloud-Init/17.1" "-
```

成功了!这一次,元数据和用户数据文件都被送到了 Cloud-init 客户端。

### 验证 Cloud-init 已运行

从上面的日志中,你知道 Cloud-init 在客户端主机上运行并请求了元数据和用户数据文件,但它用它们做了什么?你可以验证 Cloud-init 是否写入了你在用户数据文件的 `write_files` 部分中添加的文件。

在 Cloud-init 客户端上,检查 `/srv/foo` 和 `/srv/bar` 文件的内容:

```
# cd /srv/ && ls
bar foo
# cat foo
"Does cloud-init work?"
# cat bar
"IT SURE DOES!"
```

成功了!文件已经写好了,并且有你期望的内容。

如上所述,还有很多其他模块可以用来配置主机。例如,用户数据文件可以配置成用 `apt` 添加包、复制 SSH 的 `authorized_keys`、创建用户和组、配置和运行配置管理工具等等。我在家里的私有云中使用它来复制我的 `authorized_keys`、创建一个本地用户和组,并设置 sudo 权限。
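
比如,下面是一段示意性的用户数据(其中的用户名和 SSH 公钥都是占位的假设值,仅用于说明这几个模块的大致写法,并非我实际使用的配置):

```
#cloud-config

# 创建一个带 sudo 权限的本地用户,并写入 SSH 公钥(均为占位值)
users:
  - name: homelab
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3...(替换为你的公钥) user@example.com
```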

### 你接下来可以做什么

Cloud-init 在家庭实验室中很有用,尤其是专注于云技术的实验室。本文所演示的简单服务对于家庭实验室来说可能并不是超级有用,但现在你已经知道 Cloud-init 是如何工作的了,你可以继续创建一个动态服务,可以用自定义数据配置每台主机,让家里的私有云更类似于主流云提供商提供的服务。

在数据源稍显复杂的情况下,将新的物理(或虚拟)机器添加到家中的私有云中,可以简单到只需把它们插上电源并开机即可。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/5/create-simple-cloud-init-service-your-homelab

作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
[2]: https://cloudinit.readthedocs.io/
[3]: https://linux.cn/article-12371-1.html
[4]: https://opensource.com/sites/default/files/uploads/cloud-init.jpg (A screen showing the boot process for a Linux server running Cloud-init )
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://linux.cn/article-12277-1.html
[7]: https://cloudinit.readthedocs.io/en/latest/topics/datasources.html
[8]: https://podman.io/
[9]: https://github.com/clcollins/homelabCloudInit/tree/master/simpleCloudInitService/data
[10]: https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html#what-is-instance-data
[11]: https://cloudinit.readthedocs.io/en/latest/topics/modules.html
published/20200628 entr- rerun your build when files change.md
@ -0,0 +1,109 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12403-1.html)
[#]: subject: (entr: rerun your build when files change)
[#]: via: (https://jvns.ca/blog/2020/06/28/entr/)
[#]: author: (Julia Evans https://jvns.ca/)

entr:文件更改时重新运行构建
======



这是一篇简短的文章。我是最近才发现 [entr][1] 的,我很惊奇从来没有人告诉过我?!因此,如果你和我一样没听说过它,那么我来告诉你它是什么。

[entr 的网站][1]上对它已经有很好的解释,也有很多示例。

一句话总结:`entr` 是一个命令行工具,每次你更改一组指定文件中的任何一个时,它都会运行一个任意命令。你通过标准输入给它传递要监控的文件列表,如下所示:

```
git ls-files | entr bash my-build-script.sh
```

或者

```
find . -name '*.rs' | entr cargo test
```

或者任何你希望的。

### 快速反馈很棒

就像世界上的每个程序员一样,我发现每次更改代码时都必须手动重新运行构建/测试非常烦人。

许多工具(例如 hugo 和 flask)都有一个内置的系统,可以在更改文件时自动重建,这很棒!

但是通常我会自己编写一些自定义的构建过程(例如 `bash build.sh`),而 `entr` 让我有了一种神奇的构建体验,我只用一行 bash 就能得到即时反馈,知道我的改变是否修复了那个奇怪的 bug。万岁!

### 重启服务器(entr -r)

但是如果你正在运行服务器,并且每次都需要重新启动服务器怎么办?如果你传递 `-r`,那么 `entr` 会帮你搞定:

```
git ls-files | entr -r python my-server.py
```

### 清除屏幕(entr -c)

另一个简洁的标志是 `-c`,它让你可以在重新运行命令之前清除屏幕,以免被前面构建的输出分散注意力。
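
这些标志也可以组合起来用,比如下面这个假设的例子,在每次重启服务之前先清屏:

```
git ls-files | entr -rc python my-server.py
```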

### 与 git ls-files 一起使用

通常,我要跟踪的文件集和我在 git 中的文件列表大致相同,因此将 `git ls-files` 传递给 `entr` 是很自然的事情。

我现在有一个项目,有时候我刚创建的文件还没有在 git 里。那么如果你想包含未被跟踪的文件怎么办呢?这些 `git` 命令行参数就可以做到(我是从一个读者的邮件中得到的,谢谢你!):

```
git ls-files -cdmo --exclude-standard | entr your-build-script
```

有人给我发了邮件,说他们做了一个 `git-entr` 命令,可以执行:

```
git ls-files -cdmo --exclude-standard | entr -d "$@"
```

我觉得这真是一个很棒的主意。

### 每次添加新文件时重启:entr -d

`git ls-files` 的另一个问题是有时候我添加一个新文件,当然它还没有在 git 中。`entr` 为此提供了一个很好的功能:如果你传递 `-d`,那么当你在 `entr` 跟踪的任何目录中添加新文件时,它就会退出。

我将它与一个 `while` 循环配合使用,它将重启 `entr` 来包括新文件,如下所示:

```
while true
do
    { git ls-files; git ls-files . --exclude-standard --others; } | entr -d your-build-script
done
```

### entr 在 Linux 上的工作方式:inotify

在 Linux 中,`entr` 使用 `inotify`(用于跟踪文件更改这样的文件系统事件的系统)工作。如果用 `strace` 跟踪它,那么你会看到每个监控文件的 `inotify_add_watch` 系统调用,如下所示:

```
inotify_add_watch(3, "static/stylesheets/screen.css", IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE_SELF|IN_MOVE_SELF) = 1152
```
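
如果想自己验证这一点,可以用类似下面的方式运行(示意命令,假设系统中装有 `strace`,文件名是任意的;按 `Ctrl+C` 退出):

```
# 跟踪 entr 注册 inotify 监视的系统调用
echo my-file | strace -e inotify_add_watch entr true
```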

### 就这样了

我希望这可以帮助一些人了解 `entr`!

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2020/06/28/entr/

作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: http://eradman.com/entrproject/
@ -1,20 +1,22 @@
 [#]: collector: (lujun9972)
 [#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12410-1.html)
 [#]: subject: (Painless file extraction on Linux)
 [#]: via: (https://www.networkworld.com/article/3564265/painless-file-extraction-on-linux.html)
 [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

-Linux 上无痛文件解压
+Linux 上无痛文件提取
 ======

-从 Linux 系统的存档中解压文件要比拔牙要麻烦得多,但有时看起来更麻烦。在本文中,我们将探讨如何轻松地从几乎任何可能在 Linux 系统中使用的存档中解压文件。
 

-它们有很多格式,从 .gz 到 .tbz2,这些文件的命名方式有一些变化。当然,你可以记住所有可用于从存档中解压文件的命令以及它们的选项,但是你也可以将所有经验保存到脚本中,而不必担心细节。
+从 Linux 系统的存档中提取文件没有拔牙那么痛苦,但有时看起来更复杂。在这篇文章中,我们将看看如何轻松地从 Linux 系统中可能遇到的几乎所有类型的存档中提取文件。

-在本文中,我们将一系列解压命令组合成一个脚本,它会调用适当的命令根据文档名解压文件的内容。该脚本以一些命令开头来验证文件名是否已经提供作为参数,或要求运行脚本的人提供文件名。
+它们有很多格式,从 .gz 到 .tbz2,这些文件的命名方式都各有一些不同。当然,你可以记住所有从存档中提取文件的各种命令以及它们的选项,但是你也可以将所有经验保存到脚本中,而不再担心细节。

+在本文中,我们将一系列提取命令组合成一个脚本,它会调用适当的命令根据文档名提取文件的内容。该脚本首先以一些命令来验证是否已经提供了一个文件名作为参数,或要求运行脚本的人提供文件名。

 ```
 #!/bin/bash
@ -34,7 +36,7 @@ fi

 了解了么?如果未提供任何参数,脚本将提示输入文件名,如果存在则使用它。然后,它验证文件是否实际存在。如果不是,那么脚本退出。

-下一步是使用 bash case 语句根据存档文件的名称为存档文件调用适当的解压命令。对于其中某些文件类型(例如 .bz2),也可以使用除 tar 之外的其他命令,但是对于每种文件命名约定,我们仅包含一个解压命令。因此,这是带有各种存档文件名的 case 语句。
+下一步是使用 bash 的 `case` 语句根据存档文件的名称调用适当的提取命令。对于其中某些文件类型(例如 .bz2),也可以使用除 `tar` 之外的其它命令,但是对于每种文件命名约定,我们仅包含一个提取命令。因此,这是带有各种存档文件名的 `case` 语句:

 ```
 case $filename in
@ -86,7 +88,7 @@ case $filename in
 *)
 ```
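
(上面的差异省略了 `case` 语句的主体。为了便于理解它的思路,下面给出一个大意相同的示意片段,其中的文件类型和选项是假设性的举例,并非原文脚本的完整内容:)

```
case $filename in
    *.tar.gz|*.tgz)   tar xzf "$filename" ;;
    *.tar.bz2|*.tbz2) tar xjf "$filename" ;;
    *.tar.xz)         tar xJf "$filename" ;;
    *.zip)            unzip "$filename" ;;
    *.gz)             gunzip "$filename" ;;
    *)                echo "未知的存档类型:$filename" ;;
esac
```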

-如果你希望脚本在解压文件时显示内容,请将详细选项(-v)添加到每个命令参数字符串中:
+如果你希望脚本在提取文件时显示内容,请将详细选项(`-v`)添加到每个命令参数字符串中:

 ```
 #!/bin/bash
@ -120,9 +122,7 @@ esac

 ### 总结

-虽然可以为每个可能用到的解压命令创建别名,但是让脚本为遇到的每种文件类型提供命令要比自己停下来编写每个命令和选项容易。
-
-加入 [Facebook][1] 和 [LinkedIn][2] 上的 Network World 社区,评论热门主题。
+虽然可以为每个可能用到的提取命令创建别名,但是让脚本为遇到的每种文件类型提供命令要比自己停下来编写每个命令和选项容易。

--------------------------------------------------------------------------------

@ -131,7 +131,7 @@ via: https://www.networkworld.com/article/3564265/painless-file-extraction-on-li
 作者:[Sandra Henry-Stocker][a]
 选题:[lujun9972][b]
 译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,215 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12404-1.html)
[#]: subject: (13 Things To Do After Installing Linux Mint 20)
[#]: via: (https://itsfoss.com/things-to-do-after-installing-linux-mint-20/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

安装 Linux Mint 20 后需要做的 13 件事
======



Linux Mint 毫无疑问是 [最佳 Linux 发行版][1] 之一,特别是考虑到 [Linux Mint 20][2] 的功能,我确信你也会同意这一说法。

如果你错过了我们的新闻报道:[Linux Mint 20 终于可以下载了][3]。

当然,如果你使用 Linux Mint 有一段时间了,你可能知道最好做一些什么。但是,对于新用户来说,在安装 Linux Mint 20 后,你需要做一些事,让你的体验比以往任何时候都好。

### 在安装 Linux Mint 20 后建议做的事

在这篇文章中,我将列出其中一些要做的事来帮助你改善 Linux Mint 20 的用户体验。

#### 1、执行一次系统更新

![][4]

安装后首先应该马上检查的是 —— 使用更新管理器进行系统更新,如上图所示。

为什么?因为你需要构建可用软件的本地缓存。更新所有的软件包也是一个好主意。

如果你喜欢使用终端,只需输入下面的命令来执行系统更新:

```
sudo apt update && sudo apt upgrade -y
```

#### 2、使用 Timeshift 来创建系统快照

![][5]

如果你想在意外更改或错误更新后快速地恢复系统状态,有一个系统快照总是很有用的。

因此,如果你希望能够随时备份你的系统状态,那么使用 Timeshift 配置和创建系统快照是超级重要的。

如果你还不知道如何使用它的话,你可以遵循我们 [使用 Timeshift][6] 的详细指南。
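
完成初始设置后,也可以直接从命令行创建快照,比如(示意命令,假设你已经通过图形界面完成了 Timeshift 的初始配置):

```
# 手动创建一个带注释的系统快照
sudo timeshift --create --comments "fresh install"
```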

#### 3、安装有用的软件

尽管 Linux Mint 20 中已经预装了一些有用的应用程序,你可能还需要安装一些系统没有自带的必不可少的应用程序。

你可以简单地使用软件管理器或 synaptic 软件包管理器来查找和安装所需要的软件(命令行安装的示例见下面的列表之后)。

对于初学者来说,如果你想探索各种各样的工具,那么你可以遵循我们的 [必不可少的 Linux 应用程序][7] 的列表。

也参见:

- [75 个最常用的 Linux 应用程序(2018 年)](https://linux.cn/article-10099-1.html)
- 100 个最佳 Ubuntu 应用([上](https://linux.cn/article-11044-1.html)、[中](https://linux.cn/article-11048-1.html)、[下](https://linux.cn/article-11057-1.html))

这里是一个我最喜欢的软件包列表,我希望你也来尝试一下:

* [VLC 多媒体播放器][8] 用于视频播放
* [FreeFileSync][9] 用来同步文件
* [Flameshot][10] 用于截图
* [Stacer][11] 用来优化和监控系统
* [ActivityWatch][12] 用来跟踪你的屏幕时间并保持高效
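
这些软件大多可以直接用 `apt` 从仓库安装,例如(示意命令,具体包名以 Mint 仓库中的实际名称为准):

```
sudo apt install vlc flameshot
```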

#### 4、自定义主题和图标

![][13]

当然,这在技术上不是必不可少的事,除非你想更改 Linux Mint 20 的外观和感觉。

但是,[在 Linux Mint 中更改主题和图标是非常容易的][14],而不需要安装任何额外的东西。

你会在欢迎屏幕中获得优化外观的选项。或者,你只需要进入 “主题”,并开始自定义主题。

![][15]

为此,你可以搜索它或在系统设置中如上图所示找到它。

根据你正在使用的桌面环境,你也可以看看可用的 [最佳图标主题][16]。

#### 5、启用 Redshift 来保护你的眼睛

![][17]

你可以在 Linux Mint 上搜索 “[Redshift][18]”,并启用它以在晚上保护你的眼睛。如你在上面的截图所见,它将根据时间自动地调整屏幕的色温。

你可能想启用自动启动选项,以便它在你重新启动计算机时自动启动。它可能与 [Ubuntu 20.04 LTS][19] 的夜光功能不太一样,但是如果你不需要自定义时间表或微调色温的能力,那么它就足够好了。

#### 6、启用 snap(如果需要的话)

尽管 Ubuntu 比以往任何时候都更加推崇使用 Snap,但是 Linux Mint 团队却反对使用它。因此,它禁止 APT 使用 snapd。

因此,你将无法获得开箱即用的 snap 支持。然而,你迟早会意识到一些软件包只以 Snap 的格式打包。在这种情况下,你将不得不在 Mint 上启用对 snap 的支持:

```
sudo apt install snapd
```

在你完成后,你可以遵循我们的指南来了解更多关于 [在 Linux 上安装和使用 snap][20] 的信息。

#### 7、学习使用 Flatpak

默认情况下,Linux Mint 带有对 Flatpak 的支持。所以,不管你是讨厌使用 snap 还是更喜欢使用 Flatpak,在系统中保留它都是个好主意。

现在,你只需要遵循我们关于 [在 Linux 上使用 Flatpak][21] 的指南来开始吧!
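
比如,可以从 Flathub 安装应用(示意命令,假设系统已经配置了 Flathub 仓库;应用 ID 以 Flathub 上的实际名称为准):

```
flatpak install flathub org.videolan.VLC
```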

#### 8、清理或优化你的系统

优化或清理系统总是好的,可以避免不必要的垃圾文件占用存储空间。

你可以通过在终端中输入以下内容,快速删除系统中不需要的软件包:

```
sudo apt autoremove
```

除此之外,你也可以遵循我们 [在 Linux Mint 上释放空间的一些建议][22]。

#### 9、使用 Warpinator 通过网络发送/接收文件

Warpinator 是 Linux Mint 20 的一个新功能,可以让你在连接到同一网络的多台电脑之间共享文件。它看起来是这样的:

![][23]

你只需要在菜单中搜索它就可以开始了!

#### 10、使用驱动程序管理器

![驱动程序管理器][24]

如果你使用的 Wi-Fi 设备、NVIDIA 显卡或 AMD 显卡等设备(如果有的话)需要专有驱动程序,那么驱动程序管理器就是一个值得关注的重要地方。

你只需要找到并启动驱动程序管理器。它应该可以检测到正在使用的任何专有驱动程序,你也可以利用 DVD 通过驱动程序管理器来安装驱动程序。

#### 11、设置防火墙

![][25]

在大多数情况下,你可能已经保护了你的家庭网络。但是,如果你想在 Linux Mint 上进行一些特殊的防火墙设置,你可以通过在菜单中搜索 “Firewall” 来实现。

正如你在上述截图中所看到的,你可以为家庭、企业和公共网络设置不同的配置文件。你只需要添加规则,并定义允许和不允许访问互联网的内容。

你可以阅读我们关于 [使用 UFW 配置防火墙][26] 的详细指南。
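
如果你更喜欢命令行,也可以直接使用 `ufw`(示意命令,规则仅作举例):

```
# 启用防火墙并放行 SSH
sudo ufw enable
sudo ufw allow ssh
```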

#### 12、学习管理启动应用程序

如果你是一个有经验的用户,你可能已经知道这一点了。但是,新用户经常忘记管理他们的启动应用程序,最终影响了系统启动时间。

你只需要从菜单中搜索 “Startup Applications”,启动它之后,你会看到类似这样的内容:

![][27]

你可以简单地关闭那些你想要禁用的启动应用程序,为其添加一个延迟计时器,或将其从启动应用程序的列表中完全移除。

#### 13、安装必不可少的游戏应用程序

当然,如果你对游戏感兴趣,你可能想去阅读我们关于 [在 Linux 上的游戏][28] 的文章来探索所有的选择。

但是,对于初学者来说,你可以尝试安装 [GameHub][29]、[Steam][30] 和 [Lutris][31] 来玩一些游戏。

### 总结

就是这样,各位!在大多数情况下,如果你在安装 Linux Mint 20 后按照上面的要点进行操作,应该就能让它发挥出最好的效果。

我确信你还能够做更多的事。我想知道你在安装 Linux Mint 20 后最喜欢马上做什么。在下面的评论中告诉我你的想法吧!

--------------------------------------------------------------------------------

via: https://itsfoss.com/things-to-do-after-installing-linux-mint-20/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-distributions/
[2]: https://linux.cn/article-12297-1.html
[3]: https://linux.cn/article-12376-1.html
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-system-update.png?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/07/snapshot-linux-mint-timeshift.jpeg?ssl=1
[6]: https://linux.cn/article-11619-1.html
[7]: https://linux.cn/article-10165-1.html
[8]: https://www.videolan.org/vlc/
[9]: https://itsfoss.com/freefilesync/
[10]: https://itsfoss.com/flameshot/
[11]: https://itsfoss.com/optimize-ubuntu-stacer/
[12]: https://itsfoss.com/activitywatch/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-theme.png?ssl=1
[14]: https://itsfoss.com/install-icon-linux-mint/
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-system-settings.png?ssl=1
[16]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-redshift-1.png?ssl=1
[18]: https://itsfoss.com/install-redshift-linux-mint/
[19]: https://itsfoss.com/ubuntu-20-04-release-features/
[20]: https://itsfoss.com/install-snap-linux/
[21]: https://itsfoss.com/flatpak-guide/
[22]: https://itsfoss.com/free-up-space-ubuntu-linux/
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/mint-20-warpinator-1.png?ssl=1
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2013/12/Additional-Driver-Linux-Mint-16.png?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-firewall.png?ssl=1
[26]: https://itsfoss.com/set-up-firewall-gufw/
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-startup-applications.png?ssl=1
[28]: https://itsfoss.com/linux-gaming-guide/
[29]: https://itsfoss.com/gamehub/
[30]: https://store.steampowered.com
[31]: https://lutris.net
@ -1,28 +1,28 @@
 [#]: collector: (lujun9972)
 [#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12401-1.html)
 [#]: subject: (Ex-Solus Dev is Now Creating a Truly Modern Linux Distribution Called Serpent Linux)
 [#]: via: (https://itsfoss.com/serpent-os-announcement/)
 [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

-前 Solus 开发人员正创建一个名为 Serpent Linux 的真正现代的 Linux 发行版
+Solus Linux 创始人正在开发一个没有 GNU 的“真正现代”的 Linux 发行版
 ======

 曾经创建独立 Linux 发行版 Solus 的开发人员 [Ikey Doherty][1] 宣布了他的新项目:Serpent OS。

-[Serpent OS][2]是一个不想被归类为“轻便、用户友好、注重隐私的 Linux 桌面发行版”。
+[Serpent OS][2] 是一个**不想**被归类为“轻量级、用户友好、注重隐私的 Linux 桌面发行版”。

 相反,Serpent OS 具有“与主流产品不同的目标”。具体怎么样?请继续阅读。

-### Serpent OS:制作“真正现代的” Linux 发行版
+### Serpent OS:制作“真正现代”的 Linux 发行版

 ![][3]

-Serpent 采用发行优先,兼容靠后的方法。这使他们可以做出一些非常大胆的决定。
+Serpent 采用发行版优先,兼容靠后的方法。这使他们可以做出一些非常大胆的决定。

-Ikey 表示,这个项目不会对阻碍 Linux 的负面角色容忍。例如,不会容忍 NVIDIA 在其 GPU 上缺乏对 Wayland 加速支持的支持,并且 NVIDIA 专有驱动将加入发行版黑名单。
+Ikey 表示,这个项目不会对阻碍 Linux 的负面角色容忍。例如,不会容忍 NVIDIA 在其 GPU 上缺乏对 Wayland 加速的支持,并将 NVIDIA 专有驱动加入发行版黑名单。

 这是 Serpent Linux 项目的拟议计划(摘自[其网站][4]):

@ -30,36 +30,34 @@ Ikey 表示,这个项目不会对阻碍 Linux 的负面角色容忍。例如
 * 100% clang 构建(包括内核)
 * musl 作为 libc,依靠编译器优化而不是内联 asm
 * 使用 libc++ 而不是 libstdc++
-* LLVM的 binutils 变体(lld、as 等)
-* 混合源/二进制分发
-* 从 x86_64 通用基线转移到更新的 CPU,包括针对 Intel 和 AMD 的优化
+* LLVM 的 binutils 变体(lld、as 等)
+* 混合源代码/二进制分发
+* 从 x86\_64 通用基线转移到更新的 CPU,包括针对 Intel 和 AMD 的优化
 * 包管理器中基于功能的订阅(硬件/用户选择等)
-* 只支持 `UEFI`。不支持老式启动方式。
+* 只支持 UEFI。不支持老式启动方式
 * 完全开源,包括引导程序/重建脚本
-* 针对高工作负载进行了认真的优化。
+* 针对高工作负载进行了认真的优化
 * 第三方应用仅依赖于容器。没有兼容性修改
 * 仅支持 Wayland。将调查通过容器的 X11 兼容性
 * 完全无状态的管理工具和上游补丁

 Ikey 大胆地宣称 Serpent Linux 不是 Serpent GNU/Linux,因为它不再依赖于 GNU 工具链或运行时。

 Serpent OS 项目的开发将于 7 月底开始。没有确定最终稳定版本的时间表。

-### 要求过高?但是 Ikey 过去做过
+### 要求过高?但是 Ikey 过去做到了

 你可能会怀疑 Serpent OS 是否会出现,是否能够兑现其所作的所有承诺。

-但是 Ikey Doherty 过去已经做过。如果我没记错的话,他首先基于 Debian 创建了 SolusOS。他于 2013 年停止了基于 [Debian 的 SolusOS][5] 的开发,甚至它还没有进入 Beta 阶段。
+但是 Ikey Doherty 过去已经做到了。如果我没记错的话,他首先基于 Debian 创建了 SolusOS。他于 2013 年停止了基于 [Debian 的 SolusOS][5] 的开发,甚至它还没有进入 Beta 阶段。

-然后,他从头开始创建 [evolve OS][6],而不是使用其他发行版作为基础。由于某些命名版权问题,项目名称已更改为 Solus(是的,相同的旧名称)。[Ikey 在 2018 年退出了 Solus项目][7],其他开发人员现在负责该项目。
+然后,他从头开始创建 [evolve OS][6],而不是使用其他发行版作为基础。由于某些命名版权问题,项目名称已更改为 Solus(是的,相同的旧名称)。[Ikey 在 2018 年退出了 Solus 项目][7],其他开发人员现在负责该项目。

 Solus 是一个独立的 Linux 发行版,它为我们提供了漂亮的 Budgie 桌面环境。

 Ikey 过去做到了(当然,在其他开发人员的帮助下)。他现在也应该能够做到。

-**看好还是不看好?**
+### 看好还是不看好?

 你如何看待这个 Serpent Linux?你是否认为是时候让开发人员采取大胆的立场,并着眼于未来开发操作系统,而不是坚持过去?请分享你的观点。

@ -70,7 +68,7 @@ via: https://itsfoss.com/serpent-os-announcement/
 作者:[Abhishek Prakash][a]
 选题:[lujun9972][b]
 译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12408-1.html)
[#]: subject: (What you need to know about hash functions)
[#]: via: (https://opensource.com/article/20/7/hash-functions)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)

关于哈希(散列)函数你应该知道的东西
======

> 从输出的哈希值反推回输入,这从计算的角度是不可行的。



无论安全从业人员用计算机做什么,有一种工具对他们每个人都很有用:加密<ruby>哈希(散列)<rt>hash</rt></ruby>函数。这听起来很神秘、很专业,甚至可能有点乏味,但是,在这里,关于什么是哈希函数以及它们为什么对你很重要,我会作出一个简洁的解释。

加密哈希函数,比如 SHA-256 或者 MD5,接受一组二进制数据(通常是字节)作为输入,并且对每个可能的输入集给出一个<ruby>希望唯一<rt>hopefully unique</rt></ruby>的输出。对于任意模式的输入,给定的哈希函数的输出(“哈希值”)的长度都是一样的(对于 SHA-256,是 32 字节或者 256 比特,这从名字中就能看出来)。最重要的是:从输出的哈希值反推回输入,这从计算的角度是<ruby>不可行的<rt>implausible</rt></ruby>(密码学家讨厌 “<ruby>不可能<rt>impossible</rt></ruby>” 这个词)。这就是为什么它们有时候被称作<ruby>单向哈希函数<rt>one-way hash function</rt></ruby>。

但是哈希函数是用来做什么的呢?为什么“唯一”的属性如此重要?

### 唯一的输出

在描述哈希函数的输出时,“<ruby>希望唯一<rt>hopefully unique</rt></ruby>”这个短语是至关重要的,因为哈希函数就是用来呈现完全唯一的输出。比如,哈希函数可以用于验证 *你* 下载的文件副本的每一个字节是否和 *我* 下载的文件一样。你下载一个 Linux 的 ISO 文件或者从 Linux 的仓库中下载软件时,你会看到使用这个验证过程。没有了唯一性,这个技术就没用了,至少就通常的目的而言是这样的。

如果两个不同的输入产生了相同的输出,那么这样的哈希过程就称作“<ruby>碰撞<rt>collision</rt></ruby>”。事实上,MD5 算法已经被弃用,因为虽然可能性微乎其微,但它现在可以用市面上的硬件和软件系统找到碰撞。

另外一个重要的特性是,消息中的一个微小变化,甚至只是改变一个比特位,都可能会在输出中产生一个明显的变化(这就是“<ruby>雪崩效应<rt>avalanche effect</rt></ruby>”)。
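
你可以用本文稍后也会用到的 `shasum` 直观地感受一下雪崩效应(示意命令;第一个输出是 SHA-256 对 “abc” 的著名测试向量,第二个请自行运行观察):

```
# 两个只差一个字符的输入,输出的哈希值看起来毫无关联
$ printf 'abc' | shasum -a256
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  -
$ printf 'abd' | shasum -a256
```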

### 验证二进制数据

哈希函数的典型用途是当有人给你一段二进制数据,确保这些数据是你所期望的。无论是文本、可执行文件、视频、图像或者一个完整的数据库数据,在计算世界中,所有的数据都可以用二进制的形式进行描述,所以至少可以这么说,哈希是广泛适用的。直接比较二进制数据是非常缓慢的且计算量巨大,但是哈希函数在设计上非常快。给定两个大小为几 M 或者几 G 的文件,你可以事先生成它们的哈希值,然后在需要的时候再进行比较。

通常,对哈希值进行签名比对大型数据集本身进行签名更容易。这个特性太重要了,以至于密码学中对哈希值最常见的应用就是生成“数字”签名。

由于生成数据的哈希值很容易,所以通常不需要有两套数据。假设你想在你的电脑上运行一个可执行文件。但是在你运行之前,你需要检查这个文件就是你要的文件,没有被黑客篡改。你可以方便快捷地对文件生成哈希值,只要你有一个这个哈希值的副本,你就可以相当肯定这就是你想要的文件。

下面是一个简单的例子:

```
$ shasum -a256 ~/bin/fop
87227baf4e1e78f6499e4905e8640c1f36720ae5f2bd167de325fd0d4ebc791c  /home/bob/bin/fop
```

如果我知道 `fop` 这个可执行文件的 SHA-256 校验和,这是由供应商(这个例子中是 Apache 基金会)提供的:

```
87227baf4e1e78f6499e4905e8640c1f36720ae5f2bd167de325fd0d4ebc791c
```

然后我就可以确信,我驱动器上的这个可执行文件和 Apache 基金会网站上发布的文件是一模一样的。这就是哈希函数难以发生碰撞(或者至少是 *很难通过计算得到碰撞*)这个性质的重要之处。如果黑客能将真实文件用哈希值相同的文件轻易地进行替换,那么这个验证过程就毫无用处。

事实上,这些性质还有更技术性的名称,我上面所描述的将三个重要的属性混在了一起。更准确地说,这些技术名称是:

1. <ruby>抗原像性<rt>pre-image resistance</rt></ruby>:给定一个哈希值,即使知道用了什么哈希函数,也很难得到用于创建它的消息。
2. <ruby>抗次原像性<rt>second pre-image resistance</rt></ruby>:给定一个消息,很难找到另一个消息,使得这个消息可以产生相同的哈希值。
3. <ruby>抗碰撞性<rt>collision resistance</rt></ruby>:很难得到任意两个可以产生相同哈希值的消息。

*抗碰撞性* 和 *抗次原像性* 也许听上去是同样的性质,但它们具有细微而显著的不同。*抗次原像性* 说的是如果 *已经* 有了一个消息,你也很难得到另一个与之哈希值相匹配的消息。*抗碰撞性* 使你很难找到两个可以生成相同哈希值的消息,并且要在哈希函数中实现这一性质则更加困难。

让我回到黑客试图替换文件(可以通过哈希值进行校验)的场景。现在,要在“外面”使用加密哈希算法(除了使用那些在现实世界中由独角兽公司开发的完全无 Bug 且安全的实现之外),还有一些重要且困难的附加条件需要满足。认真的读者可能已经想到了其中一些,特别需要指出的是:

1. 你必须确保自己所拥有的哈希值副本也没有被篡改。
2. 你必须确保执行哈希算法的实体能够正确执行并报告了结果。
3. 你必须确保对比两个哈希值的实体确实报告了这个对比的正确结果。

确保你能满足这些条件绝对不是一件容易的事。这就是<ruby>可信平台模块<rt>Trusted Platform Modules</rt></ruby>(TPM)成为许多计算系统一部分的原因之一。它们扮演着信任的硬件基础,可以为验证重要二进制数据真实性的加密工具提供保证。TPM 对于现实中的系统来说是有用且重要的工具,我也打算将来写一篇关于 TPM 的文章。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/hash-functions

作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[Yufei-Yan](https://github.com/Yufei-Yan)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76
[2]: https://aliceevebob.com/2020/06/16/whats-a-hash-function/
@ -1,210 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (13 Things To Do After Installing Linux Mint 20)
|
||||
[#]: via: (https://itsfoss.com/things-to-do-after-installing-linux-mint-20/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
13 Things To Do After Installing Linux Mint 20
|
||||
======
|
||||
|
||||
Linux Mint is easily one of the [best Linux distributions][1] out there and especially considering the features of [Linux Mint 20][2], I’m sure you will agree with that.
|
||||
|
||||
In case you missed our coverage, [Linux Mint 20 is finally available to download][3].
|
||||
|
||||
Of course, if you’ve been using Linux Mint for a while, you probably know what’s best for you. But, for new users, there are a few things that you need to do after installing Linux Mint 20 to make your experience better than ever.
|
||||
|
||||
### Recommended things to do after installing Linux Mint 20
|
||||
|
||||
In this article, I’m going to list some of them for to help you improve your Linux Mint 20 experience.
|
||||
|
||||
#### 1\. Perform a System Update
|
||||
|
||||
![][4]
|
||||
|
||||
The first thing you should check right after installation is — system updates using the update manager as shown in the image above.
|
||||
|
||||
Why? Because you need to build the local cache of available software. It is also a good idea to update all the software updates.
|
||||
|
||||
If you prefer to use the terminal, simply type the following command to perform a system update:
|
||||
|
||||
```
|
||||
sudo apt update && sudo apt upgrade -y
|
||||
```
|
||||
|
||||
#### 2\. Use Timeshift to Create System Snapshots
|
||||
|
||||
![][5]
|
||||
|
||||
It’s always useful have system snapshots if you want to quickly restore your system state after an accidental change or maybe after a bad update.
|
||||
|
||||
Hence, it’s super important to configure and create system snapshots using Timeshift if you want the ability to have a backup of your system state from time to time.
|
||||
|
||||
You can follow our detailed guide on [using Timeshift][6], if you didn’t know already.
|
||||
|
||||
#### 3\. Install Useful Software
|
||||
|
||||
Even though you have a bunch of useful pre-installed applications on Linux Mint 20, you probably need to install some essential apps that do not come baked in.
|
||||
|
||||
You can simply utilize the software manager or the synaptic package manager to find and install software that you need.
|
||||
|
||||
For starters, you can follow our list of [essential Linux apps][7] if you want to explore a variety of tools.
|
||||
|
||||
Here’s a list of my favorite software that I’d want you to try:
|
||||
|
||||
* [VLC media player][8] for video
|
||||
* [FreeFileSync][9] to sync files
|
||||
* [Flameshot][10] for screenshots
|
||||
* [Stacer][11] to optimize and monitor system
|
||||
* [ActivityWatch][12] to track your screen time and stay productive
#### 4\. Customize the Themes and Icons

![][13]

Of course, this isn't something technically essential unless you want to change the look and feel of Linux Mint 20.

But it's very [easy to change the theme and icons in Linux Mint][14] 20 without installing anything extra.

You get the option to customize the look in the welcome screen itself. In either case, you just need to head to "**Themes**" and start customizing.

![][15]

To do that, you can search for it or find it inside the System Settings, as shown in the screenshot above.

Depending on what desktop environment you are on, you can also take a look at some of the [best icon themes][16] available.

#### 5\. Enable Redshift to protect your eyes

![][17]

You can search for "[Redshift][18]" on Linux Mint and launch it to start protecting your eyes at night. As you can see in the screenshot above, it automatically adjusts the color temperature of the screen depending on the time of day.

You may want to enable the autostart option so that it launches automatically when you restart the computer. It may not be the same as the night light feature on [Ubuntu 20.04 LTS][19], but it's good enough if you don't need custom schedules or the ability to tweak the color temperature.
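
If you just want to try the effect from the terminal first, Redshift also has a one-shot mode (4000K below is an arbitrary example value):

```
# One-shot mode: set the screen to a warmer 4000K right away
redshift -O 4000
# Reset the screen color back to normal
redshift -x
```
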
#### 6\. Enable snap (if needed)

Even though Ubuntu is pushing Snap more than ever, the Linux Mint team is against it. Hence, it forbids APT from installing snapd.

So, you won't have support for snap out-of-the-box. However, sooner or later, you'll realize that some software is packaged only in Snap format. In such cases, you'll have to enable snap support on Mint by removing the APT preference file that blocks snapd and then installing it:

```
# Mint 20 blocks snapd with an APT preference file; remove it first
sudo rm /etc/apt/preferences.d/nosnap.pref
sudo apt update
sudo apt install snapd
```

Once you do that, you can follow our guide to learn more about [installing and using snaps on Linux][20].

#### 7\. Learn to use Flatpak

By default, Linux Mint comes with support for Flatpak. So, no matter whether you hate using snap or simply prefer to use Flatpak, it's good to have it baked in.

Now, all you have to do is follow our guide on [using Flatpak on Linux][21] to get started!
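
As a minimal sketch, installing something from Flathub looks like this (the application ID below is just an example):

```
# Ensure the Flathub remote is configured (Mint usually ships with it)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# Install an application by its Flatpak ID
flatpak install flathub com.spotify.Client
```
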
#### 8\. Clean or Optimize Your System

It's always good to optimize or clean up your system to get rid of unnecessary junk files occupying storage space.

You can quickly remove unwanted packages from your system by typing this in your terminal:

```
sudo apt autoremove
```
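
You can also clear APT's cache of already-downloaded package files, which is safe to delete:

```
# Remove cached .deb files from /var/cache/apt/archives
sudo apt clean
```
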
In addition to this, you can also follow some of our [tips to free up space on Linux Mint][22].

#### 9\. Use Warpinator to Send/Receive Files Across the Network

Warpinator is a new addition to Linux Mint 20 that gives you the ability to share files across multiple computers connected to a network. Here's how it looks:

![][23]

You can just search for it in the menu and get started!

#### 10\. Use the Driver Manager

![Driver Manager][24]

The Driver Manager is an important place to look if you're using Wi-Fi devices that need a proprietary driver, NVIDIA or AMD graphics, or other hardware that requires additional drivers.

You just need to look for the Driver Manager and launch it. It should detect any proprietary drivers in use, or you can also utilize a DVD to install drivers through it.

#### 11\. Set up a Firewall

![][25]

For the most part, you might have already secured your home connection. But if you want some specific firewall settings on Linux Mint, you can do that by searching for "Firewall" in the menu.

As you can observe in the screenshot above, you get the ability to have different profiles for home, business, and public networks. You just need to add the rules and define what is and isn't allowed to access the Internet.

You may read our detailed guide on [using UFW for configuring a firewall][26].
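
The graphical tool here (Gufw) is a frontend to `ufw`, so the same rules can also be managed from the terminal. A minimal sketch:

```
# Turn the firewall on and inspect the active rules
sudo ufw enable
sudo ufw status verbose
# Example rule: allow incoming SSH connections
sudo ufw allow ssh
```
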
#### 12\. Learn to Manage Startup Apps

If you're an experienced user, you probably know this already. But new users often forget to manage their startup applications, and eventually the system boot time suffers.

You just need to search for "**Startup Applications**" in the menu, and when you launch it, you'll find something like this:

![][27]

You can simply toggle off the ones that you want to disable, add a delay timer, or remove them completely from the list of startup applications.

#### 13\. Install Essential Apps For Gaming

Of course, if you're into gaming, you might want to read our article on [Gaming on Linux][28] to explore all the options.

But for starters, you can try installing [GameHub][29], [Steam][30], and [Lutris][31] to play some games.

**Wrapping Up**

That's it, folks! For the most part, you should be good to go if you follow the points above after installing Linux Mint 20.

I'm sure there are more things you can do. I'd like to know what you prefer to do right after installing Linux Mint 20. Let me know your thoughts in the comments below!

--------------------------------------------------------------------------------

via: https://itsfoss.com/things-to-do-after-installing-linux-mint-20/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-distributions/
[2]: https://itsfoss.com/linux-mint-20/
[3]: https://itsfoss.com/linux-mint-20-download/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-system-update.png?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/07/snapshot-linux-mint-timeshift.jpeg?ssl=1
[6]: https://itsfoss.com/backup-restore-linux-timeshift/
[7]: https://itsfoss.com/essential-linux-applications/
[8]: https://www.videolan.org/vlc/
[9]: https://itsfoss.com/freefilesync/
[10]: https://itsfoss.com/flameshot/
[11]: https://itsfoss.com/optimize-ubuntu-stacer/
[12]: https://itsfoss.com/activitywatch/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-theme.png?ssl=1
[14]: https://itsfoss.com/install-icon-linux-mint/
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-system-settings.png?ssl=1
[16]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-redshift-1.png?ssl=1
[18]: https://itsfoss.com/install-redshift-linux-mint/
[19]: https://itsfoss.com/ubuntu-20-04-release-features/
[20]: https://itsfoss.com/install-snap-linux/
[21]: https://itsfoss.com/flatpak-guide/
[22]: https://itsfoss.com/free-up-space-ubuntu-linux/
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/mint-20-warpinator-1.png?ssl=1
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2013/12/Additional-Driver-Linux-Mint-16.png?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-firewall.png?ssl=1
[26]: https://itsfoss.com/set-up-firewall-gufw/
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-startup-applications.png?ssl=1
[28]: https://itsfoss.com/linux-gaming-guide/
[29]: https://itsfoss.com/gamehub/
[30]: https://store.steampowered.com
[31]: https://lutris.net
@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What you need to know about hash functions)
[#]: via: (https://opensource.com/article/20/7/hash-functions)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)

What you need to know about hash functions
======
It should be computationally implausible to work backwards from the output hash to the input.
![City with numbers overlay][1]

There is a tool in the security practitioner's repertoire that's helpful for everyone to understand, regardless of what they do with computers: cryptographic hash functions. That may sound mysterious, technical, and maybe even boring, but I have a concise explanation of what hashes are and why they matter to you.

A cryptographic hash function, such as SHA-256 or MD5, takes as input a set of binary data (typically as bytes) and gives output that is hopefully unique for each set of possible inputs. The length of the output—"the hash"—for any particular hash function is typically the same for any pattern of inputs (for SHA-256, it is 32 bytes or 256 bits—the clue's in the name). The important thing is this: It should be computationally implausible (cryptographers hate the word _impossible_) to work backward from the output hash to the input. This is why they are sometimes referred to as one-way hash functions.

But what are hash functions used for? And why is the property of being unique so important?

### Unique output

The phrase "hopefully unique" when describing the output of a hash function is vital because hash functions are used to render wholly unique output. Hash functions, for example, are used as a way to verify that the copy of a file _you_ downloaded is a byte-for-byte duplicate of the file _I_ downloaded. You'll see this verification process at work when you download a Linux ISO, or software from a Linux repository. Without uniqueness, the technology is rendered useless, at least for the purpose you generally have for it.

Should two inputs yield the same output, the hash is said to have a "collision." In fact, MD5 has become deprecated because it is now trivially possible to find collisions with commercially available hardware and software systems.

Another important property is that a tiny change in a message, even changing a single bit, is expected to generate a noticeable change to the output (this is the "avalanche effect").
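
You can see the avalanche effect from the command line. The first digest below is the well-known SHA-256 of the string "abc"; changing a single character produces a digest that shares no visible pattern with it:

```
$ echo -n 'abc' | shasum -a256
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  -
$ echo -n 'abd' | shasum -a256
# (output elided; it bears no resemblance to the digest above)
```
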
### Verifying binary data

The typical use for hash functions is to ensure that when someone hands you a piece of binary data, it is what you expect. All data in the world of computing can be described in binary format, whether it is text, an executable, a video, an image, or a complete database of data, so hashes are broadly applicable, to say the least. Comparing binary data directly is slow and arduous computationally, but hash functions are designed to be very quick. Given two files several megabytes or gigabytes in size, you can produce hashes of them ahead of time and defer the comparisons to when you need them.

It's also generally easier to digitally sign hashes of data rather than large sets of data themselves. This is such an important feature that one of the most common uses of hashes in cryptography is to generate "digital" signatures.

Given the fact that it is easy to produce hashes of data, there's often no need to have both sets of data. Let's say you want to run an executable file on your computer. Before you do, though, you want to check that it really is the file you think it is and that no malicious actor has tampered with it. You can hash that file very quickly and easily, and as long as you have a copy of what the hash should look like, you can be fairly certain you have the file you want.

Here's a simple example:

```
$ shasum -a256 ~/bin/fop
87227baf4e1e78f6499e4905e8640c1f36720ae5f2bd167de325fd0d4ebc791c  /home/bob/bin/fop
```

If I know that the SHA-256 sum of the `fop` executable, as delivered by its vendor (the Apache Foundation, in this case) is:

```
87227baf4e1e78f6499e4905e8640c1f36720ae5f2bd167de325fd0d4ebc791c
```

then I can be confident that the executable on my drive is indeed the same executable that the Apache Foundation distributes on its website. This is where the lack of collisions (or at least the _difficulty in computing collisions_) property of hash functions is so important. If a malicious actor can craft a _replacement_ file that shares the same hash as the real file, then the process of verification is essentially useless.
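
In practice, this comparison is easy to automate: `sha256sum --check` reads lines of the form "digest  filename" and verifies each one. A minimal sketch using the hash above:

```
$ echo "87227baf4e1e78f6499e4905e8640c1f36720ae5f2bd167de325fd0d4ebc791c  /home/bob/bin/fop" > fop.sha256
$ sha256sum --check fop.sha256
/home/bob/bin/fop: OK
```
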
In fact, there are more technical names for the various properties, and what I've described above mashes three important ones together. More accurately, those technical names are:

  1. **Pre-image resistance:** Given a hash, it should be difficult to find the message from which it was created, even if you know the hash function used.
  2. **Second pre-image resistance:** Given a message, it should be difficult to find another message that, when hashed, generates the same hash.
  3. **Collision resistance:** It should be difficult to find any two messages that generate the same hash.

_Collision resistance_ and _second pre-image resistance_ may sound like the same property, but they're subtly (and significantly) different. Second pre-image resistance says that if you _already_ have a message, you will not be able to find another message with a matching hash. Collision resistance makes it hard for you to invent two messages that will generate the same hash and is a much harder property to fulfill in a hash function.

Allow me to return to the scenario of a malicious actor attempting to exchange a file (with a hash, which you can check) with another one. Now, to use cryptographic hashes "in the wild"—out there in the real world, beyond the perfectly secure, bug-free implementations populated by unicorns and overflowing with fat-free doughnuts—there are some important and difficult provisos that need to be met. Very paranoid readers may already have spotted some of them; in particular:

  1. You must have assurances that the copy of the hash you have has also not been subject to tampering.
  2. You must have assurances that the entity performing the hash performs and reports it correctly.
  3. You must have assurances that the entity comparing the two hashes does indeed report the result of that comparison correctly.

Ensuring that you can meet such assurances is not necessarily an easy task. This is one of the reasons Trusted Platform Modules (TPMs) are part of many computing systems. They act as a hardware root of trust with capabilities to provide assurances about the cryptography tools verifying the authenticity of important binary data. TPMs are a useful and important tool for real-world systems, and I plan to write an article about them in the future.

* * *

_This article was originally published on [Alice, Eve, and Bob][2] and is adapted and reprinted with the author's permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/hash-functions

作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76 (City with numbers overlay)
[2]: https://aliceevebob.com/2020/06/16/whats-a-hash-function/
@ -0,0 +1,602 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Expand your Raspberry Pi with Arduino ports)
[#]: via: (https://opensource.com/article/20/7/arduino-raspberry-pi)
[#]: author: (Patrick Martins de Lima https://opensource.com/users/pattrickx)

Expand your Raspberry Pi with Arduino ports
======
For this project, explore Raspberry Pi port expansions using Java, serial, and Arduino.
![Parts, modules, containers for software][1]

As members of the maker community, we are always looking for creative ways to use hardware and software. This time, [Patrick Lima][2] and I decided we wanted to expand the Raspberry Pi's ports using an Arduino board, so we could access more functionality and ports and add a layer of protection to the device. There are a lot of ways to use this setup, such as building a solar panel that follows the sun, a home weather station, joystick interaction, and more.

We decided to start by building a dashboard that allows the following serial port interactions:

  * Control three LEDs to turn them on and off
  * Control three LEDs to adjust their light intensity
  * Identify which ports are being used
  * Show input movements on a joystick
  * Measure temperature

We also want to show all the interactions between ports, hardware, and sensors in a nice user interface (UI) like this:

![UI dashboard][3]

(Bruno Muniz, [CC BY-SA 4.0][4])

You can use the concepts in this article to build many different projects that use many different components. Your imagination is the limit!

### 1\. Get started

![Raspberry Pi and Arduino logos][5]

(Bruno Muniz, [CC BY-SA 4.0][4])

The first step is to expand the Raspberry Pi's ports to also use Arduino ports. This is possible using Linux ARM's native serial communication implementation, which enables you to use an Arduino's digital, analog, and pulse-width modulation (PWM) ports to run an application on the Raspberry Pi.

This project uses [TotalCross][6], an open source software development kit for building UIs for embedded devices, to execute external applications through the terminal and use the native serial communication. There are two classes you can use to achieve this: [Runtime.exec][7] and [PortConnector][8]. They represent different ways to execute these actions, so we will show how to use both in this tutorial, and you can decide which way is best for you.

To start this project, you need:

  * 1 Raspberry Pi 3
  * 1 Arduino Uno
  * 3 LEDs
  * 2 resistors between 1K and 2.2K ohms
  * 1 push button
  * 1 potentiometer between 1K and 50K ohms
  * 1 protoboard (aka breadboard)
  * Jumpers

### 2\. Set up the Arduino

Create a communication protocol between the Raspberry Pi and the Arduino: the Arduino receives a message, processes it, executes the request, and sends a response. This is done on the Arduino.

#### 2.1 Define the message format

Every message received will have the following format:

  * Indication of the function called
  * Port used
  * A char separator, if needed
  * A value to be sent, if needed
  * Indication of the message's end

The following table presents the list of characters with their respective functions, example values, and descriptions of the examples. The choice of characters used here is arbitrary and can be changed anytime.

Characters | Function | Example | Description of the example
---|---|---|---
* | End of the instruction | - | -
, | Separator | - | -
# | Set pin mode | #8,0* | Set pin 8 to input mode
< | Set digital value | <1,0* | Set pin 1 low
> | Read digital value | >13* | Read the value of pin 13
+ | Set PWM value | +6,250* | Set pin 6 to value 250
- | Read analog value | -14* | Read the value of pin A0 (pin 14)
#### 2.2 Source code

The following source code implements the communication protocol specified above. It must be uploaded to the Arduino so that it can interpret and execute the messages' commands:

```
void setup() {
  Serial.begin(9600);
  Serial.println("Connected");
  Serial.println("Waiting command...");
}

void loop() {
  String text = "";
  char character;
  String pin = "";
  String value = "0";
  char separator = '.';
  char inst = '.';

  while (Serial.available()) { // verify RX is getting data
    delay(10);
    character = Serial.read();

    if (character == '*') { // end of the instruction: execute it
      action(inst, pin, value);
      break;
    } else {
      text.concat(character);
    }

    if (character == ',') { // separator between pin and value
      separator = character;
    }

    if (inst == '.') { // the first character is the instruction code
      inst = character;
    } else if (separator != ',' && character != inst) {
      pin.concat(character); // before the separator: reading the pin
    } else if (character != separator && character != inst) {
      value.concat(character); // after the separator: reading the value
    }
  }
}

void action(char instruction, String pin, String value) {
  if (instruction == '#') { // pinMode
    pinMode(pin.toInt(), value.toInt());
  }

  if (instruction == '<') { // digitalWrite
    digitalWrite(pin.toInt(), value.toInt());
  }

  if (instruction == '>') { // digitalRead
    String aux = pin + ':' + String(digitalRead(pin.toInt()));
    Serial.println(aux);
  }

  if (instruction == '+') { // analogWrite = PWM
    analogWrite(pin.toInt(), value.toInt());
  }

  if (instruction == '-') { // analogRead
    String aux = pin + ':' + String(analogRead(pin.toInt()));
    Serial.println(aux);
  }
}
```
#### 2.3 Build the electronics

Define what you need to test to check communication with the Arduino and ensure the inputs and outputs are responding as expected:

  * The LEDs are connected with positive logic: each connects to the GND pin through a resistor and is activated by digital I/O port 2 and PWM port 3.
  * The button has a pull-down resistor connected to digital I/O port 4, which reads 0 when the button is released and 1 when it is pressed.
  * The potentiometer's center pin is connected to analog input A0, with one of the side pins on the positive rail and the other on the negative.

![Connecting the hardware][9]

(Bruno Muniz, [CC BY-SA 4.0][4])

#### 2.4 Test communications

Send the code in section 2.2 to the Arduino. Open the serial monitor and check the communication protocol by sending the commands below:

```
#2,1*<2,1*>2*
#3,1*+3,10*
#4,0*>4*
#14,0*-14*
```

These set pin 2 as an output and turn it on, set pin 3 as a PWM output at low intensity, and configure and read the digital input on pin 4 and the analog input on pin 14 (A0).

This should be the result in the serial monitor:

![Testing communications in Arduino][10]

(Bruno Muniz, [CC BY-SA 4.0][4])

One LED on the device should be on at maximum intensity and the other at a lower intensity.

![LEDs lit on board][11]

(Bruno Muniz, [CC BY-SA 4.0][4])

Pressing the button and changing the position of the potentiometer while sending the reading commands will display different values. For example, turn the potentiometer to the positive side and press the button. With the button still pressed, send the commands:

```
>4*
-14*
```

Two lines should appear:

![Testing communications in Arduino][12]

(Bruno Muniz, [CC BY-SA 4.0][4])

### 3\. Set up the Raspberry Pi

Use the Raspberry Pi to access the serial port from the terminal, with the `cat` command to read incoming data and the `echo` command to send messages.

#### 3.1 Do a serial test

Connect the Arduino to one of the USB ports on the Raspberry Pi, open the terminal, and execute this command:

```
cat /dev/ttyUSB0 9600
```

This will initiate the connection with the Arduino and display whatever is returned to the serial port.

![Testing serial on Arduino][13]

(Bruno Muniz, [CC BY-SA 4.0][4])
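
If nothing shows up, the port may not be configured for the Arduino's 9600 baud. On most Linux systems, you can set the speed explicitly with `stty` before reading (a sketch, assuming the Arduino enumerates as `/dev/ttyUSB0`):

```
# Configure the serial port for 9600 baud, raw mode, no local echo
stty -F /dev/ttyUSB0 9600 raw -echo
cat /dev/ttyUSB0
```
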
To test sending commands, open a new terminal window (keeping the previous one open), and send this command:

```
echo "command" > /dev/ttyUSB0 9600
```

You can send the same commands used in section 2.4.

You should see feedback in the first terminal along with the same result you got in section 2.4:

![Testing serial on Arduino][14]

(Bruno Muniz, [CC BY-SA 4.0][4])

### 4\. Create the graphical user interface

The UI for this project will be simple, as the objective is just to show the port expansion over the serial connection. Another article will use TotalCross to create a high-quality GUI for this project and start the application backend (working with sensors), as shown in the dashboard image at the top of this article.

This first part uses two UI components: a ListBox and an Edit. These build a connection between the Raspberry Pi and the Arduino and test that everything is working as expected.

Simulate the terminal where you enter the commands and watch for answers:

  * Edit is used to send messages. Place it at the bottom with a FILL width that extends the component to the entire width of the screen.
  * ListBox is used to show results, as in a terminal. Add it at the TOP position, starting at the LEFT side, with a width equal to Edit's and a FIT height to vertically occupy all the space not filled by Edit.

```
package com.totalcross.sample.serial;

import totalcross.sys.Settings;
import totalcross.ui.Edit;
import totalcross.ui.ListBox;
import totalcross.ui.MainWindow;
import totalcross.ui.gfx.Color;

public class SerialSample extends MainWindow {
    ListBox Output;
    Edit Input;

    public SerialSample() {
        setUIStyle(Settings.MATERIAL_UI);
    }

    @Override
    public void initUI() {
        Input = new Edit();
        add(Input, LEFT, BOTTOM, FILL, PREFERRED);
        Output = new ListBox();
        Output.setBackForeColors(Color.BLACK, Color.WHITE);
        add(Output, LEFT, TOP, FILL, FIT);
    }
}
```

It should look like this:

![UI][16]

(Bruno Muniz, [CC BY-SA 4.0][4])
### 5\. Set up serial communication

As stated above, there are two ways to set up serial communication: Runtime.exec and PortConnector.

#### 5.1 Option 1: Use Runtime.exec

The `java.lang.Runtime` class allows the application to create a connection interface with the environment where it is running. It allows the program to use the Raspberry Pi's native serial communication.

Use the same commands you used in section 3.1, but now use the Edit component on the UI to send the commands to the device.

##### Read the serial

The application must constantly read the serial port and, if a value is returned, add it to the ListBox, using threads. Threads are a great way to work with processes in the background without blocking user interaction.

The following code creates a new thread that executes the `cat` command and starts an infinite loop to check whether something new is received. If something is received, the value is added to the next line of the ListBox component. This process will continue to run as long as the application is running:

```
new Thread() {
    @Override
    public void run() {
        try {
            // Read the serial port through the same cat command used in section 3.1
            Process Runexec2 = Runtime.getRuntime().exec("cat /dev/ttyUSB0 9600\n");
            LineReader lineReader = new LineReader(Stream.asStream(Runexec2.getInputStream()));
            String input;

            while (true) {
                if ((input = lineReader.readLine()) != null) {
                    Output.add(input);
                    Output.selectLast();
                    Output.repaintNow();
                }
            }
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }
}.start();
```
##### Send commands

Sending commands is a simpler process. It happens whenever you press **Enter** on the Edit component.

To forward the commands to the device, as shown in section 3.1, you must instantiate a new terminal. For that, the Runtime class must execute an `sh` command on Linux:

```
try {
    Runexec = Runtime.getRuntime().exec("sh").getOutputStream();
} catch (IOException ioe) {
    ioe.printStackTrace();
}
```

After the user writes the command in Edit and presses **Enter**, the application triggers an event that executes the `echo` command with the value indicated in Edit:

```
Input.addKeyListener(new KeyListener() {

    @Override
    public void specialkeyPressed(KeyEvent e) {
        if (e.key == SpecialKeys.ENTER) {
            String s = Input.getText();
            Input.clear();
            try {
                // Forward the typed command to the serial port via the shell
                Runexec.write(("echo \"" + s + "\" > /dev/ttyUSB0 9600\n").getBytes());
            } catch (IOException ioe) {
                ioe.printStackTrace();
            }
        }
    }

    @Override
    public void keyPressed(KeyEvent e) {} // auto-generated stub

    @Override
    public void actionkeyPressed(KeyEvent e) {} // auto-generated stub
});
```
Run the application on the Raspberry Pi with the Arduino connected and send the commands for testing. The result should be:

![Testing application running on Raspberry Pi][24]

(Bruno Muniz, [CC BY-SA 4.0][4])

##### Runtime.exec source code

Following is the full source code with all the parts explained above: the thread that reads the serial port and the `KeyListener` that sends the commands:

```
package com.totalcross.sample.serial;

import totalcross.ui.MainWindow;
import totalcross.ui.event.KeyEvent;
import totalcross.ui.event.KeyListener;
import totalcross.ui.gfx.Color;
import totalcross.ui.Edit;
import totalcross.ui.ListBox;
import java.io.IOException;
import java.io.OutputStream;
import totalcross.io.LineReader;
import totalcross.io.Stream;
import totalcross.sys.Settings;
import totalcross.sys.SpecialKeys;

public class SerialSample extends MainWindow {
    OutputStream Runexec;
    ListBox Output;

    public SerialSample() {
        setUIStyle(Settings.MATERIAL_UI);
    }

    @Override
    public void initUI() {
        Edit Input = new Edit();
        add(Input, LEFT, BOTTOM, FILL, PREFERRED);
        Output = new ListBox();
        Output.setBackForeColors(Color.BLACK, Color.WHITE);
        add(Output, LEFT, TOP, FILL, FIT);

        // Background thread: read the serial port and append each line to the ListBox
        new Thread() {
            @Override
            public void run() {
                try {
                    Process Runexec2 = Runtime.getRuntime().exec("cat /dev/ttyUSB0 9600\n");
                    LineReader lineReader = new LineReader(Stream.asStream(Runexec2.getInputStream()));
                    String input;

                    while (true) {
                        if ((input = lineReader.readLine()) != null) {
                            Output.add(input);
                            Output.selectLast();
                            Output.repaintNow();
                        }
                    }
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }
        }.start();

        // Shell used to forward commands to the serial port
        try {
            Runexec = Runtime.getRuntime().exec("sh").getOutputStream();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }

        // Send the Edit's content whenever the user presses Enter
        Input.addKeyListener(new KeyListener() {
            @Override
            public void specialkeyPressed(KeyEvent e) {
                if (e.key == SpecialKeys.ENTER) {
                    String s = Input.getText();
                    Input.clear();
                    try {
                        Runexec.write(("echo \"" + s + "\" > /dev/ttyUSB0 9600\n").getBytes());
                    } catch (IOException ioe) {
                        ioe.printStackTrace();
                    }
                }
            }

            @Override
            public void keyPressed(KeyEvent e) {
            }

            @Override
            public void actionkeyPressed(KeyEvent e) {
            }
        });
    }
}
```
#### 5.2 Option 2: Use PortConnector

PortConnector is made specifically for working with serial communication. If you want to follow the original example, you can skip this section, as the intention here is to show another, easier way to work with serial.

Change the original source code to work with PortConnector:

```
package com.totalcross.sample.serial;

import totalcross.io.LineReader;
import totalcross.io.device.PortConnector;
import totalcross.sys.Settings;
import totalcross.sys.SpecialKeys;
import totalcross.ui.Edit;
import totalcross.ui.ListBox;
import totalcross.ui.MainWindow;
import totalcross.ui.event.KeyEvent;
import totalcross.ui.event.KeyListener;
import totalcross.ui.gfx.Color;

public class SerialSample extends MainWindow {
    PortConnector pc;
    ListBox Output;

    public SerialSample() {
        setUIStyle(Settings.MATERIAL_UI);
    }

    @Override
    public void initUI() {
        Edit Input = new Edit();
        add(Input, LEFT, BOTTOM, FILL, PREFERRED);
        Output = new ListBox();
        Output.setBackForeColors(Color.BLACK, Color.WHITE);
        add(Output, LEFT, TOP, FILL, FIT);

        // Background thread: open the USB serial port at 9600 baud and read lines
        new Thread() {
            @Override
            public void run() {
                try {
                    pc = new PortConnector(PortConnector.USB, 9600);
                    LineReader lineReader = new LineReader(pc);
                    String input;
                    while (true) {
                        if ((input = lineReader.readLine()) != null) {
                            Output.add(input);
                            Output.selectLast();
                            Output.repaintNow();
                        }
                    }
                } catch (totalcross.io.IOException ioe) {
                    ioe.printStackTrace();
                }
            }
        }.start();

        // Write the Edit's content straight to the port when Enter is pressed
        Input.addKeyListener(new KeyListener() {
            @Override
            public void specialkeyPressed(KeyEvent e) {
                if (e.key == SpecialKeys.ENTER) {
                    String s = Input.getText();
                    Input.clear();
                    try {
                        pc.writeBytes(s);
                    } catch (totalcross.io.IOException ioe) {
                        ioe.printStackTrace();
                    }
                }
            }

            @Override
            public void keyPressed(KeyEvent e) {
            }

            @Override
            public void actionkeyPressed(KeyEvent e) {
            }
        });
    }
}
```
You can find all the code in the [project's repository][26].

### 6\. Next steps

This article shows how to use the Raspberry Pi's serial ports with Java by using either the Runtime or PortConnector classes. You can also call external files in other languages and create countless other projects—like a water quality monitoring system for an aquarium with temperature measurement via the analog inputs, or a chicken brooder with temperature and humidity regulation and a servo motor to rotate the eggs.

A future article will use the PortConnector implementation (because it is focused on serial connection) to finish the communications with all sensors. It will also add a digital input and complete the UI.

Here are some references for more reading:

  * [Get started with TotalCross][27]
  * [TotalCross PortConnector class][8]
  * [Running C++ applications with TotalCross][7]
  * [VSCode TotalCross Project Extension plugin][28]

After you connect your Arduino and Raspberry Pi, please leave comments below with your results. We'd love to read them!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/arduino-raspberry-pi

作者:[Patrick Martins de Lima][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/pattrickx
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
[2]: https://github.com/pattrickx
[3]: https://opensource.com/sites/default/files/uploads/gui-dashboard.png (UI dashboard)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/raspberrypi_arduino.png (Raspberry Pi and Arduino logos)
[6]: https://totalcross.com/
[7]: https://learn.totalcross.com/documentation/guides/running-c++-applications-with-totalcross
[8]: https://rs.totalcross.com/doc/totalcross/io/device/PortConnector.html
[9]: https://opensource.com/sites/default/files/uploads/connecting-electronics.png (Connecting the hardware)
[10]: https://opensource.com/sites/default/files/uploads/communication-test-result.png (Testing communications in Arduino)
[11]: https://opensource.com/sites/default/files/uploads/leds.jpg (LEDs lit on board)
[12]: https://opensource.com/sites/default/files/uploads/communication-test-result2.png (Testing communications in Arduino)
[13]: https://opensource.com/sites/default/files/uploads/serial-test.png (Testing serial on Arduino)
[14]: https://opensource.com/sites/default/files/uploads/serial-test2.png (Testing serial on Arduino)
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+color
[16]: https://opensource.com/sites/default/files/uploads/ui_0.png (UI)
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+thread
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+process
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtime
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[21]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[22]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+keylistener
[23]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+keyevent
[24]: https://opensource.com/sites/default/files/uploads/test-commands.png (Testing application running on Raspberry Pi)
[25]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[26]: https://github.com/pattrickx/TotalCrossSerialCommunication
[27]: https://learn.totalcross.com/documentation/get-started/
[28]: https://marketplace.visualstudio.com/items?itemName=Italo.totalcross
@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Making compliance scalable in a container world)
[#]: via: (https://opensource.com/article/20/7/compliance-containers)
[#]: author: (Scott K Peterson https://opensource.com/users/skpeterson)

Making compliance scalable in a container world
======
When software is distributed via container images, source code should be made available in order to ensure seamless compliance with licensing.
![Shipping containers stacked in a yard][1]

Software is increasingly being distributed as container images. Container images include the many software components needed to support the featured software in the container. Thus, distribution of a container image involves distribution of many software components, which typically include GPL-licensed components. We can't expect every company that distributes container images to become an open source compliance expert, so we need to build compliance into container technology.

We should design source code availability into container tools and processes to facilitate open source license compliance that is efficient and portable:

  * Efficient—Address compliance once, when a container image is created. Avoid depending on later actions, especially actions external to existing container distribution tooling.
  * Portable—When an image is moved to another registry, it should be straightforward to move compliance along with it.

This can be done with a registry-native approach to source code availability. This doesn't require updating all container registries to include source-code-specific features. It is possible to exploit features that registries already have. For software distributed through container registries, we can use those same registries to facilitate compliance.

### Container technology

Container technology helps manage the challenges of deploying and operating complex software systems.

The word "container" refers to a runtime experience—a program, together with its dependencies, can execute in an environment that provides certain isolation from other executing programs; this is running in a container.

The set of files that are used to provision such a container is called a "container image." While the purpose of a particular container image might be to run one application program, the container image includes much more: that application and its dependencies—the files necessary to run that application, except for the operating system kernel. A container image is a form of those files that can be stored and transferred, enabling the application to be reliably deployed again and again, and run independently of the many other programs that may be running and changing in other containers in the system.

Distribution of software in the form of container images, which are distributed from services known as "container registries," is growing. Once one invests in organizing computing around containers, it becomes valuable to package and distribute software in container images. Those who use containers to run their software find it is useful to be able to obtain software as pre-built container images, rather than doing all of the container assembly themselves. Even when container images are customized, it is often valuable to start from a pre-built image.

### Container images are different

In the past, it has been common to obtain each software component separately. In contrast, a container image includes a heterogeneous collection of software—often hundreds of software components. The unit of software delivery is changing from one driven by how the software is **built** to one driven by how the software is **used**. (See [What's in a container image: Meeting the legal challenges][2].)

Package maintainers and package management tools have played an underappreciated role in source availability for over two decades. The focused nature of a package, the role of a package maintainer, and the tooling that has been built to support package management systems result in the expectation that someone (the package maintainer) will take responsibility for seeing that the sources are available. Tools that build binaries also collect the corresponding sources into an archive that can be delivered alongside the binaries. The result is that most people don't need to think about source code availability. The sources are available in the same unit as the delivery of the executable software and via the same distribution mechanism; for software delivered as an RPM, the corresponding source is available in a source RPM.

In contrast, there is no convention for providing the source code that corresponds to a container image.

The many software components in a container image often include GPL-licensed software. Companies that may not have much experience with distribution of FOSS software may begin distributing GPL-licensed software when they start offering their software in the form of container images. Let's make it straightforward for everyone, including companies who may be new to FOSS, to provide source code in a consistent way.

### Identification of the corresponding source code

Identifying the corresponding source code is more challenging for container images than it is for packages. For packages, the source is available to the person building the package (although there can be some challenges with the source for dependencies if a package is built to include dependencies, not merely refer to them). In contrast, container images are conventionally built using, mostly, previously compiled components. When the container image is built, the source for those software components may be readily at hand or not, depending on how the binaries were acquired and how they were built.

When we build a container image, we should collect the corresponding source code for each of the components used to build the image. If not when the image is built, then when? That collected source code should then become a logical part of the corresponding container image.

Making the corresponding source code available as a logical part of a container image, of course, facilitates compliance in the distribution of that image. But that practice also supports a compliant ecosystem. If someone builds on a base image, how do they know what source code might need to be made available? If container images have corresponding source images, then the solution is straightforward. It is not necessary to start by figuring out what is in the base. Rather, start source availability for the overall image by using the source image for the base.

Here is an opportunity to build compliance into the tools—when someone builds on a base image, the tools should make it easy for them to make the corresponding source available for the base, as well as what is built on top. Tools should create a source image corresponding to the final combined image by starting with the source image for the base and adding whatever source is appropriate for the software that is built on top of that base.

### Delivery

Suppose one has a list of the source code artifacts that correspond to a container image. Where is that list going to be hosted? Where are the source artifacts going to be hosted? If the container image is hosted on various registries, how is source code going to be made available for each of those distribution points? Is there work to be done in each registry or in each of the catalogs associated with the registries? How is someone pulling a container image from each of a number of different registries going to know where these materials are? How does this work at scale? How many mechanisms are needed?

We should build a container ecosystem with compliance that is portable across registries. One shouldn't need to get a new guidebook from each registry.

As mentioned above, the supporting software components in a container image often include software licensed under the GPL. Consider the various alternatives for meeting the source code availability requirement for commercial distribution of software via download: binaries accompanied by the source code; binaries accompanied by a written offer; or equivalent access to copy. As to the equivalent access option, let's look at [GPLv2, section 3][3], last paragraph: "If distribution of executable or object code is made by offering access to copy from a designated place, then **offering equivalent access to copy the source code from the same place** counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code."

Why deliver the source code through a container registry? Equivalent access, efficiency, and portability.

  * **Equivalent access**—A registry-native approach to source availability (making source for container images available as container images) is a good way of providing equivalent access to the source.
  * **Efficient compliance**—Creating a source availability "package" (a source image) when the executable image is built and then using the same tooling for making the source available avoids the inefficiencies of maintaining additional processes and mechanisms.
  * **Portability of compliance**—The same tooling that moves the executable images can be used to move, store, and serve the source images—on all hosting registries.
### A registry-native approach to source code availability

Delivering source through container registries exploits the design of the container image format and certain related characteristics of container registries.

A container image includes (see the container image details in the two examples linked below):

  * an image manifest, which identifies the other elements of the image;
  * config data, which includes metadata about the image, including information used to configure the image for execution in a container;
  * a number of "layers" (each of which is a tar archive) that store the file system with which to provision a container.

Consider the challenge of making the corresponding source code available—a collection of source code artifacts (RPMs, zip files, tarballs, etc.) that we would like a server to make available. A container image is a list of components that a registry makes available. If the list of source artifacts is arranged as an image manifest, then a container registry can serve the source code artifacts like it serves other container image parts. And tools for moving container images can be directly applied to move that compliance package (the manifest plus the referenced source artifacts).

A source image (aka a source container) is simply a container image. Instead of the layers in the image being tarballs of files to provision a container for execution, each layer is a source artifact.

This concept of delivering non-file-system content from a container registry is of interest to container registries for other purposes, too, not just for source code. An example of non-layer content being served by container registries is Helm charts (used to describe a set of Kubernetes resources). The delivery of non-layer content via a container registry is the subject of the [Open Container Initiative's artifacts project][4].

### Deduplication

Container registries store and deliver an image as parts, rather than as a single digital object. This is valuable because container images are extremely redundant. Many images differ in a small fraction of their content. A container image can simplify data center operations by packaging together all of the components needed to run a particular program; as a result, container images are large, and each contains much content that is identical to other images in the registry. There can be 100-300 software components in an image, most of which are present in many other container images. Also, images are immutable. When an image needs to be updated, a new image is created that is almost identical to the prior version. This size challenge is mitigated by breaking archives for container file systems into multiple layers and using content-addressable storage.

That deduplication capability is advantageous for source code, too. The key to taking advantage of this registry feature is to not store all of the corresponding sources for an image in a single blob. More than 100:1 deduplication can be readily achieved if the source code is stored with the granularity of a separate source artifact for each of the hundred or more software components from which the image is built.

### Implementation

Red Hat has started the first of a two-stage approach to implementing registry-native delivery of source code.

In the first stage, Red Hat has started by producing source images that can be hosted on existing registries and will not confuse tools that expect all images to be executable. In this approach, source artifacts masquerade as regular layers—each source artifact is wrapped in a tar archive, and the media types for the layers are the same as for a regular executable image. The source image is associated with the corresponding executable image by a naming convention.

The second stage involves taking advantage of certain OCI image format features.

Source images should be linked to the corresponding executable image(s) via the image index, rather than with a tag naming convention. The container image format makes it possible for a source image to literally be a part of the overall image of which the executable image is a part. Unrelated to source code, it is useful to be able to have images that are built for different processor architectures be part of a single overall image. That overall image can be managed as a single object, and a consumer of the image can select which version(s) and/or part(s) of the image to download (e.g., amd64, or arm64, or source). Thus, rather than associating the source with the corresponding executable image via a labeling convention, the source image manifest should be listed in the image index.

In addition, in the future, source artifacts should not masquerade as executable layer tar files; the extra wrapping of source artifacts in a tar archive should be eliminated, and the source artifacts should be identified with appropriate media types. Successful deduplication requires careful, consistent tar archives. Simply storing the source artifact directly (and marking it with an appropriate media type) avoids the potential deduplication loss from inconsistent tarball wrappers.

Finally, source artifacts in a registry should be consistent with the approach for other non-layer content in registries (such as Helm charts). Providing a consistent way to serve such content is the subject of the "artifacts" project within the Open Container Initiative. Registry-native distribution of source can be a beneficiary of that project.

#### In summary

The current approach (see the [current approach example][5]):

  * Source artifacts masquerade as regular layers—wrap each source artifact in a tarball and mark it with a regular layer media type.
  * Identify the name of a source artifact inside the tar wrapper.
  * Associate the source and executable images using a tag name convention—tag the source image manifest with the tag for the corresponding executable image extended with "-source."

The future approach (see the [future approach example][6]):

  * No masquerade—store source artifacts directly in the registry (named, as other registry content, by a hash digest) and give them non-layer media type(s).
  * Identify the name of a source artifact with a layer annotation in the manifest.
  * Associate the source and executable images using the image index—list the source manifest in the image index, along with the manifests for each machine architecture.
  * Use a distinctive config media type in source manifests (as proposed in the OCI artifacts project).

#### Where are we now?

Source "containers," which are actually images, for UBI images are in the Red Hat registry now. These source images have been built using production tools. The next step is to roll this out to other images.
|
||||
|
||||
#### Where are we going?

We need an industry-wide, consistent approach to making source code available for container images. Let's work together in the OCI's artifacts project and agree on the details of the no-masquerade approach.

### Opportunities

Including source code leads to opportunities. Making the full corresponding source code available as a source image can address more than merely GPL source availability:

  * The exact license for open source software can be complicated (see [Is open source software licensing broken?][7]). But, with the full source code, one can determine whatever details are important to them (see [The source code is the license][8]).
  * How about license notices? You could attempt to extract all of them. Or, you could make the notices available directly via the source code (see [An economically efficient model for open source software license compliance][9]).

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/compliance-containers

作者:[Scott K Peterson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/skpeterson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-containers2.png?itok=idd8duC_ (Shipping containers stacked in a yard)
[2]: https://opensource.com/article/18/7/whats-container-image-meeting-legal-challenges
[3]: https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section3
[4]: https://github.com/opencontainers/artifacts
[5]: https://blockchain-forensics.com/containers/current.svg
[6]: https://blockchain-forensics.com/containers/future.svg
[7]: https://opensource.com/article/20/2/open-source-licensing
[8]: https://opensource.com/article/17/12/source-code-license
[9]: https://opensource.com/article/17/9/economically-efficient-model

@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What you need to know about automation testing in CI/CD)
[#]: via: (https://opensource.com/article/20/7/automation-testing-cicd)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)

What you need to know about automation testing in CI/CD
======
Continuous integration and continuous delivery is powered by testing. Here's how.
![Net catching 1s and 0s or data in the clouds][1]

> "If things seem under control, you're just not going fast enough." —Mario Andretti

Test automation means focusing continuously on detecting defects, errors, and bugs as early and as quickly as possible in the software development process. This is done using tools that treat quality as the highest value and that are put in place to _ensure_ quality—not just to pursue it.

One of the most compelling features of a continuous integration/continuous delivery (CI/CD) solution (also called a DevOps pipeline) is the opportunity to test more frequently without burdening developers or operators with more manual work. Let's talk about why that's important.

### Why automate testing in CI/CD?

Agile teams iterate faster to deliver software at higher rates and raise customer satisfaction, and these pressures can jeopardize quality. Global competition has created _low tolerance_ for defects while increasing the pressure on agile teams for _faster iterations_ of software delivery. What's the industry solution to alleviate this pressure? [DevOps][2].

DevOps is a big idea with many definitions, but one technology that is consistently essential to DevOps success is CI/CD. Designing a continuous cycle of improvement through a pipeline of software development can lead to new opportunities for testing.

### What does this mean for testers?

For testers, this generally means they must:

  * Test earlier and more often (with automation)
  * Continue to test "real-world" workflows (automated and manual)

To be more specific, the role of testing in any form, whether it's run by the developers who write the code or designed by a team of quality assurance engineers, is to take advantage of the CI/CD infrastructure to increase quality while moving fast.

### What else do testers need to do?

More specifically, testers are responsible for:

  * Testing new and existing software applications
  * Verifying and validating functionality by evaluating software against system requirements
  * Utilizing automated testing tools to develop and maintain reusable automated tests
  * Collaborating with all members of the scrum team to understand the functionality being developed and the implementation's technical design in order to develop accurate, high-quality automated tests
  * Analyzing documented user requirements and creating or assisting in designing test plans for moderately to highly complex software or IT systems
  * Developing automated tests and working with the functional team to review and evaluate test scenarios
  * Collaborating with the technical team to identify the right approach to automating tests within the development environment
  * Working with the team to understand and resolve software problems with automated tests, and responding to suggestions for modifications or enhancements
  * Participating in backlog grooming, estimation, and other agile scrum ceremonies
  * Assisting in defining standards and procedures to support testing activities and materials (e.g., scripts, configurations, utilities, tools, plans, and results)

Testing is a great deal of work, but it's an essential part of building software effectively.

### What kind of continuous testing is important?

There are many types of tests you can use. The different types aren't firm lines between disciplines; instead, they are different ways of expressing how to test. It is less important to compare the types of tests and more important to have coverage for each test type; a minimal sketch of how they can map onto a pipeline's test stage follows this list.

  * **Functional testing:** Ensures that the software has the functionality in its requirements
  * **Unit testing:** Independently tests smaller units/components of a software application to check their functionality
  * **Load testing:** Tests the performance of the software application during heavy load or usage
  * **Stress testing:** Determines the software application's breakpoint when under stress (maximum load)
  * **Integration testing:** Tests a group of components that are combined or integrated to produce an output
  * **Regression testing:** Tests the entire application's functionality when any component (no matter how small) has been modified

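As a rough illustration (the script and the make targets below are hypothetical placeholders, not from the original article), a pipeline's test stage can chain several of these test types and fail fast:

```
#!/usr/bin/env bash
# Hypothetical CI test stage: run the cheaper test types first and fail
# fast, so the pipeline stops at the earliest detected defect. The make
# targets are placeholders for whatever commands your project uses.
set -euo pipefail

make unit-test         # smallest units, fastest feedback
make integration-test  # combined components
make functional-test   # behavior against the requirements
make regression-test   # whole application, after any modification
```
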
### Conclusion

Any software development process that includes continuous testing is on its way toward establishing a critical feedback loop to go fast and build effective software. Most importantly, the practice builds quality into the CI/CD pipeline and implies an understanding of the connection between increasing speed and reducing risk and waste in the software development lifecycle.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/automation-testing-cicd

作者:[Taz Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: https://opensource.com/resources/devops

175 sources/tech/20200710 Use DNS over TLS.md (Normal file)
@ -0,0 +1,175 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use DNS over TLS)
[#]: via: (https://fedoramagazine.org/use-dns-over-tls/)
[#]: author: (Thomas Bianchi https://fedoramagazine.org/author/thobianchi/)

Use DNS over TLS
======

![][1]

The [Domain Name System (DNS)][2] that modern computers use to find resources on the internet was designed [35 years ago][3] without consideration for user privacy. It is exposed to security risks and attacks like [DNS hijacking][4], and it allows [ISPs][5] to intercept queries.

Luckily, [DNS over TLS][6] and [DNSSEC][7] are available. DNS over TLS creates an encrypted end-to-end tunnel from a computer to its configured DNS servers, and DNSSEC allows the authenticity of the responses to be verified. On Fedora, the steps to implement these technologies are easy, and all the necessary tools are readily available.

This guide will demonstrate how to configure DNS over TLS on Fedora using systemd-resolved. Refer to the [documentation][8] for further information about the systemd-resolved service.

### Step 1: Set up systemd-resolved

Modify _/etc/systemd/resolved.conf_ so that it is similar to what is shown below. Be sure to enable DNS over TLS and to configure the IP addresses of the DNS servers you want to use.

```
$ cat /etc/systemd/resolved.conf
[Resolve]
DNS=1.1.1.1 9.9.9.9
DNSOverTLS=yes
DNSSEC=yes
FallbackDNS=8.8.8.8 1.0.0.1 8.8.4.4
#Domains=~.
#LLMNR=yes
#MulticastDNS=yes
#Cache=yes
#DNSStubListener=yes
#ReadEtcHosts=yes
```

A quick note about the options:

  * **DNS**: A space-separated list of IPv4 and IPv6 addresses to use as system DNS servers.
  * **FallbackDNS**: A space-separated list of IPv4 and IPv6 addresses to use as fallback DNS servers.
  * **Domains**: A list of domains used as search suffixes when resolving single-label host names. The special value _~._ means that the system DNS servers defined with _DNS=_ should be preferred for all domains.
  * **DNSOverTLS**: If true, all connections to the server will be encrypted. Note that this mode requires a DNS server that supports DNS over TLS and has a valid certificate for its IP.

> _NOTE: The DNS servers listed in the above example are my personal choices. You should decide which DNS servers you want to use, being mindful of whom you are asking for the IP addresses you use to navigate the internet._

### Step 2: Tell NetworkManager to push info to systemd-resolved

Create a file in _/etc/NetworkManager/conf.d_ named _10-dns-systemd-resolved.conf_.

```
$ cat /etc/NetworkManager/conf.d/10-dns-systemd-resolved.conf
[main]
dns=systemd-resolved
```

The setting shown above (_dns=systemd-resolved_) will cause NetworkManager to push DNS information acquired from DHCP to the systemd-resolved service. This will override the DNS settings configured in _Step 1_. This is fine on a trusted network, but feel free to set _dns=none_ instead to use the DNS servers configured in _/etc/systemd/resolved.conf_; a variant of the file for that case is shown below.

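For example, this alternative version of the same file (an illustrative sketch, not from the original article) ignores DHCP-provided DNS servers entirely:

```
$ cat /etc/NetworkManager/conf.d/10-dns-systemd-resolved.conf
[main]
# Do not push DHCP-acquired DNS servers to systemd-resolved; only the
# servers listed in /etc/systemd/resolved.conf will be used.
dns=none
```
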
### Step 3: Start and restart services

To make the settings configured in the previous steps take effect, start and enable _systemd-resolved_. Then restart _NetworkManager_.

**CAUTION**: This will lead to a loss of connection for a few seconds while NetworkManager is restarting.

```
$ sudo systemctl start systemd-resolved
$ sudo systemctl enable systemd-resolved
$ sudo systemctl restart NetworkManager
```

> _NOTE: Currently, the systemd-resolved service is disabled by default and its use is opt-in. [There are plans][9] to enable systemd-resolved by default in Fedora 33._

### Step 4: Check if everything is fine

Now you should be using DNS over TLS. Confirm this by checking DNS resolution status with:

```
$ resolvectl status
MulticastDNS setting: yes
  DNSOverTLS setting: yes
      DNSSEC setting: yes
    DNSSEC supported: yes
  Current DNS Server: 1.1.1.1
         DNS Servers: 1.1.1.1
                      9.9.9.9
Fallback DNS Servers: 8.8.8.8
                      1.0.0.1
                      8.8.4.4
```

_/etc/resolv.conf_ should point to 127.0.0.53:

```
$ cat /etc/resolv.conf
# Generated by NetworkManager
search lan
nameserver 127.0.0.53
```

To see the address and port on which systemd-resolved is listening for local queries, run:

```
$ sudo ss -lntp | grep '\(State\|:53 \)'
State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN  0        4096     127.0.0.53%lo:53     0.0.0.0:*           users:(("systemd-resolve",pid=10410,fd=18))
```

To make a secure query, run:

```
$ resolvectl query fedoraproject.org
fedoraproject.org: 8.43.85.67    -- link: wlp58s0
                   8.43.85.73    -- link: wlp58s0

[..]

-- Information acquired via protocol DNS in 36.3ms.
-- Data is authenticated: yes
```

### BONUS Step 5: Use Wireshark to verify the configuration

First, install and run [Wireshark][10]:

```
$ sudo dnf install wireshark
$ sudo wireshark
```

Wireshark will ask you which link device it should begin capturing packets on. In my case, because I use a wireless interface, I will go ahead with _wlp58s0_. Set up a filter in Wireshark like _tcp.port == 853_ (853 is the DNS over TLS port). You need to flush the local DNS caches before you can capture a DNS query:

```
$ sudo resolvectl flush-caches
```

Now run:

```
$ nslookup fedoramagazine.org
```

You should see a TLS-encrypted exchange between your computer and your configured DNS server:

![][11]

— _Poster in Cover Image Approved for Release by NSA on 04-17-2018, FOIA Case # 83661_ —

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/use-dns-over-tls/

作者:[Thomas Bianchi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/thobianchi/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/use-dns-over-tls-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Domain_Name_System
[3]: https://tools.ietf.org/html/rfc1035
[4]: https://en.wikipedia.org/wiki/DNS_hijacking
[5]: https://en.wikipedia.org/wiki/Internet_service_provider
[6]: https://en.wikipedia.org/wiki/DNS_over_TLS
[7]: https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions
[8]: https://www.freedesktop.org/wiki/Software/systemd/resolved/
[9]: https://fedoraproject.org/wiki/Changes/systemd-resolved
[10]: https://www.wireshark.org/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/1-1024x651.png

@ -1,101 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (entr: rerun your build when files change)
[#]: via: (https://jvns.ca/blog/2020/06/28/entr/)
[#]: author: (Julia Evans https://jvns.ca/)

entr: rerun your build when files change
======

This is going to be a quick post. I only found out about [entr][1] recently, and my reaction was: why did nobody tell me about this before?!?! So, in case you are like me, here is what it is about.

The [entr website][1] already explains it very well and has lots of examples.

The headline summary is: `entr` is a command-line tool that lets you run an arbitrary command every time any one of a specified set of files changes. You pass it the list of files to watch on standard input, like this:

```
git ls-files | entr bash my-build-script.sh
```

or:

```
find . -name '*.rs' | entr cargo test
```

or whatever you like.

### Fast feedback is great

Like every programmer in the world, I find it very annoying to have to manually rerun my build/tests every time I change my code.

Many tools (like hugo and flask) have a built-in system to rebuild automatically when files change, which is great!

But often I have some custom build process I wrote myself (like `bash build.sh`), and `entr` gives me a magical build experience with instant feedback on whether my one-line bash change fixed that weird bug. Hooray!

### Restarting a server (`entr -r`)

OK, but what if you are running a server that needs to be restarted every time? entr has you covered: if you pass `-r`, it will kill and restart the server process on every change:

```
git ls-files | entr -r python my-server.py
```

### Clearing the screen (`entr -c`)

Another neat flag is `-c`, which clears the screen before each rerun of the command, so you are not distracted by the output of previous builds.

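The flags combine nicely; for example, to clear the screen and restart a server on every change (an illustrative combination of the two flags above):

```
git ls-files | entr -c -r python my-server.py
```
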
### Using it with `git ls-files`

Usually the set of files I want to watch is roughly the same as the set of files in git, so piping `git ls-files` into `entr` is a natural thing to do.

Right now I have a project where I sometimes have newly created files that are not in git yet. So what if you want to include untracked files? Here is a little bash incantation I put together:

```
{ git ls-files; git ls-files . --exclude-standard --others; } | entr your-build-script
```

Maybe there is a way to do this with a single git command, but I do not know what it is.

### Restarting every time a new file is added: `entr -d`

Another problem with `git ls-files` is that sometimes I add a new file, and of course it is not in git yet. entr has a nice feature for this: if you pass `-d`, entr will exit whenever a new file is added in any of the directories it is tracking.

I use this together with a while loop that restarts `entr` so that it picks up the new files, like this:

```
while true
do
    { git ls-files; git ls-files . --exclude-standard --others; } | entr -d your-build-script
done
```

### How entr works on Linux: inotify

On Linux, entr works using `inotify`, a system for tracking filesystem events such as file changes. If you strace it, you will see an `inotify_add_watch` system call for every file it watches, like this:

```
inotify_add_watch(3, "static/stylesheets/screen.css", IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE_SELF|IN_MOVE_SELF) = 1152
```

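If you want to see those system calls yourself, something like this should work (a sketch, assuming strace is installed):

```
# Trace only the inotify_add_watch calls that entr makes while setting
# up its watches; press Ctrl+C to stop entr afterward
git ls-files | strace -e trace=inotify_add_watch entr echo changed
```
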
### That's all!

I hope this helps a few people find out about `entr`!

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2020/06/28/entr/

作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: http://eradman.com/entrproject/