Merge pull request #11 from LCTT/master

update 0823
This commit is contained in:
SamMa 2021-08-23 10:39:31 +08:00 committed by GitHub
commit 9207966c92
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
46 changed files with 4547 additions and 1783 deletions

View File

@ -0,0 +1,121 @@
[#]: subject: (Use VS Code to develop in containers)
[#]: via: (https://opensource.com/article/21/7/vs-code-remote-containers-podman)
[#]: author: (Brant Evans https://opensource.com/users/branic)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13708-1.html)
使用 VS Code 在容器中开发
======
> 一致性可以避免当你有多个开发人员开发同一个项目时出现问题。
![](https://img.linux.net.cn/data/attachment/album/202108/22/090306jlkzyrw8cytcatw8.jpg)
当你有多个不同开发环境的开发人员在一个项目上工作时,编码和测试的不一致性是一种风险。[Visual Studio Code][2](VS Code)是一个集成开发环境(IDE),可以帮助减少这些问题。它可以和容器结合起来,为每个应用程序提供独立的开发环境,同时提供一个一致的开发环境。
VS Code 的 [“Remote - Containers” 扩展][3] 使你能够创建一个容器定义,使用该定义来构建一个容器,并在容器内进行开发。这个容器定义可以和应用程序代码一起被签入到源代码库中,这使得所有的开发人员可以使用相同的定义在容器中进行构建和开发。
默认情况下“Remote - Containers” 扩展使用 Docker 来构建和运行容器,但使用 [Podman][4] 的容器运行环境环境也很容易,它可以让你使用 [免 root 容器][5]。
本文将带领你完成设置,通过 Podman 在免 root 容器内使用 VS Code 和 “Remote - Containers” 扩展进行开发。
### 初始配置
在继续之前,请确保你的红帽企业 Linux(RHEL)或 Fedora 工作站已经更新了最新的补丁,并且安装了 VS Code 和 “Remote - Containers” 扩展。(参见 [VS Code 网站][2] 了解更多安装信息)
接下来,用一个简单的 `dnf install` 命令来安装 Podman 和它的支持包:
```
$ sudo dnf install -y podman
```
安装完 Podman 后,配置 VS Code 以使用 Podman 的可执行文件(而不是 Docker)与容器进行交互。在 VS Code 中,导航到 “文件 > 首选项 > 设置”,点击 “扩展” 旁边的 “>” 图标。在出现的下拉菜单中,选择 “Remote - Containers”,并向下滚动找到 “Remote - Containers: Docker Path” 选项。在文本框中,用 “podman” 替换 “docker”。
![在文本框中输入 “podman”][6]
现在配置已经完成,在 VS Code 中为该项目创建一个新的文件夹或打开现有的文件夹。
### 定义容器
本教程以创建 Python 3 开发的容器为例。
“Remote - Containers” 扩展可以在项目文件夹中添加必要的基本配置文件。要添加这些文件,通过在键盘上输入 `Ctrl+Shift+P` 打开命令面板,搜索 “Remote-Containers: Add Development Container Configuration Files”,并选择它。
![Remote-Containers: Add Development Container Configuration Files][8]
在接下来的弹出窗口中,定义你想设置的开发环境的类型。对于这个例子的配置,搜索 “Python 3” 定义并选择它。
![选择 Python 3 定义][9]
接下来,选择将在容器中使用的 Python 的版本。选择 “3 (default)” 选项以使用最新的版本。
![选择 “3 (default)” 选项][10]
Python 配置也可以安装 Node.js,但在这个例子中,取消勾选 “Install Node.js”,然后点击 “OK”。
![取消勾选 “Install Node.js"][11]
它将创建一个 `.devcontainer` 文件夹,包含 `devcontainer.json` 和 `Dockerfile` 两个文件。VS Code 会自动打开 `devcontainer.json` 文件,这样你就可以对它进行自定义。
### 启用免 root 容器
除了明显的安全优势外,以免 root 方式运行容器的另一个原因是,在项目文件夹中创建的所有文件将由容器外的正确用户 ID(UID)拥有。要将开发容器作为免 root 容器运行,请修改 `devcontainer.json` 文件,在它的末尾添加以下几行:
```
"workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,Z",
"workspaceFolder": "/workspace",
"runArgs": ["--userns=keep-id"],
"containerUser": "vscode"
```
这些选项告诉 VS Code 用适当的 SELinux 上下文挂载工作区,创建一个用户命名空间,将你的 UID 和 GID 原样映射到容器内,并在容器内使用 `vscode` 作为你的用户名。`devcontainer.json` 文件应该是这样的(别忘了行末的逗号,如图所示):
![更新后的 devcontainer.json 文件][12]
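LCTT 译注:要理解为什么 UID 映射值得关心,可以用下面这个与 VS Code 无关的小例子观察“新建文件的属主就是创建进程的 UID”这一事实(纯属演示):

```python
import os
import tempfile

# 进程新建一个文件,文件的属主 UID 就是当前进程的 UID
fd, path = tempfile.mkstemp()
os.close(fd)

owner_uid = os.stat(path).st_uid
print(owner_uid == os.getuid())  # True:谁创建,谁拥有

os.remove(path)
```

在未做 `--userns=keep-id` 映射的容器里,这个“创建进程的 UID”往往不是你宿主机上的 UID,于是挂载目录里就会出现属主不对的文件。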
现在你已经设置好了容器的配置,你可以构建容器并打开里面的工作空间。重新打开命令面板(用 `Ctrl+Shift+P`),并搜索 “Remote-Containers: Rebuild and Reopen in Container”。点击它,VS Code 将开始构建容器。现在是休息一下的好时机(拿上你最喜欢的饮料),因为构建容器可能需要几分钟时间:
![构建容器][13]
一旦容器构建完成,项目将在容器内打开。在容器内创建或编辑的文件将反映在容器外的文件系统中,并对这些文件应用适当的用户权限。现在你可以在容器内进行开发了。VS Code 甚至可以把你的 SSH 密钥和 Git 配置带入容器中,这样提交代码就会像在容器外编辑时那样工作。
### 接下来的步骤
现在你已经完成了基本的设置和配置,你可以进一步加强配置的实用性。比如说:
* 修改 Dockerfile 以安装额外的软件(例如,所需的 Python 模块)。
* 使用一个定制的容器镜像。例如,如果你正在进行 Ansible 开发,你可以使用 Quay.io 的 [Ansible Toolset][14]。(确保通过 Dockerfile 将 `vscode` 用户添加到容器镜像中)
* 将 `.devcontainer` 目录下的文件提交到源代码库,以便其他开发者可以利用容器的定义进行开发工作。
在容器内开发有助于防止不同项目之间的冲突,因为隔离了不同项目的依赖关系及代码。你可以使用 Podman 在免 root 环境下运行容器,从而提高安全性。通过结合 VS Code、“Remote - Containers” 扩展和 Podman你可以轻松地为多个开发人员建立一个一致的环境减少设置时间并以安全的方式减少开发环境的差异带来的错误。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/vs-code-remote-containers-podman
作者:[Brant Evans][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/branic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G (Women programming)
[2]: https://code.visualstudio.com/
[3]: https://code.visualstudio.com/docs/remote/containers
[4]: https://podman.io/
[5]: https://www.redhat.com/sysadmin/rootless-podman-makes-sense
[6]: https://opensource.com/sites/default/files/uploads/vscode-remote_podman.png (Enter "podman" in the text box)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/adddevelopmentcontainerconfigurationfiles.png (Remote-Containers: Add Development Container Configuration Files)
[9]: https://opensource.com/sites/default/files/uploads/python3.png (Select Python 3 definition)
[10]: https://opensource.com/sites/default/files/uploads/python3default.png (Select the 3 \(default\) option)
[11]: https://opensource.com/sites/default/files/uploads/unchecknodejs.png (Uncheck "Install Node.js")
[12]: https://opensource.com/sites/default/files/uploads/newdevcontainerjson.png (Updated devcontainer.json file)
[13]: https://opensource.com/sites/default/files/uploads/buildingcontainer.png (Building the container)
[14]: https://quay.io/repository/ansible/toolset

View File

@ -3,36 +3,40 @@
[#]: author: (Onuralp SEZER https://fedoramagazine.org/author/thunderbirdtr/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13698-1.html)
在 Fedora Linux 上使用 OpenCV 第一部分
在 Fedora Linux 上使用 OpenCV(一)
======
![][1]
封面图片选自[文森特-凡高][2]的《星空》,公共领域,通过维基共享资源发布
*封面图片选自[文森特·梵高][2]的《星空》,公共领域,通过维基共享资源发布*
技术世界每天都在变化,对计算机视觉、人工智能和机器学习的需求也在增加。让计算机和手机能够看到周围环境的技术被称为[计算机视觉][3]。重新创造人眼的工作始于 50 年代。从那时起,计算机视觉技术有了长足的发展。计算机视觉已经通过不同的应用进入了我们的手机。这篇文章将介绍 Fedora Linux 上的[OpenCV][4]。
技术世界每天都在变化,对计算机视觉、人工智能和机器学习的需求也在增加。让计算机和手机能够看到周围环境的技术被称为 [计算机视觉][3]。这个重新创造人眼的工作始于 50 年代。从那时起,计算机视觉技术有了长足的发展。计算机视觉已经通过不同的应用进入了我们的手机。这篇文章将介绍 Fedora Linux 上的 [OpenCV][4]。
### **什么是 OpenCV?**
### 什么是 OpenCV
> OpenCV 开源计算机视觉库是一个开源的计算机视觉和机器学习软件库。OpenCV 的建立是为了给计算机视觉应用提供一个通用的基础设施,并加速机器感知在商业产品中的应用。它有超过 2500 种优化算法,其中包括一套全面的经典和最先进的计算机视觉和机器学习算法。这些算法可用于检测和识别人脸,识别物体,对视频中的人类行为进行分类,并建立标记,将其与增强现实叠加等等。
> OpenCV<ruby>开源计算机视觉库<rt>Open Source Computer Vision Library</rt></ruby>是一个开源的计算机视觉和机器学习软件库。OpenCV 的建立是为了给计算机视觉应用提供一个通用的基础设施,并加速机器感知在商业产品中的应用。它有超过 2500 种优化后的算法,其中包括一套全面的经典和最先进的计算机视觉和机器学习算法。这些算法可用于检测和识别人脸、识别物体、对视频中的人类行为进行分类,并建立标记,将其与增强现实叠加等等。
>
> [opencv.org about][5]
### 在 Fedora Linux 上安装 OpenCV
要开始使用 OpenCV请从 Fedora Linux 仓库中安装它
要开始使用 OpenCV请从 Fedora Linux 仓库中安装它
```
$ sudo dnf install opencv opencv-contrib opencv-doc python3-opencv python3-matplotlib python3-numpy
```
**注意:**在 Fedora Silverblue 或 CoreOsPython 3.9 是核心提交的一部分。用以下方法安装 OpenCV 和所需工具:_rpm-ostree install opencv opencv-doc python3-opencv python3-matplotlib python3-numpy_。
**注意:** 在 Fedora Silverblue 或 CoreOSPython 3.9 是核心提交的一部分。用以下方法安装 OpenCV 和所需工具:
接下来,在终端输入以下命令,以验证 OpenCV 是否已经安装(用户输入的内容以粗体显示)。
```
rpm-ostree install opencv opencv-doc python3-opencv python3-matplotlib python3-numpy
```
接下来,在终端输入以下命令,以验证 OpenCV 是否已经安装:
```
$ python
@ -45,20 +49,20 @@ Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
```
当你输入 _print_ 命令时,应该显示当前的 OpenCV 版本,如上图所示。这表明 OpenCV 和 Python-OpenCV 库已经成功安装。
当你输入 `print` 命令时,应该显示当前的 OpenCV 版本,如上图所示。这表明 OpenCV 和 Python-OpenCV 库已经成功安装。
此外,如果你想用 Jupyter Notebook 做笔记和写代码,并了解更多关于数据科学工具的信息,请查看早期的 Fedora Magazine 文章:[_Fedora 中的 Jupyter 和数据科学_][6]。
此外,如果你想用 Jupyter Notebook 做笔记和写代码,并了解更多关于数据科学工具的信息,请查看早期的 Fedora Magazine 文章:[Fedora 中的 Jupyter 和数据科学][6]。
### 开始使用 OpenCV
安装完成后,使用 Python 和 OpenCV 库加载一个样本图像(按 **S** 键以 _png_ 格式保存图像的副本并完成程序):
安装完成后,使用 Python 和 OpenCV 库加载一个样本图像(按 `S` 键以 png 格式保存图像的副本并完成程序):
```
$ cp /usr/share/opencv4/samples/data/starry_night.jpg .
$ python starry_night.py
```
_starry_night.py_ 的内容:
`starry_night.py` 的内容:
```
import cv2 as cv
@ -74,7 +78,7 @@ if k == ord("s"):
![][7]
通过在 _cv.imread_ 函数中添加参数 **0**,对图像进行灰度处理,如下所示。
通过在 `cv.imread` 函数中添加参数 `0`,对图像进行灰度处理,如下所示。
```
img = cv.imread(cv.samples.findFile("starry_night.jpg"),0)
@ -82,13 +86,11 @@ img = cv.imread(cv.samples.findFile("starry_night.jpg"),0)
![][8]
这些是一些可以用于 _cv.imread_ 函数的第二个参数的替代值。
* **cv2.IMREAD_GRAYSCALE** 或 **0** 以灰度模式加载图像。
* **cv2.IMREAD_COLOR** 或 **1** 以彩色模式载入图像。图像中的任何透明度将被移除。这是默认的。
* **cv2.IMREAD_UNCHANGED** 或 **-1**载入未经修改的图像。包括 alpha 通道。
这些是一些可以用于 `cv.imread` 函数的第二个参数的替代值:
* `cv2.IMREAD_GRAYSCALE``0`:以灰度模式加载图像。
* `cv2.IMREAD_COLOR` 或 `1`:以彩色模式载入图像。图像中的任何透明度将被移除。这是默认的。
* `cv2.IMREAD_UNCHANGED` 或 `-1`:载入未经修改的图像。包括 alpha 通道。
#### 使用 OpenCV 显示图像属性
@ -121,10 +123,8 @@ Image 2D numpy array
...
```
* **img.shape** 返回一个行数、列数和通道数的元组(如果是彩色图像)。
* **img.dtype** 返回图像的数据类型。
* `img.shape`:返回一个行数、列数和通道数的元组(如果是彩色图像)。
* `img.dtype`:返回图像的数据类型。
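LCTT 译注:如果暂时不想加载真实图片,也可以直接用 NumPy 构造一个小数组来体会 `img.shape` 和 `img.dtype` 的含义(以下示例只依赖 NumPy,数据为演示用途随意填充):

```python
import numpy as np

# 构造一个 2x3 的“彩色图像”:2 行、3 列、3 个通道
img = np.zeros((2, 3, 3), dtype=np.uint8)

print(img.shape)   # (2, 3, 3):行数、列数、通道数
print(img.dtype)   # uint8:每个像素分量是 8 位无符号整数

# 灰度图只有行、列两个维度
gray = np.zeros((2, 3), dtype=np.uint8)
print(gray.shape)  # (2, 3)
```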
接下来用 Matplotlib 显示图像:
@ -140,7 +140,7 @@ plt.show()
#### 发生了什么?
该图像是作为灰度图像读入的,但是当使用 Matplotlib 的 _imshow_ 函数时,它不一定会以灰度显示。这是因为 _imshow_ 函数默认使用不同的颜色映射。要指定使用灰度颜色映射,请将 _imshow_ 函数的第二个参数设置为 _cmap='gray'_,如下所示。
该图像是作为灰度图像读入的,但是当使用 Matplotlib 的 `imshow` 函数时,它不一定会以灰度显示。这是因为 `imshow` 函数默认使用不同的颜色映射。要指定使用灰度颜色映射,请将 `imshow` 函数的第二个参数设置为 `cmap='gray'`,如下所示:
```
plt.imshow(img,cmap='gray')
@ -192,16 +192,14 @@ plt.show()
![][12]
* **cv2.split**将一个多通道数组分割成几个单通道数组。
* **cv2.merge** 将几个数组合并成一个多通道数组。所有的输入矩阵必须具有相同的大小。
* `cv2.split`将一个多通道数组分割成几个单通道数组。
* `cv2.merge`将几个数组合并成一个多通道数组。所有的输入矩阵必须具有相同的大小。
**注意:**白色较多的图像具有较高的颜色密度。相反,黑色较多的图像,其颜色密度较低。在上面的例子中,红色的密度是最低的。
**注意:** 白色较多的图像具有较高的颜色密度。相反,黑色较多的图像,其颜色密度较低。在上面的例子中,红色的密度是最低的。
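LCTT 译注:`cv2.split` 和 `cv2.merge` 的效果可以用 NumPy 的数组操作等价地理解(以下仅为示意,并非 OpenCV 的实际实现):

```python
import numpy as np

# 一个 2x2 的三通道“图像”,三个通道分别填充 1、2、3
img = np.dstack([np.full((2, 2), v, dtype=np.uint8) for v in (1, 2, 3)])

# 相当于 cv2.split:沿最后一个轴拆成单通道数组
b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]

# 相当于 cv2.merge:把同尺寸的单通道数组重新叠成多通道数组
merged = np.dstack([b, g, r])

print(np.array_equal(merged, img))  # True
```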
#### 转换到不同的色彩空间
_cv2.cvtColor_ 函数将一个输入图像从一个颜色空间转换到另一个颜色空间。在 RGB 和 BGR 色彩空间之间转换时,应明确指定通道的顺序(_RGB2BGR_ 或 _BGR2RGB_)。**注意OpenCV 中的默认颜色格式通常被称为 RGB但它实际上是 BGR字节是相反的。**因此标准24 位)彩色图像的第一个字节将是一个 8 位蓝色分量,第二个字节是绿色,第三个字节是红色。然后第四、第五和第六个字节将是第二个像素(蓝色然后是绿色,然后是红色),以此类推。
`cv2.cvtColor` 函数将一个输入图像从一个颜色空间转换到另一个颜色空间。在 RGB 和 BGR 色彩空间之间转换时,应明确指定通道的顺序(`RGB2BGR` 或 `BGR2RGB`)。**注意OpenCV 中的默认颜色格式通常被称为 RGB但它实际上是 BGR字节是相反的。** 因此标准24 位)彩色图像的第一个字节将是一个 8 位蓝色分量,第二个字节是绿色,第三个字节是红色。然后第四、第五和第六个字节将是第二个像素(蓝色然后是绿色,然后是红色),以此类推。
```
import cv2 as cv
@ -218,7 +216,7 @@ plt.show()
关于 OpenCV 的更多细节可以在[在线文档][14]中找到。
谢谢
感谢阅读
--------------------------------------------------------------------------------
@ -227,7 +225,7 @@ via: https://fedoramagazine.org/use-opencv-on-fedora-linux-part-1/
作者:[Onuralp SEZER][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,175 @@
[#]: subject: "Install OpenVPN on your Linux PC"
[#]: via: "https://opensource.com/article/21/7/openvpn-router"
[#]: author: "D. Greg Scott https://opensource.com/users/greg-scott"
[#]: collector: "lujun9972"
[#]: translator: "perfiffer"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13702-1.html"
如何在免费 WiFi 中保护隐私(二)
======
> 安装完服务器之后,下一步就是安装和配置 0penVPN。
![](https://img.linux.net.cn/data/attachment/album/202108/20/123417yn554549p92ujt54.jpg)
0penVPN 在两点之间创建一个加密通道,阻止第三方访问你的网络流量数据。通过设置你的 “虚拟专用网络” 服务,你可以成为你自己的 “虚拟专用网络” 服务商。许多流行的 “虚拟专用网络” 服务都使用 [0penVPN][2],所以当你可以掌控自己的网络时,为什么还要将你的网络连接绑定到特定的提供商呢?
本系列的 [第一篇文章][3] 展示了如何安装和配置一台作为你的 0penVPN 服务器的 Linux 计算机。同时也讲述了如何配置你的路由器以便你可以在外部网络连接到你的服务器。
第二篇文章将演示如何根据 [0penVPN wiki][4] 给出的步骤安装 0penVPN 服务软件。
### 安装 0penVPN
首先,使用包管理器安装 0penVPN 和 `easy-rsa` 应用程序(帮助你在服务器上设置身份验证)。本例使用的是 Fedora Linux如果你选择了不同的发行版请选用合适的命令。
```
$ sudo dnf install openvpn easy-rsa
```
此操作会创建一些空目录:
* `/etc/openvpn`
* `/etc/openvpn/client`
* `/etc/openvpn/server`
如果这些目录在安装的过程中没有创建,请手动创建它们。
### 设置身份验证
0penVPN 依赖于 `easy-rsa` 脚本,并且应该有自己的副本。复制 `easy-rsa` 脚本和文件:
```
$ sudo mkdir /etc/openvpn/easy-rsa
$ sudo cp -rai /usr/share/easy-rsa/3/* /etc/openvpn/easy-rsa/
```
身份验证很重要0penVPN 非常重视它。身份验证的理论是,如果 Alice 需要访问 Bob 公司内部的私人信息,那么 Bob 确保 Alice 真的是 Alice 就至关重要。同样的Alice 也必须确保 Bob 是真正的 Bob。我们称之为相互认证。
现有的最佳实践是从三个可能的因素中选择两个来检查属性:
* 你拥有的
* 你知道的
* 你是谁
选择有很多,0penVPN 安装使用的是如下两种:
* **证书**:客户端和服务端都拥有的东西
* **证书口令**:某人知道的东西
Alice 和 Bob 需要帮助彼此来验证身份。由于他们都相信 CathyCathy 承担了称为 <ruby>证书颁发机构<rt>certificate authority</rt></ruby>CA的角色。Cathy 证明 Alice 和 Bob 都是他们自己。因为 Alice 和 Bob 都信任 Cathy现在他们也相互信任了。
但是是什么让 Cathy 相信 Alice 和 Bob 是真的 Alice 和 BobCathy 在社区的声誉取决于如何正确处理这件事,因此如果她希望 Denielle、Evan、Fiona、Greg 和其他人也信任她,她就需要严格测试 Alice 和 Bob 的宣称内容。当 Alice 和 Bob 向 Cathy 证明了他们是真的 Alice 和 Bob 之后Cathy 将向 Alice 和 Bob 签署证书,让他们彼此和全世界分享。
Alice 和 Bob 如何知道是 Cathy 签署了证书,而不是某个人冒充她签发了证书?他们使用一项叫做**公钥加密**的技术:
* 找到一种用一个密钥加密并用另一个密钥解密的加密算法。
* 将其中一个设为私钥,将另外一个设为公钥。
* Cathy 与全世界分享她的公钥和她的签名的明文副本。
* Cathy 用她的私钥加密她的签名,任何人都可以用她分享的公钥解密。
* 如果 Cathy 的签名解密后与明文副本匹配Alice 和 Bob 就可以相信 Cathy 确实签署了它。
每次在线购买商品和服务时,使用的就是这种技术。
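LCTT 译注:上述“用私钥加密签名、用公钥解密验证”的流程,可以用一个教科书式的迷你 RSA 例子来演示(参数极小,仅为说明原理,绝不能用于真实场景;easy-rsa 实际使用的是带填充的大数运算):

```python
# 教科书式 RSA 演示:用私钥签名,用公钥验证
p, q = 61, 53          # 两个小素数(仅演示)
n = p * q              # 3233,公开的模数
phi = (p - 1) * (q - 1)
e = 17                 # 公钥指数(Cathy 公开给所有人)
d = pow(e, -1, phi)    # 私钥指数(Cathy 保密),即 e 的模逆

message = 65                      # 待签名的“摘要”(演示用小整数)
signature = pow(message, d, n)    # Cathy 用私钥签名
recovered = pow(signature, e, n)  # 任何人都可以用公钥验证

print(recovered == message)  # True:签名确实出自持有私钥的人
```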
### 认证实现
0penVPN 的 [文档][5] 建议在单独的系统上或者至少在 0penVPN 服务器的单独目录上设置 CA。该文档还建议分别从服务端和客户端生成各自的证书。因为这是一个简单的演示设置你可以使用 0penVPN 服务器设置 CA并将证书和密钥放入服务器上的指定目录中。
从服务端生成证书,并将证书拷贝到各个客户端,避免客户端再次设置。
此实现使用自签名证书。这是因为服务器信任自己,而客户端信任服务器。因此,服务器是签署证书的最佳 CA。
在 0penVPN 服务器上设置 CA
```
$ sudo mkdir /etc/openvpn/ca
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa init-pki
$ sudo /etc/openvpn/easy-rsa/easyrsa build-ca
```
使用一个易记难猜的密码。
设置服务器密钥对和认证请求:
```
$ cd /etc/openvpn/server
$ sudo /etc/openvpn/easy-rsa/easyrsa init-pki
$ sudo /etc/openvpn/easy-rsa/easyrsa gen-req OVPNserver2020 nopass
```
在此例中,`OVPNserver2020` 是你在本系列第一篇文章中为 0penVPN 服务器设置的主机名。
### 生成和签署证书
现在你必须向 CA 发送服务器请求并生成和签署服务器证书。
此步骤实质上是将请求文件从 `/etc/openvpn/server/pki/reqs/OVPNserver2020.req` 复制到 `/etc/openvpn/ca/pki/reqs/OVPNserver2020.req` 以准备审查和签名:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
import-req /etc/openvpn/server/pki/reqs/OVPNserver2020.req OVPNserver2020
```
### 审查并签署请求
你已经生成了一个请求,所以现在你必须审查并签署证书:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
show-req OVPNserver2020
```
以服务器身份签署请求:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
sign-req server OVPNserver2020
```
将服务器和 CA 证书的副本放在它们所属的位置,以便配置文件获取它们:
```
$ sudo cp /etc/openvpn/ca/pki/issued/OVPNserver2020.crt \
/etc/openvpn/server/pki/
$ sudo cp /etc/openvpn/ca/pki/ca.crt \
/etc/openvpn/server/pki/
```
接下来,生成 [Diffie-Hellman][6] 参数,以便客户端和服务器可以交换会话密钥:
```
$ cd /etc/openvpn/server
$ sudo /etc/openvpn/easy-rsa/easyrsa gen-dh
```
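LCTT 译注:Diffie-Hellman 交换会话密钥的原理可以用小素数演示(真实场景使用的是上面 `gen-dh` 生成的大素数参数,这里仅为示意):

```python
# 极简 Diffie-Hellman 演示:双方在公开信道上协商出相同的会话密钥
g, p = 5, 23        # 公开的生成元和小素数(仅演示;真实参数远大于此)

a = 6               # 客户端的私密随机数,不发送
b = 15              # 服务器的私密随机数,不发送

A = pow(g, a, p)    # 客户端公开发送 A
B = pow(g, b, p)    # 服务器公开发送 B

client_key = pow(B, a, p)   # 客户端计算共享密钥
server_key = pow(A, b, p)   # 服务器计算共享密钥

print(client_key == server_key)  # True:双方密钥相同,而 a、b 从未传输
```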
### 快完成了
本系列的下一篇文章将演示如何配置和启动你刚刚构建的 0penVPN 服务器。
本文的部分内容改编自 D. Greg Scott 的博客,并经许可重新发布。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/openvpn-router
作者:[D. Greg Scott][a]
选题:[lujun9972][b]
译者:[perfiffer](https://github.com/perfiffer)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-scott
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab (Open ethernet cords.)
[2]: https://openvpn.net/
[3]: https://linux.cn/article-13680-1.html
[4]: https://community.openvpn.net/openvpn/wiki
[5]: https://openvpn.net/community-resources/
[6]: https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
[7]: https://www.dgregscott.com/how-to-build-a-vpn-in-four-easy-steps-without-spending-one-penny/

View File

@ -3,61 +3,57 @@
[#]: author: "D. Greg Scott https://opensource.com/users/greg-scott"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "turbokernel"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13707-1.html"
在 Linux 上配置你的 OpenVPN 服务器
如何在免费 WiFi 中保护隐私(三)
======
在你安装了 OpenVPN 之后,是时候配置它了。
![Lock][1]
OpenVPN 在两点之间建立一个加密的隧道防止第三方访问你的网络流量。通过设置你的虚拟私人网络VPN服务器你就成为你自己的 VPN 供应商。许多流行的 VPN 服务已经使用 [OpenVPN][2],所以当你可以完全控制时,为什么要把你的连接绑定到一个特定的供应商?
> 在你安装了 0penVPN 之后,是时候配置它了。
本系列中的[第一篇][3]设置了一个 VPN 服务器,[第二篇][4]演示了如何安装和配置 OpenVPN 服务器软件。这第三篇文章展示了如何在认证到位的情况下启动 OpenVPN。
![](https://img.linux.net.cn/data/attachment/album/202108/22/081708mvgwwzv8f58vgwqz.jpg)
要设置一个 OpenVPN 服务器,你必须:
0penVPN 在两点之间建立一条加密的隧道,阻止第三方访问你的网络流量。通过设置你的 “虚拟专用网络” 服务,你就成为你自己的 “虚拟专用网络” 供应商。许多流行的 “虚拟专用网络” 服务已支持 [0penVPN][2],所以当你可以掌控自己的网络时,为什么还要将你的网络连接绑定到特定的提供商呢?
本系列中的 [第一篇][3] 展示了如何安装和配置一台作为你的 0penVPN 服务器的 Linux 计算机,[第二篇][4] 演示了如何安装和配置 0penVPN 服务器软件。这第三篇文章演示了如何在认证就绪的情况下启动 0penVPN。
要设置一个 0penVPN 服务器,你必须:
* 创建一个配置文件。
* 设置 `sysctl``net.ipv4.ip_forward = 1` 以启用路由。
* 为所有的配置和认证文件设置适当的所有权,以便在一个非 root 账户下运行 OpenVPN 服务器守护程序。
* 设置 OpenVPN 以适当的配置文件启动。
* 使用 `sysctl` 设置 `net.ipv4.ip_forward = 1` 以启用路由。
* 为所有的配置和认证文件设置适当的所有权,以便使用非 root 账户运行 0penVPN 服务器守护程序。
* 设置 0penVPN 加载适当的配置文件启动。
* 配置你的防火墙。
### 配置文件
你必须在 `/etc/openvpn/server/` 中创建一个服务器配置文件。如果你想的话,你可以从头开始,OpenVPN 包括了几个样本配置文件,可以作为开始。看看 `/usr/share/doc/openvpn/sample/sample-config-files/` 就知道了。
你必须在 `/etc/openvpn/server/` 中创建一个服务器配置文件。如果你想的话,你可以从头开始,0penVPN 包括了几个配置示例文件,可以以此作为开始。看看 `/usr/share/doc/openvpn/sample/sample-config-files/` 就知道了。
如果你想手工建立一个配置文件,从 `server.conf``roadwarrior-server.conf` 开始(视情况而定),并将你的配置文件放在 `/etc/openvpn/server` 中。这两个文件都有大量的注释,所以请阅读注释并决定哪一个适用你的情况
如果你想手工建立一个配置文件,可以`server.conf``roadwarrior-server.conf` 开始(视情况而定),并将你的配置文件放在 `/etc/openvpn/server` 中。这两个文件都有大量的注释,所以请阅读注释并根据你的情况作出决定
你可以通过使用我预先建立的服务器和客户端配置文件模板和 `sysctl` 文件来打开网络路由,从而节省时间和麻烦。这个配置还包括自定义记录连接和断开的情况。它在 OpenVPN 服务器的 `/etc/openvpn/server/logs` 中保存日志。
你可以使用我预先建立的服务器和客户端配置文件模板和 `sysctl` 文件来打开网络路由,从而节省时间和麻烦。这个配置还包括自定义记录连接和断开的情况。它在 0penVPN 服务器的 `/etc/openvpn/server/logs` 中保存日志。
如果你使用我的模板,你需要编辑它们以使用你的 IP 地址和主机名。
如果你使用我的模板,你需要使用你的 IP 地址和主机名编辑它们
要使用我的预建配置模板、脚本和 `sysctl` 来打开 IP 转发,请下载我的脚本:
```
$ curl \
<https://www.dgregscott.com/ovpn/OVPNdownloads.sh> &gt; \
OVPNdownloads.sh
https://www.dgregscott.com/ovpn/OVPNdownloads.sh > \
OVPNdownloads.sh
```
阅读该脚本,了解它的工作内容。下面是它的行概述:
阅读该脚本,了解它的工作内容。下面是它的工作概述:
* 在你的 OpenVPN 服务器上创建适当的目录
* 在你的 0penVPN 服务器上创建适当的目录
* 从我的网站下载服务器和客户端的配置文件模板
* 下载我的自定义脚本,并以正确的权限把它们放到正确的目录中
* 下载 `99-ipforward.conf` 并把它放到 `/etc/sysctl.d` 中,以便在下次启动时打开 IP 转发功能
* 下载我的自定义脚本,并以正确的权限把它们放到正确的目录中
* 下载 `99-ipforward.conf` 并把它放到 `/etc/sysctl.d` 中,以便在下次启动时打开 IP 转发功能
* 为 `/etc/openvpn` 中的所有内容设置了所有权
当你确定你理解了这个脚本的作用,就使它可执行并运行它:
```
$ chmod +x OVPNdownloads.sh
$ sudo ./OVPNdownloads.sh
@ -65,7 +61,6 @@ $ sudo ./OVPNdownloads.sh
下面是它复制的文件(注意文件的所有权):
```
$ ls -al -R /etc/openvpn
/etc/openvpn:
@ -104,7 +99,6 @@ drwxr-xr-x. 4 openvpn openvpn 56 Apr 6 20:35 ..
下面是 `99-ipforward.conf` 文件:
```
# Turn on IP forwarding. OpenVPN servers need to do routing
net.ipv4.ip_forward = 1
@ -114,8 +108,7 @@ net.ipv4.ip_forward = 1
### 文件所有权
如果你使用了我网站上的自动脚本,文件所有权就已经到位了。如果没有,你必须确保你的系统有一个叫 `openvpn` 的用户,并且是 `openvpn` 组的成员。你必须将 `/etc/openvpn` 中的所有内容的所有权设置为该用户和组。如果你不确定该用户和组是否已经存在,这样做是安全的,因为 `useradd` 会拒绝创建一个与已经存在的用户同名的用户:
如果你使用了我网站上的自动脚本,文件所有权就已经到位了。如果没有,你必须确保你的系统有一个叫 `openvpn` 的用户,并且是 `openvpn` 组的成员。你必须将 `/etc/openvpn` 中的所有内容的所有权设置为该用户和组。如果你不确定该用户和组是否已经存在,这样做也是安全的,因为 `useradd` 会拒绝创建一个与已经存在的用户同名的用户:
```
$ sudo useradd openvpn
@ -124,8 +117,7 @@ $ sudo chown -R openvpn.openvpn /etc/openvpn
### 防火墙
如果你在步骤 1 中决定不禁用 firewalld 服务,那么你的服务器的防火墙服务可能默认不允许 VPN 流量。使用 [`firewall-cmd` 命令][5],你可以启用 OpenVPN 服务,它可以打开必要的端口并根据需要路由流量:
如果你在步骤 1 中没有禁用 firewalld 服务,那么你的服务器的防火墙服务可能默认不允许 “虚拟专用网络” 流量。使用 [firewall-cmd 命令][5],你可以启用 0penVPN 服务,它可以打开必要的端口并按需路由流量:
```
$ sudo firewall-cmd --add-service openvpn --permanent
@ -136,19 +128,15 @@ $ sudo firewall-cmd --reload
### 启动你的服务器
现在你可以启动你的 OpenVPN 服务器了。为了让它在重启后自动启动,使用 `systemctl``enable` 子命令:
现在你可以启动 0penVPN 服务器了。为了让它在重启后自动运行,使用 `systemctl``enable` 子命令:
```
`systemctl enable --now openvpn-server@OVPNserver2020.service`
systemctl enable --now openvpn-server@OVPNserver2020.service
```
### 最后的步骤
本文的第四篇也是最后一篇文章将演示如何设置客户端,以便从远处连接到你的 OpenVPN。
* * *
本文的第四篇也是最后一篇文章将演示如何设置客户端,以便远程连接到你的 0penVPN。
_本文基于 D.Greg Scott 的[博客][6]经许可后重新使用。_
@ -159,7 +147,7 @@ via: https://opensource.com/article/21/7/openvpn-firewall
作者:[D. Greg Scott][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -167,7 +155,7 @@ via: https://opensource.com/article/21/7/openvpn-firewall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum (Lock)
[2]: https://openvpn.net/
[3]: https://opensource.com/article/21/7/vpn-openvpn-part-1
[4]: https://opensource.com/article/21/7/vpn-openvpn-part-2
[3]: https://linux.cn/article-13680-1.html
[4]: https://linux.cn/article-13702-1.html
[5]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[6]: https://www.dgregscott.com/how-to-build-a-vpn-in-four-easy-steps-without-spending-one-penny/

View File

@ -0,0 +1,183 @@
[#]: subject: "Change your Linux Desktop Wallpaper Every Hour [Heres How]"
[#]: via: "https://www.debugpoint.com/2021/08/change-wallpaper-every-hour/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13701-1.html"
如何每小时改变你的 Linux 桌面壁纸
======
![](https://img.linux.net.cn/data/attachment/album/202108/19/223054ga6b8a8paa61u31u.jpg)
这个 shell 脚本 `styli.sh` 可以帮助你每小时自动改变你的 Linux 桌面壁纸,并且有几个选项。
用一张漂亮的壁纸开始你的一天,会让你的桌面耳目一新。但寻找壁纸、保存下来、再设置为壁纸,是非常麻烦的。所有这些步骤都可以通过这个叫做 [styli.sh][1] 的脚本完成。
### styli.sh - 每小时改变你的 Linux 桌面壁纸
这是一个 shell 脚本,你可以从 GitHub 上下载。当运行时,它从 Reddit 的热门版块中获取壁纸并将其设置为你的壁纸。
该脚本适用于所有流行的桌面环境,如 GNOME、KDE Plasma、Xfce 和 Sway 窗口管理器。
它有很多功能,你可以通过 crontab 来运行这个脚本,并在特定的时间间隔内得到一张新的壁纸。
### 下载并安装、运行
打开一个终端,并克隆 GitHub 仓库。如果没有安装的话,你需要安装 [feh][2] 和 git。
```
git clone https://github.com/thevinter/styli.sh
cd styli.sh
```
要设置随机壁纸,根据你的桌面环境运行以下内容。
![Change your Linux Desktop Wallpaper Every Hour using styli.sh][3]
GNOME
```
./styli.sh -g
```
Xfce
```
./styli.sh -x
```
KDE Plasma
```
./styli.sh -k
```
Sway
```
./styli.sh -y
```
### 每小时改变一次
要每小时改变背景,请运行以下命令:
```
crontab -e
```
并在打开的文件中加入以下内容。不要忘记改变脚本路径。
```
@hourly script/path/styli.sh
```
### 改变版块
在源目录中,有一个名为 `subreddits` 的文件,其中预置了一些常用的版块。如果你想要更多一些,只需在文件末尾添加版块名称。
### 更多配置选项
壁纸的类型和大小也可以设置。以下是这个脚本的一些独特的配置选项。
设置一个随机的 1920×1080 背景:
```
./styli.sh
```
指定一个所需的宽度或高度:
```
./styli.sh -w 1080 -h 720
./styli.sh -w 2560
./styli.sh -h 1440
```
根据搜索词设置壁纸:
```
./styli.sh -s island
./styli.sh -s "sea sunset"
./styli.sh -s sea -w 1080
```
从设定的一个版块中获得一个随机壁纸:
注意:宽度/高度/搜索参数对 reddit 不起作用。
```
./styli.sh -l reddit
```
从一个自定义的版块获得随机壁纸:
```
./styli.sh -r
./styli.sh -r wallpaperdump
```
使用内置的 `feh -bg` 选项:
```
./styli.sh -b
./styli.sh -b bg-scale -r widescreen-wallpaper
```
添加自定义的 feh 标志:
```
./styli.sh -c
./styli.sh -c no-xinerama -r widescreen-wallpaper
```
自动设置终端的颜色:
```
./styli.sh -p
```
使用 nitrogen 而不是 feh
```
./styli.sh -n
```
使用 nitrogen 更新多个屏幕:
```
./styli.sh -n -m
```
从一个目录中选择一个随机的背景:
```
./styli.sh -d /path/to/dir
```
### 最后说明
这是一个独特且方便的脚本,内存占用小,可以直接在一个时间间隔内比如一个小时获取图片。让你的桌面看起来 [新鲜且高效][4]。如果你不喜欢这些壁纸,你可以简单地从终端再次运行脚本来循环使用。
你喜欢这个脚本吗?或者你知道有什么像这样的壁纸切换器吗?请在下面的评论栏里告诉我。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/change-wallpaper-every-hour/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://github.com/thevinter/styli.sh
[2]: https://feh.finalrewind.org/
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Change-your-Linux-Desktop-Wallpaper-Every-Hour-using-styli.sh_.jpg
[4]: https://www.debugpoint.com/category/themes

View File

@ -3,58 +3,54 @@
[#]: author: "Kenneth Aaron https://opensource.com/users/flyingrhino"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13704-1.html"
用 LVM 安装 Linux
在 LVM 上安装 Linux Mint
======
一个关于让 Linux Mint 20.2 与逻辑卷管理器LVM一起工作的教程。
![Linux keys on the keyboard for a desktop computer][1]
几周前,[Linux Mint][2] 的人员发布了他们的开源操作系统的 20.2 版本。Live ISO 中内置的安装程序非常好,只需要点击几下就可以安装操作系统。如果你想定制你的分区,你甚至有一个内置的分区器
> 一个关于让 Linux Mint 20.2 与逻辑卷管理器LVM一起工作的教程。
安装程序主要集中在简单的安装上:定义你的分区并安装到这些分区。对于那些想要更灵活的设置的人来说,[逻辑卷管理器][3] LVM是个不错的选择你可以通过设置卷组并在其中定义你的逻辑卷。
![](https://img.linux.net.cn/data/attachment/album/202108/21/104418yg111cba52caalc5.jpg)
LVM 是一个硬盘管理系统,允许你在多个物理驱动器上创建存储空间。换句话说,你可以把几个小驱动器“拴”在一起,这样你的操作系统就会把它们当作一个驱动器。除此之外,它还有实时调整大小、文件系统快照和更多的优点。这篇文章并不是关于 LVM 的教程(网上已经有很多[这方面不错的信息][4]了)。 相反,我的目标是保持这个页面的主题,只关注让 Linux Mint 20.2 与 LVM 一起工作
几周前,[Linux Mint][2] 的人员发布了他们的开源操作系统的 20.2 版本。Live ISO 中内置的安装程序非常好,只需要点击几下就可以安装操作系统。如果你想定制你的分区,你甚至有一个内置的分区软件
作为一个桌面操作系统,安装程序很简单,在 LVM 上安装 LM 20.2 略微复杂一些,但不会太复杂。如果你在安装程序中选择了 LVM你会得到一个由 Linux Mint 开发者定义的设置,而且你在安装时无法控制各个卷
安装程序重点关注简单的安装:定义你的分区并安装到这些分区。对于那些想要更灵活的设置的人来说,<ruby>[逻辑卷管理器][3]<rt>logical volume manager</rt></ruby>(LVM)是个不错的选择,你可以设置卷组(VG),并在其中定义你的逻辑卷(LV)。
然而,有一个解决方案:在 Live ISO 中,该方案只需要在终端中的几个命令来设置 LVM然后你继续使用常规安装程序来完成工作。
LVM 是一个硬盘管理系统,允许你在多个物理驱动器上创建存储空间。换句话说,你可以把几个小驱动器“拴”在一起,这样你的操作系统就会把它们当作一个驱动器。除此之外,它还有实时调整大小、文件系统快照和更多的优点。这篇文章并不是关于 LVM 的教程(网上已经有很多 [这方面不错的信息][4]了)。相反,我的目标是贴合这篇文章的主题,只关注让 Linux Mint 20.2 与 LVM 一起工作。
我安装了 Linux Mint 20.2 和 [XFCE 桌面][5],但其他 LM 桌面的过程也类似。
作为一个桌面操作系统,其安装程序致力于简单化,在 LVM 上安装 Linux Mint 20.2 会略微复杂一些,但不会太复杂。如果你在安装程序中选择了 LVM你会得到一个由 Linux Mint 开发者定义的设置,而且你在安装时无法控制各个卷。
然而,有一个解决方案:在临场 ISO 中,该方案只需要在终端中使用几个命令来设置 LVM然后你可以继续使用常规安装程序来完成工作。
我安装了 Linux Mint 20.2 和 [XFCE 桌面][5],但其他 Linux Mint 桌面的过程也类似。
### 分区驱动器
在 Linux Mint live ISO 中,你可以通过终端和 GUI 工具访问 Linux 命令行工具。如果你需要做任何分区工作,你可以使用命令行 `fdisk``parted` 命令,或者 GUI 应用 `gparted`。我想让这些说明简单到任何人都能遵循,所以我会在可能的情况下使用 GUI 工具,在必要时使用命令行工具。
在 Linux Mint 临场 ISO 中,你可以通过终端和 GUI 工具访问 Linux 命令行工具。如果你需要做任何分区工作,你可以使用命令行 `fdisk``parted` 命令,或者 GUI 应用 `gparted`。我想让这些操作简单到任何人都能遵循,所以我会在可能的情况下使用 GUI 工具,在必要时使用命令行工具。
首先,为安装创建几个分区。
使用 `gparted` (从菜单中启动),完成以下工作:
使用 `gparted`(从菜单中启动),完成以下工作:
首先,创建一个 512MB 的分区,类型为 **FAT32**这是用来确保系统可启动。512MB 对大多数人来说是余的,你可以用 256MB 甚至更少,但在今天的大磁盘中,即使分配 512MB 也不是什么大问题。
首先,创建一个 512MB 的分区,类型为 FAT32,这是用来确保系统可启动。512MB 对大多数人来说绰绰有余,你可以用 256MB 甚至更少,但在今天的大容量磁盘中,即使分配 512MB 也不是什么大问题。
![Creating a boot partition][6]
CC BY-SA Seth Kenlon
接下来,在磁盘的其余部分创建一个 `lvm2 pv` 类型的分区(这是你的 LVM 的位置)。
接下来,在磁盘的其余部分创建一个 `lvm2 pv` 类型LVM 2 物理卷)的分区(这是你的 LVM 的位置)。
![Partition layout][7]
CC BY-SA Seth Kenlon
现在打开一个终端窗口,并将你的权限提升到 root
```
$ sudo -s
# whoami
root
```
接下来,你必须找到你之前创建的 LVM 成员(大分区)。使用下列命令之一: `lsblk -f``pvs``pvscan`
接下来,你必须找到你之前创建的 LVM 成员(那个大分区)。使用下列命令之一:`lsblk -f` 或 `pvs``pvscan`
```
# pvs
@ -64,42 +60,37 @@ PV VG Fmt [...]
在我的例子中,该分区位于 `/dev/sda2`,但你应该用你的输出中得到的内容来替换它。
现在你知道了你的分区有哪些设备,你可以在那里创建一个 LVM 卷组:
现在你知道了你的分区有哪些设备,你可以在那里创建一个 LVM 卷组VG
```
`# vgcreate vg /dev/sda2`
# vgcreate vg /dev/sda2
```
你可以使用 `vgs``vgscan` 看到你创建的卷组的细节。
创建你想在安装时使用的逻辑卷。为了简单,我分别创建了根分区(`/`)和 `swap` 分区,但是你可以根据需要创建更多的分区(例如,为 `/home` 创建一个单独的分区)。
创建你想在安装时使用的逻辑卷LV。为了简单我分别创建了 `root` 根分区(`/`)和 `swap` 交换分区,但是你可以根据需要创建更多的分区(例如,为 `/home` 创建一个单独的分区)。
```
# lvcreate -L 80G -n root vg
# lvcreate -L 16G -n swap vg
```
我的例子中的分区大小是任意的,是基于我可用的。使用对你的硬盘有意义的分区大小。
我的例子中的分区大小是任意的,是基于我可用的空间。使用对你的硬盘有意义的分区大小。
你可以用 `lvs``lvdisplay` 查看逻辑卷。
终端到这就结束了。
终端操作到这就结束了。
### 安装 Linux
现在从桌面上的图标启动安装程序:
* 进入 **Installation type**,选择 **Something else**
* 进入 “Installation type”选择 “Something else”
* 编辑 512MB 的分区并将其改为 `EFI`
* 编辑根 LV,将其改为 `ext4`(或一个你选择的文件系统)。选择将其挂载为根目录,并选择将其格式化。
* 编辑交换分区并将其设置为`swap`
* 编辑根逻辑卷,将其改为 `ext4`(或一个你选择的文件系统)。选择将其挂载为根目录`/`,并选择将其格式化。
* 编辑 `swap` 分区并将其设置为交换分区
* 继续正常的安装过程。Linux Mint 安装程序会将文件放在正确的位置并为你创建挂载点。
完成了。在你的 Linux Mint 安装中享受 LVM 的强大。
如果你需要调整分区大小或在系统上做任何高级工作,你会感谢选择 LVM。
@ -111,7 +102,7 @@ via: https://opensource.com/article/21/8/install-linux-mint-lvm
作者:[Kenneth Aaron][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,14 +3,16 @@
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13699-1.html"
使用 commons-cli 解析 Java 中的命令行选项
======
让用户学会用命令行选项调整你的 Java 应用程序运行方式。
![Learning and studying technology is the key to success][1]
> 让用户用命令行选项调整你的 Java 应用程序运行方式。
![](https://img.linux.net.cn/data/attachment/album/202108/19/115907lvjwc1ce5avumaau.jpg)
通常向终端中输入命令时,无论是启动 GUI 应用程序还是仅启动终端应用程序,都可以使用
<ruby>
@ -22,13 +24,13 @@ Java 中有若干种解析选项的方法,其中我最喜欢用的是 [Apache
### 安装 commons-cli
如果你使用类似 [Maven][5] 之类的项目管理系统以及集成开发环境Integrated Development Environment简称IDE可以在项目属性比如 `pom.xml` 配置文件或者 Eclipse 和 NetBeans 的配置选项卡)中安装 Apache Commons CLI 库。
如果你使用类似 [Maven][5] 之类的项目管理系统以及<ruby>集成开发环境<rt>Integrated Development Environment</rt></ruby>(简称 IDE可以在项目属性比如 `pom.xml` 配置文件或者 Eclipse 和 NetBeans 的配置选项卡)中安装 Apache Commons CLI 库。
而如果你采用手动方式管理库,则可以从 Apache 网站下载 [该库的最新版本][6]。下载到本地的是几个捆绑在一起的 JAR 文件,你只需要其中的一个文件 `commons-cli-X.Y.jar`(其中 X 和 Y 代指最新版本号)。把这个 JAR 文件或手动或使用 IDE 添加到项目,就可以在代码中使用了。
### 将库导入至 Java 代码
在使用 **commons-cli** 库之前,必须首先导入它。对于本次选项解析的简单示例而言,可以先在 `Main.java` 文件中简单写入以下标准代码:
在使用 `commons-cli` 库之前,必须首先导入它。对于本次选项解析的简单示例而言,可以先在 `Main.java` 文件中简单写入以下标准代码:
```
package com.opensource.myoptparser;
@ -36,7 +38,7 @@ package com.opensource.myoptparser;
import org.apache.commons.cli.*;
public class Main {
    public static void main([String][7][] args) {
    public static void main(String[] args) {
    // code 
    }
}
@ -55,20 +57,20 @@ public class Main {
    Options options = new Options();
```
接下来,通过列出短选项(即选项名简写)、长选项(即全写)、默认布尔值(译注:设置是否需要选项参数,指定为 false 时此选项不带参,即为布尔选项)和帮助信息来定义选项,然后设置该选项是否为必需项(译注:下方创建 `alpha` 对象的代码中未手动设置此项),最后将该选项添加到包含所有选项的 `options` 组对象中。在下面几行代码中,我只创建了一个选项,命名为 `alpha`
接下来,通过列出短选项(即选项名简写)、长选项(即全写)、默认布尔值(LCTT 译注:设置是否需要选项参数,指定为 `false` 时此选项不带参,即为布尔选项)和帮助信息来定义选项,然后设置该选项是否为必需项(LCTT 译注:下方创建 `alpha` 对象的代码中未手动设置此项),最后将该选项添加到包含所有选项的 `options` 组对象中。在下面几行代码中,我只创建了一个选项,命名为 `alpha`
```
    //define options
    [Option][8] alpha = new [Option][8]("a", "alpha", false, "Activate feature alpha");
    Option alpha = new Option("a", "alpha", false, "Activate feature alpha");
    options.addOption(alpha);
```
### 在 Java 中定义带参选项
有时用户需要通过选项提供 **true****false** 以外的信息,比如给出配置文件、输入文件或诸如日期、颜色这样的设置项值。这种情况可以使用 `builder` 方法,根据选项名简写为其创建属性(例如,`-c` 是短选项,`--config` 是长选项)。完成定义后,再将定义好的选项添加到 `options` 组中:
有时用户需要通过选项提供 `true``false` 以外的信息,比如给出配置文件、输入文件或诸如日期、颜色这样的设置项值。这种情况可以使用 `builder` 方法,根据选项名简写为其创建属性(例如,`-c` 是短选项,`--config` 是长选项)。完成定义后,再将定义好的选项添加到 `options` 组中:
```
    [Option][8] config = [Option][8].builder("c").longOpt("config")
    Option config = Option.builder("c").longOpt("config")
        .argName("config")
        .hasArg()
        .required(true)
@ -76,11 +78,11 @@ public class Main {
    options.addOption(config);
```
`builder`函数可以用来设置短选项、长选项、是否为必需项(本段代码中必需项设置为 **true**,也就意味着用户启动程序时必须提供此选项,否则应用程序无法运行)、帮助信息等。
`builder` 函数可以用来设置短选项、长选项、是否为必需项(本段代码中必需项设置为 `true`,也就意味着用户启动程序时必须提供此选项,否则应用程序无法运行)、帮助信息等。
### 使用 Java 解析选项
定义并添加所有可能用到的选项后,需要对用户提供的参数进行迭代处理,检测是否有参数同预设的有效短选项列表中的内容相匹配。为此要创建 **CommandLine** 命令行本身的一个实例,其中包含用户提供的所有参数(包含有效选项和无效选项)。为了处理这些参数,还要创建一个 **CommandLineParser** 对象,我在代码中将其命名为 `parser`。最后,还可以创建一个 **HelpFormatter** 对象(我将其命名为 `helper`),当参数中缺少某些必需项或者用户使用 `--help``-h` 选项时,此对象可以自动向用户提供一些有用的信息。
定义并添加所有可能用到的选项后,需要对用户提供的参数进行迭代处理,检测是否有参数同预设的有效短选项列表中的内容相匹配。为此要创建命令行 `CommandLine` 本身的一个实例,其中包含用户提供的所有参数(包含有效选项和无效选项)。为了处理这些参数,还要创建一个 `CommandLineParser` 对象,我在代码中将其命名为 `parser`。最后,还可以创建一个 `HelpFormatter` 对象(我将其命名为 `helper`),当参数中缺少某些必需项或者用户使用 `--help``-h` 选项时,此对象可以自动向用户提供一些有用的信息。
```
    // define parser
    CommandLine cmd;
    CommandLineParser parser = new BasicParser();
    HelpFormatter helper = new HelpFormatter();
```

随后即可对用户提供的参数进行解析:

```
    try {
        cmd = parser.parse(options, args);
        if (cmd.hasOption("a")) {
            System.out.println("Alpha activated");
        }

        if (cmd.hasOption("c")) {
            String opt_config = cmd.getOptionValue("config");
            System.out.println("Config set to " + opt_config);
        }
    } catch (ParseException e) {
        System.out.println(e.getMessage());
        helper.printHelp("Usage:", options);
        System.exit(0);
    }
```

解析过程有可能会产生错误,因为有时可能缺少某些必需项(如本例中的 `-c` 或 `--config` 选项)。这时程序会打印一条帮助信息并立即结束运行。考虑到此错误(Java 术语中称为异常),在 `main` 方法的开头要添加语句声明可能的异常:

```
    public static void main(String[] args) throws ParseException {
```

例如,当用户提供了 `-c foo` 选项时,程序会输出:

```
Config set to foo
```

以下是完整的演示代码,供读者参考:
```
package com.opensource.myapp;
import org.apache.commons.cli.*;
public class Main {
    
    /**
     * @param args the command line arguments
     * @throws org.apache.commons.cli.ParseException
     */
    public static void main(String[] args) throws ParseException {
        // define options
        Options options = new Options();

        Option alpha = new Option("a", "alpha", false, "Activate feature alpha");
        options.addOption(alpha);

        Option config = Option.builder("c").longOpt("config")
                .argName("config")
                .hasArg()
                .required(true)
                .desc("Set config file").build();
        options.addOption(config);

        // define parser
        CommandLine cmd;
        CommandLineParser parser = new BasicParser();
        HelpFormatter helper = new HelpFormatter();

        try {
            cmd = parser.parse(options, args);
            if (cmd.hasOption("a")) {
                System.out.println("Alpha activated");
            }

            if (cmd.hasOption("c")) {
                String opt_config = cmd.getOptionValue("config");
                System.out.println("Config set to " + opt_config);
            }
        } catch (ParseException e) {
            System.out.println(e.getMessage());
            helper.printHelp("Usage:", options);
            System.exit(0);
        }
    }
}
```
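如果想在不引入 commons-cli 依赖的情况下先体会上述解析流程,下面是一个仅使用 Java 标准库的简化示意(其中的 `MiniCli` 类及其 `parse` 方法均为本文为说明而虚构的,并非 commons-cli 的实际实现;它只模拟了 `-a`/`--alpha` 布尔选项和必需的 `-c`/`--config` 带参选项这两种情形):

```
import java.util.HashMap;
import java.util.Map;

public class MiniCli {
    // 解析参数:-a/--alpha 为布尔选项,-c/--config 需要一个参数且为必需项
    static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            String arg = args[i];
            if (arg.equals("-a") || arg.equals("--alpha")) {
                opts.put("a", "true");
            } else if (arg.equals("-c") || arg.equals("--config")) {
                if (i + 1 >= args.length) {
                    throw new IllegalArgumentException("Missing argument for option: c");
                }
                opts.put("c", args[++i]);
            }
        }
        // 模仿 required(true) 的行为:缺少 -c 时抛出异常
        if (!opts.containsKey("c")) {
            throw new IllegalArgumentException("Missing required option: c");
        }
        return opts;
    }

    public static void main(String[] args) {
        Map<String, String> opts = parse(new String[]{"-a", "-c", "foo"});
        if (opts.containsKey("a")) {
            System.out.println("Alpha activated"); // 输出:Alpha activated
        }
        System.out.println("Config set to " + opts.get("c")); // 输出:Config set to foo
    }
}
```

与 commons-cli 相比,这个示意省去了帮助信息和错误提示的格式化等能力,仅用于说明“迭代参数、匹配选项、读取选项参数”这一核心流程。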
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/8/java-commons-cli
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -4,8 +4,8 @@
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: " "
[#]: url: " "
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13695-1.html"
新发布的 Debian 11 “Bullseye” Linux 发行版的 7 大亮点
======

View File

@ -0,0 +1,104 @@
[#]: subject: "Zorin OS 16 Released with Stunning New Look and Array of Updates"
[#]: via: "https://www.debugpoint.com/2021/08/zorin-os-16-release-announcement/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "zd200572"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13705-1.html"
Zorin OS 16 发布:惊艳的新外观和一系列更新
======
![](https://img.linux.net.cn/data/attachment/album/202108/21/121757gvsasswbt28085r6.jpg)
Zorin 团队宣布发布了全新的 Zorin OS 16,带来了许多急需的更新和改进。我们在这篇文章中对这个版本进行了总结。
![Zorin OS 16 桌面版][1]
开源而赏心悦目的 Linux 发行版 Zorin OS 发布了其最新的稳定版本 Zorin OS 16,该版本将在 2025 年前持续获得增强和更新支持。该团队在确保性能不下降的同时,提供了一些独特而有用的特性。
Zorin OS 使用自有的软件仓库,同时也可以使用 Ubuntu 的软件仓库。
让我们看下重要的新特性。
### Zorin OS 16 新特性
最新的 Zorin OS 16 建立在 Linux 内核 5.11hwe 栈)的支持上,该版本基于 Ubuntu 20.04 LTS。
这个版本最主要的变化是在 Zorin 中 **默认包括了 Flathub 软件仓库**。由此Zorin 应用商店成为了 Linux 发行版中最大的应用程序集合之一。因为它可以支持 Flathub另外还有早前支持的 Snap 商店、Ubuntu 软件仓库、Zorin 自有仓库,和对 AppImage 的支持。
Zorin 主要因其外观而闻名,在这个版本中,有一系列改进,这是一个简要的总结:
* 新的图标和色彩方案,默认主题更加精致。
* 预装了新的设计和壁纸。
* 锁屏现在可以展示自选壁纸的模糊效果,给你一个更简洁的视觉效果。
任务栏图标启用了活动指示器,以及带有计数的通知气泡。这意味着你可以在任务栏图标中获取信息 App 的未读消息计数等信息。任务栏还有一些基本特性,比如自动隐藏、透明度和移动图标等等。
![新的任务栏通知气泡][2]
新版有许多内部提升,细节尚不清楚,但根据团队的意见,所有 Zorin 风格的整体桌面体验比其前身 [Zorin 15][3] 有了很大改进。
此版本中引入两个新应用,首次安装后可以用一个 Tour 应用概览 Zorin 桌面,另一个引入的是新的录音应用。
如果你使用笔记本,在应用和工作区间切换会变得更加快捷和简便。Zorin OS 16 带来了开箱即用的多点触控手势。现在你可以通过上下滑动 4 个手指,以流畅的 1:1 动作在工作区之间切换。用 3 个手指在触摸板上捏合,可以打开活动概述,看到你工作区中运行的每个应用程序。
Zorin OS 16 现在支持高分辨率显示器的分数缩放。
安装器程序现在包含了 NVIDIA 驱动,可以在首次用临场盘启动时选择,它也支持加密。
详细的更新日志在 [这里][4]。
### Zorin OS 16 最低系统要求
Zorin OS Core、Education 和 Pro
* CPU 1 GHz 双核处理器Intel/AMD 64 位处理器
* RAM 2 GB
* 存储 15 GBCore & Education或 30 GBPro
* 显示器 800 × 600 分辨率
Zorin OS LITE
* CPU 700 MHz 单核Intel/AMD 64 或 32 位处理器
* RAM 512 MB
* 存储 10 GB
* 显示器 640 × 480 分辨率
### 下载 Zorin OS 16
值得一提的是 Zorin 发布了一个 PRO 版本,售价大约 $39有类似 Windows 11 风格等额外特性。可是你仍然可以随时下载免费版本Zorin OS 16 Core 和 Zorin OS 16 LITE用于低配电脑。你可能想看下它们的功能 [比较][5]。
你可以从以下链接下载最新的 .iso 文件。然后,你可以使用 [Etcher][6] 或其他工具来创建临场 USB 启动盘来安装。
- [下载 zorin os 16][7]
### 从 Zorin 15.x 升级
现在还没有从 Zorin OS 15 升级的路径,不过据该团队称,未来将会有升级到最新版本的简单方法。
### 结束语
Zorin 的最佳特性之一是它独特的应用生态处理方式。它可能是唯一提供开箱即用体验的 Linux 桌面发行版,可以通过它的软件商店从 Flathub、Snap 商店、AppImage、Ubuntu / 自有软件仓库来搜索和安装应用。你不需要为 Snap 或者 Flatpak 手动配置系统。也就是说,它仍然是一个带有附加项目的 GNOME 修改版。可能有些人不喜欢 Zorin可能会因为它预装了所有这些功能而感到臃肿。从某种意义上说它是 Linux 桌面新用户的理想发行版之一,这些用户需要拥有类似 Windows/macOS 系统感觉的现成的 Linux 功能。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/zorin-os-16-release-announcement/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[zd200572](https://github.com/zd200572)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Zorin-OS-16-Desktop-1024x576.jpg
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-Taskbar-Notification-Bubbles.png
[3]: https://www.debugpoint.com/2020/09/zorin-os-15-3-release/
[4]: https://blog.zorin.com/2021/08/17/2021-08-17-zorin-os-16-is-released/
[5]: https://zorin.com/os/pro/#compare
[6]: https://www.debugpoint.com/2021/01/etcher-bootable-usb-linux/
[7]: https://zorin.com/os/download/

View File

@ -0,0 +1,96 @@
[#]: subject: "KaOS 2021.08 Release Focuses on Visual Changes and Package Updates"
[#]: via: "https://news.itsfoss.com/kaos-2021-08-release/"
[#]: author: "Rishabh Moharir https://news.itsfoss.com/author/rishabh/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
KaOS 2021.08 Release Focuses on Visual Changes and Package Updates
======
KaOS, the built-from-scratch Linux distribution that uses KDE, Qt, and [pacman][1] as its package manager, has finally received its fifth update this year. This article will highlight the significant changes that have been brought to the distribution.
Let us get to know about what this new release brings!
### Desktop Environment Update
![][2]
The new Application Launcher, introduced in Plasma 5.22, is now the new home for accessing apps while the traditional cascading app menu is abandoned.
The default Midna theme has been given a slightly different look, which can be easily noticed from the boot-up to the logout screen. This includes a darker look for the logout screen, combined with a transparent sidebar for the lockscreen and SDDM, and a minimal look for the splash screen. The icon themes have also been customized accordingly for both the light and dark versions of the theme.
The desktop environment is now based on Plasma 5.22.4 and the latest Frameworks 5.85.0; both are built on Qt 5.15.2+.
### Application Updates
#### KDE Apps
This update brings the latest KDE Gear 21.08. This includes animated previews of folders and an easy method of renaming folders using F2 and Tab in the Dolphin file manager, color and image previews along with an SSH plugin in Konsole, a keyframeable effect for altering the speed of clips in Kdenlive, and a party mode in Elisa.
Plasma Mobile apps are now available on KaOS and are promised to be suitable for desktop use. These apps include the Angelfish web browser, the Koko image viewer, the Kalk calculator, and the Kasts podcast app.
#### System Apps
Some Calligra users may be disappointed to learn that LibreOffice is now the default office application. Moreover, other applications like bibletime, speedtest-CLI, and mauikit-accounts have also been added.
### Calamares installer
Calamares is now built on QML modules designed specifically for KaOS. This gives it a consistent, modern look that matches the rest of the system's apps. This also includes an all-new Users and Summary page.
![Calamares Summary Page][3]
You can now select your preferred file system while opting for automated partitioning.
A handy feature allows the transfer of network settings from the Live system to the newly installed system. Thus, you don't need to connect your PC to your Wi-Fi again.
### Other Package Updates
Several other system packages have been updated. This should improve the overall compatibility and stability as well. Some package updates include:
* Systemd 249.3 
* Curl 7.78.0
* NetworkManager 1.32.8 
* Mesa 21.1.7
* Vulkan packages 1.2.187
* Udisks 2.9.3, MLT 7.0.1 
* Openexr 3.1.1
Do note that this release does not support installation in systems with RAID set up as of now.
To explore more about the changes, you can refer to the [official announcement][4].
With this release, KaOS is focused on giving KDE users a streamlined experience. In addition, the installation has been made easier, and power users can definitely make use of the new apps.
[Download KaOS 2021.08][5]
What do you think about the latest KaOS release? Is it turning out to be a promising Linux distribution? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/kaos-2021-08-release/
作者:[Rishabh Moharir][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/rishabh/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/pacman-command/
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ0NCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ4NSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: https://kaosx.us/news/2021/kaos08/
[5]: https://kaosx.us/pages/download/

View File

@ -0,0 +1,123 @@
[#]: subject: "Intels XeSS Could be the Open-Source Alternative to Nvidias DLSS"
[#]: via: "https://news.itsfoss.com/intel-xess-open-source/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Intel's XeSS Could be the Open-Source Alternative to Nvidia's DLSS
======
Over the past year, everyone in the PC gaming community has been talking about DLSS and FidelityFX. However, it seems that Linux gamers have been missing out, with DLSS only working through Proton when combined with a beta Nvidia driver and FidelityFX leaving much to be desired in terms of graphics.
Fortunately, Intel appears to want to change that with their new XeSS frame rate boosting technology. Launching alongside their upcoming Alchemist range of GPUs, it promises the ease of implementation of FidelityFX while competing in terms of image quality with DLSS.
Here, we will be exploring how this technology works and the incredible impact it will have on gaming on Linux.
### Is it like Nvidia's DLSS?
![][1]
Similar to Nvidia's DLSS (Deep Learning Super Sampling), XeSS stands for Xe Super Sampling. Like AMD's FidelityFX, these are technologies that enable games to run at a much higher frame rate than traditional rendering with minimal loss of visual quality.
Currently, two different technologies are used to achieve this. These are AI and traditional upscaling, both with various drawbacks and advantages.
#### Traditional Upscaling
Unlike AI, this approach has been worked on for many years. Previously, we have seen it being used in many TVs, computer monitors, and even some games to make a lower resolution image (or frame) appear clearer, with decent results.
This is the technology that AMD has chosen for their FidelityFX. They did this for several reasons; some possible ones include:
* Easier implementation by game developers
* The capability to run on almost any GPU
* Proven technology
That isn't to say that it is without its disadvantages, some being:
* Reduced visual quality compared to AI-based solutions
* More limited in opportunities to improve it in the future
AMD is currently the only major company using this technology for game upscaling. That means that we must move on to the other major upscaling technology: AI.
#### AI Upscaling
![][2]
It is the latest advancement in upscaling technology used by DLSS and XeSS.
Unlike traditional upscaling, this approach typically depends on some special hardware to run.
Specifically, you need a GPU with dedicated AI cores. On Nvidia's cards, these come in the form of Tensor cores.
Because these cores are new, they are only available on 20 and 30 series GPUs, meaning that older cards are stuck with traditional upscaling. Additionally, it is much harder for developers to implement as the AI needs to be “trained,” involving feeding the AI thousands of hours of gameplay.
Yet, these trade-offs are worth it for many people, as AI provides better image quality and performance.
This is the route Intel has taken for its solution.
### Open Source and Upscaling
DLSS is completely closed source in true Nvidia style, like the drivers that annoyed Linus Torvalds so much.
Fortunately, Intel is following in the footsteps of AMD, and they plan to open-source XeSS once its ready for prime time.
While they have made no firm commitment, multiple reports suggest that they plan to eventually open-source it.
This allows them to take advantage of the numerous contributions the open-source community will (hopefully) make. The result should be a fascinating GPU landscape, with many different technologies and companies constantly fighting for the top spot in upscaling.
### Intel XeSS
![][3]
Compared to Nvidia's DLSS (XeSS's main competitor), XeSS promises better performance, visual quality, and ease of implementation.
So far, we have seen demos running at as much as double the native performance, backing up the performance claims. But that's press material for now.
As I mentioned, Intel is planning to make it open-source.
While it may not be open-source at launch, they intend on open-sourcing once it matures.
![][4]
If Intel is to be believed, this could be the killer feature of their upcoming Alchemist GPUs, putting them ahead of both AMD and Nvidia in one fell swoop.
### Final Thoughts
I am incredibly excited about this feature, more so than I was about DLSS and FidelityFX combined. It should be noted that this is still some time away, with it expected to release in early 2022.
Overall, it looks like a significant step forward for Intel and maybe the key to them coming back from behind AMD and Nvidia.
_Are you excited about XeSS? Let me know in the comments below!_
**Via**: [Videocardz][5]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/intel-xess-open-source/
作者:[Jacob Crume][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lujun9972
[1]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjM4MSIgd2lkdGg9IjY3OCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[2]: https://i2.wp.com/i.ytimg.com/vi/-Dp61_bM948/hqdefault.jpg?w=780&ssl=1
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQzMiIgd2lkdGg9Ijc2OCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQzOSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: https://videocardz.com/newz/intel-xess-ai-based-super-sampling-technology-will-be-open-source-once-it-matures

View File

@ -0,0 +1,93 @@
[#]: subject: "SparkyLinux 6.0 Release is based on Debian 11 and Includes a Built-in VPN"
[#]: via: "https://news.itsfoss.com/sparkylinux-6-0-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
SparkyLinux 6.0 Release is based on Debian 11 and Includes a Built-in VPN
======
SparkyLinux 6.0 is a major stable update that utilizes Debian 11 Bullseye as its base now.
While you can go through the [features of Debian 11][1], SparkyLinux 6.0 should reflect some of the perks associated with it.
Here, we shall take a quick look at what SparkyLinux 6.0 has to offer.
### SparkyLinux 6.0 “Po Tolo”: Whats New?
The major highlight of the release is the latest Debian 11 Bullseye as its base. The repositories have also been updated to get the latest packages.
SparkyAPTus AppCenter has replaced the original SparkyAPTus, which is no longer developed.
![][2]
You can install, reinstall, and remove applications easily. Not just limited to the applications, but you also get the ability to tweak the pre-configured desktops using it.
In addition to that, you can remove and install Linux Kernels as well. You can choose from Debian Kernel, Liquorix, Sparky, and XanMod.
![][3]
It is worth noting that you will still be able to access all the tools from the old APTus.
To enhance privacy and security, SparkyLinux has decided to include the non-profit [RiseUp VPN][4] application pre-installed.
It is a VPN service that relies on donations to keep the network alive and comes with cross-platform support. You can also find it available for Android devices.
So, this makes it an interesting addition to the distribution. If you are not using any VPN service, this should make things easy.
The FirstRun app has been replaced with an improved welcome app that guides you through some basic pointers.
![][5]
### Other Improvements
With the latest release, you can also find new wallpapers and updated application packages that include:
* Thunderbird 78.13.0
* VLC 3.0.16
* LibreOffice 7.0.4
* Calamares Installer 3.2.41.1
To know more about the release, you can refer to the [official announcement][6].
### Download Sparky 6.0
SparkyLinux 6.0 is available to download with Xfce and KDE Plasma desktop environments. It supports 32-bit systems as well, which is a good thing.
If you are already running the SparkyLinux “Po Tolo” rolling release, you need to update your system to get Sparky 6.0.
Do note that the rolling version will switch to a stable release. So, if you want to stay on the rolling release, you need to wait for a few days.
[SparkyLinux 6.0][7]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/sparkylinux-6-0-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/debian-11-feature/
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU4NyIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU4NSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: https://riseup.net/en/vpn
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU2MyIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[6]: https://sparkylinux.org/sparky-6-0-po-tolo/
[7]: https://sparkylinux.org/download/

View File

@ -0,0 +1,104 @@
[#]: subject: "Setting new expectations for open source maintainers"
[#]: via: "https://opensource.com/article/21/8/open-source-maintainers"
[#]: author: "Luis Villa https://opensource.com/users/luis"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Setting new expectations for open source maintainers
======
The continued maturation of open source has regularly placed new burdens
on maintainers.
![Practicing empathy][1]
For a long time, there were two basic tests for releasing open source: "Does it do what I need it to do?" and "Does it compile?"
Sure, it was nice if it did things for others, but more than anything else, it at least needed to be fun for the developer and run at all for others. Then with the rise of package management, things leveled up a bit: "Is it packaged?" Shortly after that, the increasing popularity of [test-driven development][2] added another requirement: "Do the tests pass?"
Each of these new requirements made more work for open source maintainers, but (by and large) maintainers didn't grump too much about them. I think this happened for two reasons: First, the work was often aligned with skills developers needed to learn for their jobs, and second, they were broadly perceived as beneficial for all users of the software, not just corporate developers.
But that is changing—and in ways that may not work out so well for open source and enterprises.
### The new enterprise burdens
Here in 2021, it's clear that a new set of standards for open source is coalescing. These bring new labor to be done, either by open source developers or as part of a metadata overlay. These new standards include: 
* **Security information and auditing**: Security assessments of open source packages have traditionally been carried out by third parties, either through in-house security teams or by the distributed process coordinated through the [MITRE Common Vulnerabilities and Exposures][3] database. With new security training like the Linux Foundation's CII badges and projects like OpenSSF and Google's SLSA, the new buzzword is "end to end"—meaning maintainers and projects must make themselves security experts and create security controls. While this is almost certainly a good idea for the industry overall, it's yet more work expectations with no immediate prospect of compensation.
  * **Legal metadata**: Traditionally, open source communities like GNU, Debian, and Fedora believed (with good reason) that the default level of mandatory licensing metadata was at the package level, with per-file licensing information often disfavored at best and unrepresentable at worst. SPDX, followed more recently by clearlydefined.io, has decided that license information must be complete, machine-readable, and accurate in every file. This is clearly correct for all users, but the vast majority of the benefit accrues to the most deep-pocketed enterprises in practice. In the meantime, if we want accurate global coverage, the vast majority of the burden will fall on maintainers and require intricate legal assessment. (Adding these to the Linux kernel [took literally years][4].)
* **Procurement information**: The newest ask from the industry is to provide Software Bills of Material (SBOM) throughout the software stack—which inevitably includes vast quantities of open source. Again, this is not entirely unreasonable, and indeed open source has long led the way here via the package management techniques that open source language communities pioneered. But the completeness of coverage and depth of information being demanded (including, in some proposals, [information about the identity of developers][5]) is a step-change in what is required—primarily to benefit the governments and massive enterprises that can afford to do the detailed, package-by-package analysis of software provenance.
This new work may be quite different from previous waves of new obligations for open source developers—and we should think about why that is and what we might do about it.
### Is this work going to work?
As I suggested in the opening to this piece, the continued maturation of open source has regularly placed new burdens on maintainers. (At Mozilla, we used to call these "table stakes"—a poker term, indicating the things you had to do to even sit at the poker table, or in tech terms, to be considered for enterprise use.) So in some sense, this new wave of obligations is nothing new. But I do want to suggest that in two significant ways, these new mandates are problematic.
First, this work is increasingly highly specialized and so less helpful for individual maintainers to learn. The strongest open source developers have always had diverse skills (not just coding, but also marketing, people management, etc.). That's been part of the draw of open source—you learn those things along the way, making you a better developer. But when we start adding more and more requirements that specialists (e.g., a legal team or a security team) would cover in a corporate setting, we reduce the additional value to developers of participating in open source.
To put it another way: Developers clearly serve their self-interest by learning basic programming and people skills. It is less clear that they serve their self-interests by becoming experts in issues that, in their day jobs, are likely delegated to experts, like procurement, legal, and security. This works out fine in open source projects big enough to have large, sophisticated teams, but those are rare (even though they gather the lion's share of press and attention).
Second, these new and increasingly specialized requirements primarily benefit a specific class of open source users—large enterprises. That isn't necessarily a bad thing—big enterprises are essential in many ways, and indeed, the risks to them deserve to be taken seriously.
But in a world where hundreds of billions of dollars in enterprise value have been created by open source, and where small educational/hobby projects (and even many small companies) don't really benefit from these new unfunded mandates, developers will likely focus on other things, since few of them got into open source primarily to benefit the Fortune 500. 
In other words, many open source developers enjoy building things that benefit themselves and their friends and are even willing to give up nights and weekends for that. If meeting these new requirements mostly benefits faceless corporations, we may need to find other carrots to encourage developers to create and maintain new open source projects.
![Tidelift 2021 maintainer survey results][6]
According to the Tidelift 2021 open source maintainer survey, open source maintenance work is often stressful, thankless, and financially unrewarding.
 ([Tidelift][7])
### Why "unfunded mandate?"
In U.S. politics, an "unfunded mandate" occurs when a government requires someone else (usually a lower-level government) to do new work while not funding the new work. Bradley M. Kuhn gave me the inspiration to use the term "unfunded mandate" in [a recent Twitter post][8].
Sometimes, unfunded mandates can be good—many times, they are used to create equity and justice programs, for example, that local governments really should be doing as a matter of course. Arguably, many security initiatives fall into this category—burdensome, yes, but necessary for all of us to use the internet effectively.
But other times, they just create work for small entities that are already overwhelmed juggling the responsibilities of modern governance. If that sounds familiar to open source developers, no surprise—[they're already burnt out][7], and this is creating more work without creating more time or money.
![Tidelift survey results showing half of maintainers quit because of burnout.][9]
According to the Tidelift 2021 managed open source survey, more than half of maintainers have quit or considered quitting because they were experiencing burnout.
([Tidelift][10])
### Aligning incentives—by paying the maintainers
We were pleased to see Google call this issue out in [a recent filing on SBOMs][11] with the National Telecommunications and Information Administration (NTIA).
> "Unfortunately, much of the burden of maintaining our digital infrastructure falls on the backs of unpaid, volunteer contributors. The NTIA should carefully evaluate ways to fund and assist these communities as they work with industry to comply with new regulations."
Tidelift's filing to the same NTIA call for comments made similar points about money, scale, and reliability. In response, in [its own summary][12], the NTIA acknowledged that "funding sources" are a challenge and also said:
> "Further research is necessary to understand the optimal … incentives for sharing, protecting, and using SBOM data."
Given the dynamic of increasing professionalization—or to put it more bluntly, increasing work—that I've described above, it is refreshing to see an acknowledgment from significant industry players that developer incentives should be considered as we move into the next era of open source. We, as an industry, must figure out how to address this together, or we'll both fail to reach our goals and burn out developers—the worst of all worlds.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/open-source-maintainers
作者:[Luis Villa][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/luis
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/practicing-empathy.jpg?itok=-A7fj6NF (Practicing empathy)
[2]: https://opensource.com/article/20/1/test-driven-development
[3]: https://cve.mitre.org/
[4]: https://lwn.net/Articles/739183/
[5]: https://security.googleblog.com/2021/02/know-prevent-fix-framework-for-shifting.html
[6]: https://opensource.com/sites/default/files/pictures/tidelift-survey-2021-1.png (Tidelift 2021 maintainer survey results)
[7]: https://blog.tidelift.com/finding-4-open-source-maintenance-work-is-often-stressful-thankless-and-financially-unrewarding
[8]: https://twitter.com/richardfontana/status/1408170067594985474
[9]: https://opensource.com/sites/default/files/pictures/tidelift-survey-2021-2.png (Tidelift 2021 maintainer survey results about burnout)
[10]: https://blog.tidelift.com/finding-5-more-than-half-of-maintainers-have-quit-or-considered-quitting-and-heres-why
[11]: https://www.ntia.doc.gov/files/ntia/publications/google_-_2021.06.17.pdf
[12]: https://www.ntia.gov/files/ntia/publications/sbom_minimum_elements_report.pdf

View File

@ -0,0 +1,71 @@
[#]: subject: "A guide to understanding your team's implicit values and needs"
[#]: via: "https://opensource.com/open-organization/21/8/leadership-cultural-social-norms"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
A guide to understanding your team's implicit values and needs
======
To enhance team dynamics, open leaders can study the implicit social
norms that guide members' behaviors and decisions.
![Working meetings can be effective meetings][1]
Culture matters in [open organizations][2]. But "culture" seems like such a large, complicated concept to address. How can we help open organization teams better understand it?
One solution might come from [Michele J. Gelfand][3], author of [_Rule Makers, Rule Breakers_][4]_: Tight and Loose Cultures and the Secret Signals That Direct Our Lives_. Gelfand organizes all countries and cultures into two very simple groups: those with "tight" cultures and those with "loose" ones. Then she explains the characteristics and social norms of both, offering their relative strengths and weaknesses. By studying both, one might overcome the divisions and conflicts that separate people in and across teams, organizations, and countries.
In this two-part review of _Rule Makers, Rule Breakers_, I'll explain Gelfand's argument and discuss the ways it's useful to people working in open organizations.
### Know your social norms
Gelfand believes that our behavior is very strongly dependent on whether we live in a "tight" or "loose" community culture, because each of these cultures has social norms that differ from the other. These norms—and the strictness with which they are enforced—will determine our behavior in the community. They give us our identity. They help us coordinate with each other. In short, they're the glue that holds communities together.
They also impact our worldviews, the ways we build our environments, and even the processing in our brains. "Countless studies have shown that social norms are critical for uniting communities into cooperative, well-coordinated groups that can accomplish great feats," Gelfand writes. Throughout history, communities have put their citizens through the seemingly craziest of rituals for no other reason than to maintain group cohesion and cooperation. The rituals result in greater bonding, which has kept people alive (particularly in times of hunting, foraging, and warfare).
Social norms include rules we all tend to follow automatically, what Gelfand calls a kind of "normative autopilot." These are things we do without thinking about them—for example, being quiet in libraries, cinemas, elevators, or airplanes. We do these things automatically. "From the outside," Gelfand says, "our social norms often seem bizarre, but from the inside, we take them for granted." She explains that social norms can be codified into regulations and laws ("obey stop signs" and "don't steal"). Others are largely unspoken ("don't stare at people on the train" or "cover your mouth when you sneeze"). And, of course, they vary by context.
The challenge is that most social norms are invisible, and we don't know how much these social norms control us. Without knowing it, we often just follow the groups in our surroundings. This is called "groupthink," in which people will follow along with their identifying group, even if the group is wrong. They don't want to stand out.
### Organizations, tight and loose
Gelfand organizes social norms into various groupings. She argues that some norms are characteristic of "tight" cultures, while others are characteristic of "loose" cultures. To do this, Gelfand researched and sampled approximately seven thousand people from more than 30 countries across five continents and with a wide range of occupations, genders, ages, religions, sects, and social classes in order to learn where those communities positioned themselves (and how strongly their social norms were enforced officially and by the communities/neighborhoods in general). Differences between tight and loose cultures vary between nations, within countries (like within the United States and its various regions), within organizations, within social classes and even within households.
Because organizations have cultures, they too have their own social norms (after all, if an organization is unable to coordinate its members and influence their behavior, it won't be able to survive). So organizations can also reflect and instill the "tight" or "loose" cultural characteristics Gelfand describes. And if we have a strong ability to identify these differences, we can predict and address conflict more successfully. Then, armed with greater awareness of those social norms, we can put open organization principles to work.
Gelfand describes the difference between tight and loose cultures this way:
> Broadly speaking, loose cultures tend to be open, but they're also much more disorderly. On the flip side, tight cultures have a comforting order and predictability, but they're less tolerant. This is the tight-loose trade-off: advantages in one realm coexist with drawbacks in another.
Tight societies, she concludes, maintain strict social order, synchrony and self-regulation; loose societies take pride in being highly tolerant, creative and open to change.
Although not true in every case, tight and loose cultures generally exhibit some trade-offs; each has its own strengths and weaknesses. See Figure 1 below.
![][5]
The work of successfully applying the five open organization principles in these two environments can vary greatly. Community commitment is vital to success, and if the social norms differ, the reasons for commitment will differ as well. Organizational leaders must know what the community's values are; only then can they adequately inspire others.
In the next part of this review, I'll explain more thoroughly the characteristics of tight and loose cultures, so leaders can get a better sense of how they can put open organization principles to work on their teams.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/21/8/leadership-cultural-social-norms
作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leader-team-laptops-conference-meeting.png?itok=ztoA0E6f (Working meetings can be effective meetings)
[2]: https://theopenorganization.org/definition/
[3]: https://www.michelegelfand.com/
[4]: https://www.michelegelfand.com/rule-makers-rule-breakers
[5]: https://opensource.com/sites/default/files/images/open-org/rule-makers-breakers-1.png

View File

@ -1,177 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 ways to improve your Bash scripts)
[#]: via: (https://opensource.com/article/20/1/improve-bash-scripts)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
5 ways to improve your Bash scripts
======
Find out how Bash can help you tackle the most challenging tasks.
![A person working.][1]
A system admin often writes Bash scripts, some short and some quite lengthy, to accomplish various tasks.
Have you ever looked at an installation script provided by a software vendor? They often add a lot of functions and logic in order to ensure that the installation works properly and doesn't result in damage to the customer's system. Over the years, I've amassed a collection of various techniques for enhancing my Bash scripts, and I'd like to share some of them in hopes they can help others. Here is a collection of small scripts created to illustrate these simple examples.
### Starting out
When I was starting out, my Bash scripts were nothing more than a series of commands, usually meant to save time with standard shell operations like deploying web content. One such task was extracting static content into the home directory of an Apache web server. My script went something like this:
```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```
While this saved me some time and typing, it certainly was not a very interesting or useful script in the long term. Over time, I learned other ways to use Bash scripts to accomplish more challenging tasks, such as creating software packages, installing software, or backing up a file server.
### 1\. The conditional statement
Just as with so many other programming languages, the conditional has been a powerful and common feature. A conditional is what enables logic to be performed by a computer program. Most of my examples are based on conditional logic.
The basic conditional uses an "if" statement. This allows us to test for some condition that we can then use to manipulate how a script performs. For instance, we can check for the existence of a Java bin directory, which would indicate that Java is installed. If found, the executable path can be updated with the location to enable calls by Java applications.
```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
fi
```
### 2\. Limit execution
You might want to limit a script to only be run by a specific user. Although Linux has standard permissions for users and groups, as well as SELinux for enabling this type of protection, you could choose to place logic within a script. Perhaps you want to be sure that only the owner of a particular web application can run its startup script. You could even use code to limit a script to the root user. Linux has a couple of environment variables that we can test in this logic. One is **$USER**, which provides the username. Another is **$UID**, which provides the user's identification number (UID) and, in the case of a script, the UID of the executing user.
#### User
The first example shows how I could limit a script to the user jboss1 in a multi-hosting environment with several application server instances. The conditional "if" statement essentially asks, "Is the executing user not jboss1?" When the condition is true, the first echo statement is called, followed by **exit 1**, which terminates the script.
```
if [ "$USER" != 'jboss1' ]; then
     echo "Sorry, this script must be run as JBOSS1!"
     exit 1
fi
echo "continue script"
```
#### Root
This next example script ensures that only the root user can execute it. Because the UID for root is 0, we can use the **-gt** option in the conditional if statement to prohibit all UIDs greater than zero.
```
if [ "$UID" -gt 0 ]; then
     echo "Sorry, this script must be run as ROOT!"
     exit 1
fi
echo "continue script"
```
### 3\. Use arguments
Just like any executable program, Bash scripts can take arguments as input. Below are a few examples. But first, you should understand that good programming means that we don't just write applications that do what we want; we must write applications that _can't_ do what we _don't_ want. I like to ensure that a script doesn't do anything destructive in the case where there is no argument. Therefore, this is the first check that I perform. The condition checks the number of arguments, **$#**, for a value of zero and terminates the script if true.
```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```
#### Multiple arguments
You can pass more than one argument to a script. The internal variables that the script uses to reference each argument are simply incremented, such as **$1**, **$2**, **$3**, and so on. I'll just expand my example above with the following line to echo the first three arguments. Obviously, additional logic will be needed for proper argument handling based on the total number. This example is simple for the sake of demonstration.
```
echo $1 $2 $3
```
While were discussing these argument variables, you might have wondered, "Did he skip zero?"
Well, yes, I did, but I have a great reason! There is indeed a **$0** variable, and it is very useful. Its value is simply the name of the script being executed.
```
echo $0
```
An important reason to reference the name of the script during execution is to generate a log file that includes the script's name in its own name. The simplest form might just be an echo statement.
```
echo test >> $0.log
```
However, you will probably want to add a bit more code to ensure that the log is written to a location with the name and information that you find helpful to your use case.
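Building on that idea, here is a minimal sketch of a script-named log file. The `log` helper, the timestamp format, and the `LOG_DIR` default are my own illustrative choices, not from the original script:

```
#!/usr/bin/env bash
# Sketch: derive a log-file name from the script's own name ($0).
# LOG_DIR is an assumed, adjustable location for illustration.
LOG_DIR="${LOG_DIR:-/tmp}"
script_name="$(basename "$0" .sh)"          # strip directory path and .sh suffix
log_file="${LOG_DIR}/${script_name}.log"

log() {
    # Prepend a timestamp so entries can be correlated later
    echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$log_file"
}

log "script started"
log "doing some work"
echo "log written to: $log_file"
```

Because the name comes from **$0**, copying or renaming the script automatically changes where it logs, with no edits to the code.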
### 4\. User input
Another useful feature of a script is its ability to accept input during execution. The simplest form is to prompt the user for a value.
```
echo "enter a word please:"
read word
echo $word
```
This also allows you to provide choices to the user.
```
read -p "Install Software ?? [Y/n]: " answ
if [ "$answ" == 'n' ]; then
  exit 1
fi
echo "Installation starting..."
```
### 5\. Exit on failure
Some years ago, I wrote a script for installing the latest version of the Java Development Kit (JDK) on my computer. The script extracts the JDK archive to a specific directory, updates a symbolic link, and uses the alternatives utility to make the system aware of the new version. If the extraction of the JDK archive failed, continuing could break Java system-wide, so I wanted the script to abort in that situation. I didn't want the script to make the next set of system changes unless the archive was successfully extracted. The following is an excerpt from that script:
```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
     echo "Installation failed - exiting."
     exit 1
fi
```
A quick way for you to demonstrate the usage of the **$?** variable is with this short one-liner:
```
ls T; ec=$?; echo $ec
```
First, run **touch T** followed by this command. The value of **ec** will be 0. Then, delete **T** with **rm T** and repeat the command. The value of **ec** will now be 2 because **ls** reports an error condition when **T** is not found.
You can take advantage of this error reporting to include logic, as I have above, to control the behavior of your scripts.
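The excerpt above can be generalized into a small helper that aborts on the first failed step. This is a hedged sketch; the `run_or_die` name and the demo commands are my own illustrations, not from the original script:

```
#!/usr/bin/env bash
# Sketch: run a command and abort the whole script if it fails.
# The helper name run_or_die is illustrative, not a standard utility.
run_or_die() {
    "$@"                      # run the command with its arguments
    local ec=$?
    if [ $ec -ne 0 ]; then
        echo "command failed (exit $ec): $*" >&2
        exit $ec
    fi
}

run_or_die mkdir -p /tmp/demo_stage
echo data > /tmp/demo_stage/src.txt
run_or_die cp /tmp/demo_stage/src.txt /tmp/demo_stage/dst.txt
echo "all steps succeeded"
```

Wrapping each critical step this way keeps the exit-code check in one place instead of repeating the `if [ $? -ne 0 ]` block after every command.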
### Takeaway
We might assume that we need to employ languages, such as Python, C, or Java, for higher functionality, but thats not necessarily true. The Bash scripting language is very powerful. There is a lot to learn to maximize its usefulness. I hope these few examples will shed some light on the potential of coding with Bash.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/improve-bash-scripts
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)

View File

@ -1,445 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (YungeG)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding systemd at startup on Linux)
[#]: via: (https://opensource.com/article/20/5/systemd-startup)
[#]: author: (David Both https://opensource.com/users/dboth)
Understanding systemd at startup on Linux
======
systemd's startup provides important clues to help you solve problems
when they occur.
![People at the start line of a race][1]
In [_Learning to love systemd_][2], the first article in this series, I looked at systemd's functions and architecture and the controversy around its role as a replacement for the old SystemV init program and startup scripts. In this second article, I'll start exploring the files and tools that manage the Linux startup sequence. I'll explain the systemd startup sequence, how to change the default startup target (runlevel in SystemV terms), and how to manually switch to a different target without going through a reboot.
I'll also look at two important systemd tools. The first is the **systemctl** command, which is the primary means of interacting with and sending commands to systemd. The second is **journalctl**, which provides access to the systemd journals that contain huge amounts of system history data such as kernel and service messages (both informational and error messages).
Be sure to use a non-production system for testing and experimentation in this and future articles. Your test system needs to have a GUI desktop (such as Xfce, LXDE, Gnome, KDE, or another) installed.
I wrote in my previous article that I planned to look at creating a systemd unit and adding it to the startup sequence in this article. Because this article became longer than I anticipated, I will hold that for the next article in this series.
### Exploring Linux startup with systemd
Before you can observe the startup sequence, you need to do a couple of things to make the boot and startup sequences open and visible. Normally, most distributions use a startup animation or splash screen to hide the detailed messages that would otherwise be displayed during a Linux host's startup and shutdown. This is called the Plymouth boot screen on Red Hat-based distros. Those hidden messages can provide a great deal of information about startup and shutdown to a sysadmin looking for information to troubleshoot a bug or to just learn about the startup sequence. You can change this using the GRUB (Grand Unified Boot Loader) configuration.
The main GRUB configuration file is **/boot/grub2/grub.cfg**, but, because this file can be overwritten when the kernel version is updated, you do not want to change it. Instead, modify the **/etc/default/grub** file, which is used to modify the default settings of **grub.cfg**.
Start by looking at the current, unmodified version of the **/etc/default/grub** file:
```
[root@testvm1 ~]# cd /etc/default ; cat grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm.
lv=fedora_testvm1/root rd.lvm.lv=fedora_testvm1/swap rd.lvm.lv=fedora_
testvm1/usr rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
[root@testvm1 default]#
```
Chapter 6 of the [GRUB documentation][3] contains a list of all the possible entries in the **/etc/default/grub** file, but I focus on the following:
* I change **GRUB_TIMEOUT**, the number of seconds for the GRUB menu countdown, from five to 10 to give a bit more time to respond to the GRUB menu before the countdown hits zero.
  * I delete the last two parameters on **GRUB_CMDLINE_LINUX**, which lists the command-line parameters that are passed to the kernel at boot time. One of these parameters, **rhgb**, stands for Red Hat Graphical Boot, and it displays the little Fedora icon animation during kernel initialization instead of showing boot-time messages. The other, the **quiet** parameter, prevents displaying the startup messages that document the progress of the startup and any errors that occur. I delete both **rhgb** and **quiet** because sysadmins need to see these messages. If something goes wrong during boot, the messages displayed on the screen can point to the cause of the problem.
After you make these changes, your GRUB file will look like:
```
[root@testvm1 default]# cat grub
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm.
lv=fedora_testvm1/root rd.lvm.lv=fedora_testvm1/swap rd.lvm.lv=fedora_
testvm1/usr"
GRUB_DISABLE_RECOVERY="false"
[root@testvm1 default]#
```
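If you make these edits often, they can be scripted. Here is a minimal sketch using sed, run against a scratch copy (`/tmp/grub.demo`, an illustrative path) rather than the real **/etc/default/grub**:

```
#!/usr/bin/env bash
# Sketch: apply the GRUB_TIMEOUT and rhgb/quiet edits with sed.
# Works on a scratch copy; point GRUB_FILE at /etc/default/grub for real use.
GRUB_FILE="/tmp/grub.demo"
cat > "$GRUB_FILE" <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora-swap rhgb quiet"
EOF

# Raise the GRUB menu countdown from 5 to 10 seconds
sed -i 's/^GRUB_TIMEOUT=5/GRUB_TIMEOUT=10/' "$GRUB_FILE"
# Drop the rhgb and quiet kernel parameters so boot messages stay visible
sed -i 's/ rhgb//; s/ quiet//' "$GRUB_FILE"

cat "$GRUB_FILE"
```

After changing the real file this way, you would still run **grub2-mkconfig** as described next to regenerate **grub.cfg**.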
The **grub2-mkconfig** program generates the **grub.cfg** configuration file using the contents of the **/etc/default/grub** file to modify some of the default GRUB settings. The **grub2-mkconfig** program sends its output to **STDOUT**. It has a **-o** option that allows you to specify a file to send the datastream to, but it is just as easy to use redirection. Run the following command to update the **/boot/grub2/grub.cfg** configuration file:
```
[root@testvm1 grub2]# grub2-mkconfig > /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.18.9-200.fc28.x86_64
Found initrd image: /boot/initramfs-4.18.9-200.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.17.14-202.fc28.x86_64
Found initrd image: /boot/initramfs-4.17.14-202.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.16.3-301.fc28.x86_64
Found initrd image: /boot/initramfs-4.16.3-301.fc28.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-7f12524278bd40e9b10a085bc82dc504
Found initrd image: /boot/initramfs-0-rescue-7f12524278bd40e9b10a085bc82dc504.img
done
[root@testvm1 grub2]#
```
Reboot your test system to view the startup messages that would otherwise be hidden behind the Plymouth boot animation. But what if you need to view the startup messages and have not disabled the Plymouth boot animation? Or you have, but the messages stream by too fast to read? (Which they do.)
There are a couple of options, and both involve log files and systemd journals—which are your friends. You can use the **less** command to view the contents of the **/var/log/messages** file. This file contains boot and startup messages as well as messages generated by the operating system during normal operation. You can also use the **journalctl** command without any options to view the systemd journal, which contains essentially the same information:
```
[root@testvm1 grub2]# journalctl
-- Logs begin at Sat 2020-01-11 21:48:08 EST, end at Fri 2020-04-03 08:54:30 EDT. --
Jan 11 21:48:08 f31vm.both.org kernel: Linux version 5.3.7-301.fc31.x86_64 ([mockbuild@bkernel03.phx2.fedoraproject.org][4]) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #1 SMP Mon Oct >
Jan 11 21:48:08 f31vm.both.org kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.3.7-301.fc31.x86_64 root=/dev/mapper/VG01-root ro resume=/dev/mapper/VG01-swap rd.lvm.lv=VG01/root rd>
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-provided physical RAM map:
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000dffeffff] usable
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000dfff0000-0x00000000dfffffff] ACPI data
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000100000000-0x000000041fffffff] usable
Jan 11 21:48:08 f31vm.both.org kernel: NX (Execute Disable) protection: active
Jan 11 21:48:08 f31vm.both.org kernel: SMBIOS 2.5 present.
Jan 11 21:48:08 f31vm.both.org kernel: DMI: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
Jan 11 21:48:08 f31vm.both.org kernel: Hypervisor detected: KVM
Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: cpu 0, msr 30ae01001, primary cpu clock
Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: using sched offset of 8250734066 cycles
Jan 11 21:48:08 f31vm.both.org kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 11 21:48:08 f31vm.both.org kernel: tsc: Detected 2807.992 MHz processor
Jan 11 21:48:08 f31vm.both.org kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 11 21:48:08 f31vm.both.org kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
<snip>
```
I truncated this datastream because it can be hundreds of thousands or even millions of lines long. (The journal listing on my primary workstation is 1,188,482 lines long.) Be sure to try this on your test system. If it has been running for some time—even if it has been rebooted many times—huge amounts of data will be displayed. Explore this journal data because it contains a lot of information that can be very useful when doing problem determination. Knowing what this data looks like for a normal boot and startup can help you locate problems when they occur.
I will discuss systemd journals, the **journalctl** command, and how to sort through all of that data to find what you want in more detail in a future article in this series.
After GRUB loads the kernel into memory, it must first extract itself from the compressed version of the file before it can perform any useful work. After the kernel has extracted itself and started running, it loads systemd and turns control over to it.
This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user because nothing else is running: there's no shell to provide a command line, no background processes to manage the network or other communication links, and nothing that enables the computer to perform any productive function.
Systemd can now load the functional units required to bring the system up to a selected target run state.
### Targets
A systemd target represents a Linux system's current or desired run state. Much like SystemV start scripts, targets define the services that must be present for the system to run and be active in that state. Figure 1 shows the possible run-state targets of a Linux system using systemd. As seen in the first article of this series and in the systemd bootup man page (man bootup), there are other intermediate targets that are required to enable various necessary services. These can include **swap.target**, **timers.target**, **local-fs.target**, and more. Some targets (like **basic.target**) are used as checkpoints to ensure that all the required services are up and running before moving on to the next-higher level target.
Unless otherwise changed at boot time in the GRUB menu, systemd always starts the **default.target**. The **default.target** file is a symbolic link to the true target file. For a desktop workstation, this is typically going to be the **graphical.target**, which is equivalent to runlevel 5 in SystemV. For a server, the default is more likely to be the **multi-user.target**, which is like runlevel 3 in SystemV. The **emergency.target** file is similar to single-user mode. Targets and services are systemd units.
The following table, which I included in the previous article in this series, compares the systemd targets with the old SystemV startup runlevels. The systemd target aliases are provided by systemd for backward compatibility. The target aliases allow scripts—and sysadmins—to use SystemV commands like **init 3** to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution.
**systemd targets** | **SystemV runlevel** | **target aliases** | **Description**
---|---|---|---
default.target | | | This target is always aliased with a symbolic link to either **multi-user.target** or **graphical.target**. systemd always uses the **default.target** to start the system. The **default.target** should never be aliased to **halt.target**, **poweroff.target**, or **reboot.target**.
graphical.target | 5 | runlevel5.target | **Multi-user.target** with a GUI
| 4 | runlevel4.target | Unused. Runlevel 4 was identical to runlevel 3 in the SystemV world. This target could be created and customized to start local services without changing the default **multi-user.target**.
multi-user.target | 3 | runlevel3.target | All services running, but command-line interface (CLI) only
| 2 | runlevel2.target | Multi-user, without NFS, but all other non-GUI services running
rescue.target | 1 | runlevel1.target | A basic system, including mounting the filesystems with only the most basic services running and a rescue shell on the main console
emergency.target | S | | Single-user mode—no services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system.
halt.target | | | Halts the system without powering it down
reboot.target | 6 | runlevel6.target | Reboot
poweroff.target | 0 | runlevel0.target | Halts the system and turns the power off
Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies, which are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level. If you want, you can review the systemd startup sequence and runtime targets in the first article in this series, [_Learning to love systemd_][2].
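You can see exactly which units a given target pulls in by asking **systemctl** for its dependency tree; for example (the output runs to many lines, so it is omitted here):

```
[root@testvm1 ~]# systemctl list-dependencies multi-user.target
```

This is a convenient way to compare targets such as **multi-user.target** and **graphical.target** and see where their service sets diverge.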
### Exploring the current target
Many Linux distributions default to installing a GUI desktop interface so that the installed systems can be used as workstations. I always install from a Fedora Live boot USB drive with an Xfce or LXDE desktop. Even when I'm installing a server or other infrastructure type of host (such as the ones I use for routers and firewalls), I use one of these installations that installs a GUI desktop.
I could install a server without a desktop (and that would be typical for data centers), but that does not meet my needs. It is not that I need the GUI desktop itself, but the LXDE installation includes many of the other tools I use that are not in a default server installation. This means less work for me after the initial installation.
But just because I have a GUI desktop does not mean it makes sense to use it. I have a 16-port KVM that I can use to access the KVM interfaces of most of my Linux systems, but the vast majority of my interaction with them is via a remote SSH connection from my primary workstation. This way is more secure and uses fewer system resources to run **multi-user.target** compared to **graphical.target**.
To begin, check the default target to verify that it is the **graphical.target**:
```
[root@testvm1 ~]# systemctl get-default
graphical.target
[root@testvm1 ~]#
```
Now verify the currently running target. It should be the same as the default target. You can still use the old method, which displays the old SystemV runlevels. Note that the previous runlevel is on the left; it is **N** (which means None), indicating that the runlevel has not changed since the host was booted. The number 5 indicates the current target, as defined in the old SystemV terminology:
```
[root@testvm1 ~]# runlevel
N 5
[root@testvm1 ~]#
```
Note that the runlevel man page indicates that runlevels are obsolete and provides a conversion table.
You can also use the systemd method. There is no one-line answer here, but it does provide the answer in systemd terms:
```
[root@testvm1 ~]# systemctl list-units --type target
UNIT                   LOAD   ACTIVE SUB    DESCRIPTION                
basic.target           loaded active active Basic System              
cryptsetup.target      loaded active active Local Encrypted Volumes    
getty.target           loaded active active Login Prompts              
graphical.target       loaded active active Graphical Interface        
local-fs-pre.target    loaded active active Local File Systems (Pre)  
local-fs.target        loaded active active Local File Systems        
multi-user.target      loaded active active Multi-User System          
network-online.target  loaded active active Network is Online          
network.target         loaded active active Network                    
nfs-client.target      loaded active active NFS client services        
nss-user-lookup.target loaded active active User and Group Name Lookups
paths.target           loaded active active Paths                      
remote-fs-pre.target   loaded active active Remote File Systems (Pre)  
remote-fs.target       loaded active active Remote File Systems        
rpc_pipefs.target      loaded active active rpc_pipefs.target          
slices.target          loaded active active Slices                    
sockets.target         loaded active active Sockets                    
sshd-keygen.target     loaded active active sshd-keygen.target        
swap.target            loaded active active Swap                      
sysinit.target         loaded active active System Initialization      
timers.target          loaded active active Timers                    
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
21 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
```
This shows all of the currently loaded and active targets. You can also see the **graphical.target** and the **multi-user.target**. The **multi-user.target** is required before the **graphical.target** can be loaded. In this example, the **graphical.target** is active.
### Switching to a different target
Making the switch to the **multi-user.target** is easy:
```
[root@testvm1 ~]# systemctl isolate multi-user.target
```
The display should now change from the GUI desktop or login screen to a virtual console. Log in and list the currently active systemd units to verify that **graphical.target** is no longer running:
```
[root@testvm1 ~]# systemctl list-units --type target
```
Be sure to use the **runlevel** command to verify that it shows both previous and current "runlevels":
```
[root@testvm1 ~]# runlevel
5 3
```
### Changing the default target
Now, change the default target to the **multi-user.target** so that it will always boot into the **multi-user.target** for a console command-line interface rather than a GUI desktop interface. As the root user on your test host, change to the directory where the systemd configuration is maintained and do a quick listing:
```
[root@testvm1 ~]# cd /etc/systemd/system/ ; ll
drwxr-xr-x. 2 root root 4096 Apr 25  2018  basic.target.wants
<snip>
lrwxrwxrwx. 1 root root   36 Aug 13 16:23  default.target -> /lib/systemd/system/graphical.target
lrwxrwxrwx. 1 root root   39 Apr 25  2018  display-manager.service -> /usr/lib/systemd/system/lightdm.service
drwxr-xr-x. 2 root root 4096 Apr 25  2018  getty.target.wants
drwxr-xr-x. 2 root root 4096 Aug 18 10:16  graphical.target.wants
drwxr-xr-x. 2 root root 4096 Apr 25  2018  local-fs.target.wants
drwxr-xr-x. 2 root root 4096 Oct 30 16:54  multi-user.target.wants
<snip>
[root@testvm1 system]#
```
I shortened this listing to highlight a few important things that will help explain how systemd manages the boot process. You should be able to see the entire list of directories and links on your virtual machine.
The **default.target** entry is a symbolic link (symlink, soft link) to the file **/lib/systemd/system/graphical.target**. List that directory to see what else is there:
```
[root@testvm1 system]# ll /lib/systemd/system/ | less
```
You should see files, directories, and more links in this listing, but look specifically for **multi-user.target** and **graphical.target**. Now display the contents of **default.target**, which is a link to **/lib/systemd/system/graphical.target**:
```
[root@testvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes
[root@testvm1 system]#
```
This link to the **graphical.target** file describes all of the prerequisites and requirements that the graphical user interface requires. I will explore at least some of these options in the next article in this series.
To enable the host to boot to multi-user mode, you need to delete the existing link and create a new one that points to the correct target. Make the [PWD][5] **/etc/systemd/system**, if it is not already:
```
[root@testvm1 system]# rm -f default.target
[root@testvm1 system]# ln -s /lib/systemd/system/multi-user.target default.target
```
List the **default.target** link to verify that it links to the correct file:
```
[root@testvm1 system]# ll default.target
lrwxrwxrwx 1 root root 37 Nov 28 16:08 default.target -> /lib/systemd/system/multi-user.target
[root@testvm1 system]#
```
If your link does not look exactly like this, delete it and try again. List the content of the **default.target** link:
```
[root@testvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
[Unit]
Description=Multi-User System
Documentation=man:systemd.special(7)
Requires=basic.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
[root@testvm1 system]#
```
The **default.target**—which is really a link to the **multi-user.target** at this point—now has different requirements in the **[Unit]** section. It does not require the graphical display manager.
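The symlink mechanics behind this are plain filesystem behavior, which you can illustrate safely in a throwaway directory (the paths below are illustrative stand-ins, not your real systemd tree):

```shell
# Demonstrate that default.target is just a symlink that resolves to a
# target file; done under mktemp so it cannot touch the real system.
demo=$(mktemp -d)
mkdir -p "$demo/lib/systemd/system" "$demo/etc/systemd/system"
touch "$demo/lib/systemd/system/multi-user.target"
ln -s "$demo/lib/systemd/system/multi-user.target" \
      "$demo/etc/systemd/system/default.target"
readlink -f "$demo/etc/systemd/system/default.target"  # prints the resolved path ending in multi-user.target
rm -rf "$demo"
```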
Reboot. Your virtual machine should boot to the console login for virtual console 1, which is identified on the display as tty1. Now that you know how to change the default target, change it back to the **graphical.target** using a command designed for the purpose.
First, check the current default target:
```
[root@testvm1 ~]# systemctl get-default
multi-user.target
[root@testvm1 ~]# systemctl set-default graphical.target
Removed /etc/systemd/system/default.target.
Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/graphical.target.
[root@testvm1 ~]#
```
Enter the following command to go directly to the **graphical.target** and the display manager login page without having to reboot:
```
[root@testvm1 system]# systemctl isolate default.target
```
I do not know why the systemd developers chose the term "isolate" for this subcommand. My research indicates that it may refer to running the specified target while "isolating" and terminating all other targets that are not required to support it. The effect, however, is to switch from one run target to another; in this case, from the multi-user target to the graphical target. The command above is equivalent to the old `init 5` command used with SystemV start scripts and the init program.
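Note that not every unit can be isolated; only targets that set `AllowIsolate=yes`, as both target files shown earlier do, accept this subcommand. A quick check (guarded so it is a no-op without a running systemd):

```shell
# Show whether a target permits "systemctl isolate".
if systemctl list-units >/dev/null 2>&1; then
    systemctl show -p AllowIsolate multi-user.target
    systemctl show -p AllowIsolate graphical.target
fi
```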
Log into the GUI desktop, and verify that it is working as it should.
### Summing up
This article explored the Linux systemd startup sequence and started to explore two important systemd tools, **systemctl** and **journalctl**. It also explained how to switch from one target to another and to change the default target.
The next article in this series will create a new systemd unit and configure it to run during startup. It will also look at some of the configuration options that help determine where in the sequence a particular unit will start, for example, after networking is up and running.
### Resources
There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup.
  * The Fedora Project has a good, practical [guide to systemd][6]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd.
* The Fedora Project also has a good [cheat sheet][7] that cross-references the old SystemV commands to comparable systemd ones.
* For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org][8]'s [description of systemd][9].
* [Linux.com][10]'s "More systemd fun" offers more advanced systemd [information and tips][11].
There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of the other good material written about systemd and its ecosystem is based on these papers.
* [Rethinking PID 1][12]
* [systemd for Administrators, Part I][13]
* [systemd for Administrators, Part II][14]
* [systemd for Administrators, Part III][15]
* [systemd for Administrators, Part IV][16]
* [systemd for Administrators, Part V][17]
* [systemd for Administrators, Part VI][18]
* [systemd for Administrators, Part VII][19]
* [systemd for Administrators, Part VIII][20]
* [systemd for Administrators, Part IX][21]
* [systemd for Administrators, Part X][22]
* [systemd for Administrators, Part XI][23]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/systemd-startup
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/start_line.jpg?itok=9reaaW6m (People at the start line of a race)
[2]: https://opensource.com/article/20/4/systemd
[3]: http://www.gnu.org/software/grub/manual/grub
[4]: mailto:mockbuild@bkernel03.phx2.fedoraproject.org
[5]: https://en.wikipedia.org/wiki/Pwd
[6]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html
[7]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
[8]: http://Freedesktop.org
[9]: http://www.freedesktop.org/wiki/Software/systemd
[10]: http://Linux.com
[11]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/
[12]: http://0pointer.de/blog/projects/systemd.html
[13]: http://0pointer.de/blog/projects/systemd-for-admins-1.html
[14]: http://0pointer.de/blog/projects/systemd-for-admins-2.html
[15]: http://0pointer.de/blog/projects/systemd-for-admins-3.html
[16]: http://0pointer.de/blog/projects/systemd-for-admins-4.html
[17]: http://0pointer.de/blog/projects/three-levels-of-off.html
[18]: http://0pointer.de/blog/projects/changing-roots
[19]: http://0pointer.de/blog/projects/blame-game.html
[20]: http://0pointer.de/blog/projects/the-new-configuration-files.html
[21]: http://0pointer.de/blog/projects/on-etc-sysinit.html
[22]: http://0pointer.de/blog/projects/instances.html
[23]: http://0pointer.de/blog/projects/inetd.html

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (unigeorge)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -169,7 +169,7 @@ via: https://opensource.com/article/20/9/ssh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -2,7 +2,7 @@
[#]: via: (https://itsfoss.com/check-mbr-or-gpt/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (alim0x)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -1,138 +0,0 @@
[#]: subject: (Use VS Code to develop in containers)
[#]: via: (https://opensource.com/article/21/7/vs-code-remote-containers-podman)
[#]: author: (Brant Evans https://opensource.com/users/branic)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Use VS Code to develop in containers
======
Create consistency to avoid problems when you have multiple developers
working on the same project.
![Women programming][1]
Coding and testing inconsistencies are a risk when you have multiple developers with different development environments working on a project. [Visual Studio Code][2] (VS Code) is an integrated development environment (IDE) that can help minimize these issues. It can be combined with containers to provide separate development environments for each application alongside a consistent development environment.
VS Code's [Remote - Containers extension][3] enables you to define a container, use that definition to build a container, and develop inside the container. This container definition can be checked into the source code repository along with the application code, which allows all developers to use the same definition to build and develop within a container.
By default, the Remote - Containers extension uses Docker to build and run the container, but it is easy to use [Podman][4] as the container runtime instead, which enables using [rootless containers][5].
This article walks you through the setup to develop inside a rootless container using Podman with VS Code and the Remote - Containers extension.
### Initial configuration
Before continuing, ensure your Red Hat Enterprise Linux (RHEL) or Fedora workstation is updated with the latest errata and that VS Code and the Remote - Containers extension are installed. (See the [VS Code website][2] for more information on installing.)
Next, install Podman and its supporting packages with a simple `dnf install` command:
```
$ sudo dnf install -y podman
```
After you install Podman, configure VS Code to use the Podman executable (instead of Docker) for interacting with the container. Within VS Code, navigate to **File > Preferences > Settings** and click the **>** icon next to **Extensions**. In the dropdown menu that appears, select **Remote - Containers**, and scroll down to find the **Remote > Containers: Docker Path** option. In the text box, replace **docker** with **podman**.
![Enter "podman" in the text box][6]
(Brant Evans, [CC BY-SA 4.0][7])
Now that the configurations are done, create and open a new folder or an existing folder for the project in VS Code.
### Define the container
This tutorial uses the example of creating a container for Python 3 development.
The Remote - Containers extension can add the necessary basic configuration files to the project folder. To add these files, open the Command Palette by entering **Ctrl+Shift+P** on your keyboard, search for **Remote-Containers: Add Development Container Configuration Files**, and select it.
![Remote-Containers: Add Development Container Configuration Files][8]
(Brant Evans, [CC BY-SA 4.0][7])
In the next pop-up, define the type of development environment you want to set up. For this example configuration, search for the **Python 3** definition and select it.
![Select Python 3 definition][9]
(Brant Evans, [CC BY-SA 4.0][7])
Next, select the version of Python that will be used in the container. Select the **3 (default)** option to use the latest version.
![Select the 3 \(default\) option][10]
(Brant Evans, [CC BY-SA 4.0][7])
The Python configuration can also install Node.js, but for this example, uncheck **Install Node.js** and click **OK**.
![Uncheck "Install Node.js"][11]
(Brant Evans, [CC BY-SA 4.0][7])
This creates a `.devcontainer` folder containing files named `devcontainer.json` and `Dockerfile`. VS Code automatically opens the `devcontainer.json` file so that you can customize it.
### Enable rootless containers
In addition to the obvious security benefits, one of the other reasons to run a container as rootless is that all the files created in the project folder will be owned by the correct user ID (UID) outside the container. To run the development container as a rootless container, modify the `devcontainer.json` file by adding the following lines to the end of it:
```
"workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,Z",
"workspaceFolder": "/workspace",
"runArgs": ["--userns=keep-id"],
"containerUser": "vscode"
```
These options tell VS Code to mount the Workspace with the proper SELinux context, create a user namespace that maps your UID and GID to the same values inside the container, and use `vscode` as your username inside the container. The `devcontainer.json` file should look like this (don't forget the commas at the end of the lines, as indicated):
![Updated devcontainer.json file][12]
(Brant Evans, [CC BY-SA 4.0][7])
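You can verify the effect of `--userns=keep-id` outside VS Code, too. A minimal sketch, assuming Podman is installed and an image such as `registry.fedoraproject.org/fedora` is available locally (the image name is only an example):

```shell
# Compare the UID seen inside the container with and without --userns=keep-id.
# Guarded so the snippet is a no-op where Podman is not installed.
if command -v podman >/dev/null 2>&1; then
    echo "host UID: $(id -u)"
    podman run --rm registry.fedoraproject.org/fedora id -u                   # 0: root inside the user namespace
    podman run --rm --userns=keep-id registry.fedoraproject.org/fedora id -u  # matches your host UID
fi
```

Files created by the second container would be owned by your user on the host, which is exactly the behavior the `devcontainer.json` options above request.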
Now that you've set up the container configuration, you can build the container and open the workspace inside it. Reopen the Command Palette (with **Ctrl+Shift+P**), and search for **Remote-Containers: Rebuild and Reopen in Container**. Click on it, and VS Code will start to build the container. Now is a great time to take a break (and get your favorite beverage), as building the container may take several minutes.
![Building the container][13]
(Brant Evans, [CC BY-SA 4.0][7])
Once the container build completes, the project will open inside the container. Files created or edited within the container will be reflected in the filesystem outside the container with the proper user permissions applied to the files. Now, you can proceed with development within the container. VS Code can even bring your SSH keys and Git configuration into the container so that committing code will work just like it does when editing outside the container.
### Next steps
Now that you've completed the basic setup and configuration, you can further enhance the configuration's usefulness. For example:
* Modify the Dockerfile to install additional software (e.g., required Python modules).
* Use a customized container image. For example, if you're doing Ansible development, you could use Quay.io's [Ansible Toolset][14]. (Be sure to add the `vscode` user to the container image via the Dockerfile.)
* Commit the files in the `.devcontainer` directory to the source code repository so that other developers can take advantage of the container definition for their development efforts.
Developing inside a container helps prevent conflicts between different projects by keeping the dependencies and code for each separate. You can use Podman to run containers in a rootless environment that increases security. By combining VS Code, the Remote - Containers extension, and Podman, you can easily set up a consistent environment for multiple developers, decrease setup time, and reduce bugs from differences in development environments in a secure fashion.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/vs-code-remote-containers-podman
作者:[Brant Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/branic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G (Women programming)
[2]: https://code.visualstudio.com/
[3]: https://code.visualstudio.com/docs/remote/containers
[4]: https://podman.io/
[5]: https://www.redhat.com/sysadmin/rootless-podman-makes-sense
[6]: https://opensource.com/sites/default/files/uploads/vscode-remote_podman.png (Enter "podman" in the text box)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/adddevelopmentcontainerconfigurationfiles.png (Remote-Containers: Add Development Container Configuration Files)
[9]: https://opensource.com/sites/default/files/uploads/python3.png (Select Python 3 definition)
[10]: https://opensource.com/sites/default/files/uploads/python3default.png (Select the 3 (default) option)
[11]: https://opensource.com/sites/default/files/uploads/unchecknodejs.png (Uncheck "Install Node.js")
[12]: https://opensource.com/sites/default/files/uploads/newdevcontainerjson.png (Updated devcontainer.json file)
[13]: https://opensource.com/sites/default/files/uploads/buildingcontainer.png (Building the container)
[14]: https://quay.io/repository/ansible/toolset

View File

@@ -1,194 +0,0 @@
[#]: subject: "Install OpenVPN on your Linux PC"
[#]: via: "https://opensource.com/article/21/7/openvpn-router"
[#]: author: "D. Greg Scott https://opensource.com/users/greg-scott"
[#]: collector: "lujun9972"
[#]: translator: "perfiffer"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install OpenVPN on your Linux PC
======
After setting up a VPN server, the next step is installing and
configuring OpenVPN.
![Open ethernet cords.][1]
OpenVPN creates an encrypted tunnel between two points, preventing a third party from accessing your network traffic. By setting up your virtual private network (VPN) server, you become your own VPN provider. Many popular VPN services already use [OpenVPN][2], so why tie your connection to a specific provider when you can have complete control?
The [first article][3] in this series demonstrated how to set up and configure a Linux PC to serve as your OpenVPN server. It also discussed how to configure your router so that you can reach your VPN server from an outside network.
This second article demonstrates how to install the OpenVPN server software using steps customized from the [OpenVPN wiki][4].
### Install OpenVPN
First, install OpenVPN and the `easy-rsa` application (to help you set up authentication on your server) using your package manager. This example uses Fedora Linux; if you've chosen something different, use the appropriate command for your distribution:
```
$ sudo dnf install openvpn easy-rsa
```
This creates some empty directories:
* `/etc/openvpn`
* `/etc/openvpn/client`
* `/etc/openvpn/server`
If these aren't created during installation, create them manually.
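A one-line sketch for creating the missing directories (the brace expansion covers both subdirectories, and `-p` creates the parent as needed without complaining about existing directories):

```shell
# Create the OpenVPN directory tree if the package did not.
sudo mkdir -p /etc/openvpn/{client,server}
```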
### Set up authentication
OpenVPN depends on the `easy-rsa` scripts and should have its own copy of them. Copy the `easy-rsa` scripts and files:
```
$ sudo mkdir /etc/openvpn/easy-rsa
$ sudo cp -rai /usr/share/easy-rsa/3/* \
/etc/openvpn/easy-rsa/
```
Authentication is important, and OpenVPN takes it very seriously. The theory is that if Alice needs to access private information inside Bob's company, it's vital that Bob makes sure Alice really is Alice. Likewise, Alice must make sure that Bob is really Bob. We call this mutual authentication.
Today's best practice checks an attribute from two of three possible factors:
* Something you have
* Something you know
* Something you are
There are lots of choices. This OpenVPN setup uses:
* **Certificates:** Something both the client and server have
* **Certificate password:** Something the people know
Alice and Bob need help to mutually authenticate. Since they both trust Cathy, Cathy takes on a role called **certificate authority** (CA). Cathy attests that Alice and Bob both are who they claim to be. Because Alice and Bob both trust Cathy, now they also trust each other.
But what convinces Cathy that Alice and Bob really are Alice and Bob? Cathy's reputation in the community depends on getting this right, and so if she wants Danielle, Evan, Fiona, Greg, and others to also trust her, she will rigorously test Alice and Bob's claims. After Alice and Bob convince Cathy that they really are Alice and Bob, Cathy signs certificates for them to share with each other and the world.
How do Alice and Bob know Cathy—and not somebody impersonating her—signed the certificates? They use a technology called **public key cryptography:**
* Find a cryptography algorithm that encrypts with one key and decrypts with another.
* Declare one key private and share the other key with the public.
* Cathy shares her public key and a clear-text copy of her signature with the world.
* Cathy encrypts her signature with her private key. Anyone can decrypt it with her public key.
* If Cathy's decrypted signature matches the clear-text copy, Alice and Bob can trust Cathy really did sign it.
You use this same technology every time you buy goods and services online.
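The signature flow described above can be sketched with OpenSSL. This illustrates the concept only; the names are hypothetical and are no part of the OpenVPN setup itself:

```shell
# Cathy generates a key pair, signs a statement with her private key,
# and anyone can verify the signature with her public key.
workdir=$(mktemp -d) && cd "$workdir"
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out cathy_private.pem 2>/dev/null
openssl pkey -in cathy_private.pem -pubout -out cathy_public.pem
echo "Alice really is Alice" > statement.txt
openssl dgst -sha256 -sign cathy_private.pem -out statement.sig statement.txt
openssl dgst -sha256 -verify cathy_public.pem \
    -signature statement.sig statement.txt   # prints "Verified OK"
```

If the statement or the signature were altered in transit, the final command would report a verification failure instead.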
### Implement authentication
OpenVPN's [documentation][5] suggests setting up a CA on a separate system or at least a separate directory on the OpenVPN server. The documentation also suggests generating server and client certificates from the server and clients. Because this is a simple setup, you can use the OpenVPN server as its own CA and put the certificates and keys into specified directories on the server.
Generate certificates from the server and copy them to each client as part of client setup.
This implementation uses self-signed certificates. This works because the server trusts itself, and clients trust the server. Therefore, the server is the best CA to sign certificates.
From the OpenVPN server, set up the CA:
```
$ sudo mkdir /etc/openvpn/ca
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa init-pki
$ sudo /etc/openvpn/easy-rsa/easyrsa build-ca
```
Use an easy-to-remember but hard-to-guess passphrase.
Set up the server key pair and certificate request:
```
$ cd /etc/openvpn/server
$ sudo /etc/openvpn/easy-rsa/easyrsa init-pki
$ sudo /etc/openvpn/easy-rsa/easyrsa gen-req OVPNserver2020 nopass
```
In this example, `OVPNserver2020` is whatever hostname you assigned your OpenVPN server in the first article in this series.
### Generate and sign certs
Now you must send a server request to the CA and generate and sign the server certificate.
This step essentially copies the request file from `/etc/openvpn/server/pki/reqs/OVPNserver2020.req` to `/etc/openvpn/ca/pki/reqs/OVPNserver2020.req` to prepare it for review and signing:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
import-req /etc/openvpn/server/pki/reqs/OVPNserver2020.req OVPNserver2020
```
### Review and sign the request
You've generated a request, so now you must review and sign the certs:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
show-req OVPNserver2020
```
Sign as the server:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
sign-req server OVPNserver2020
```
Put a copy of the server and CA certificates where they belong for the config file to pick them up:
```
$ sudo cp /etc/openvpn/ca/pki/issued/OVPNserver2020.crt \
/etc/openvpn/server/pki/
$ sudo cp /etc/openvpn/ca/pki/ca.crt \
/etc/openvpn/server/pki/
```
Next, generate [Diffie-Hellman][6] parameters so that clients and the server can exchange session keys:
```
$ cd /etc/openvpn/server
$ sudo /etc/openvpn/easy-rsa/easyrsa gen-dh
```
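As an optional sanity check, you can inspect the generated parameters with OpenSSL (the `pki/dh.pem` path assumes easy-rsa 3 defaults):

```shell
# Print a summary of the Diffie-Hellman parameters file; the first line
# reports the parameter size, e.g. "DH Parameters: (2048 bit)".
sudo openssl dhparam -in /etc/openvpn/server/pki/dh.pem -text -noout | head -n 2
```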
### Almost there
The next article in this series will demonstrate how to configure and start the OpenVPN server you just built.
* * *
_This article is based on D. Greg Scott's [blog][7] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/openvpn-router
作者:[D. Greg Scott][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-scott
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab (Open ethernet cords.)
[2]: https://openvpn.net/
[3]: https://opensource.com/article/21/7/vpn-openvpn-part-1
[4]: https://community.openvpn.net/openvpn/wiki
[5]: https://openvpn.net/community-resources/
[6]: https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
[7]: https://www.dgregscott.com/how-to-build-a-vpn-in-four-easy-steps-without-spending-one-penny/

View File

@@ -1,143 +0,0 @@
[#]: subject: "Access OpenVPN from a client computer"
[#]: via: "https://opensource.com/article/21/7/openvpn-client"
[#]: author: "D. Greg Scott https://opensource.com/users/greg-scott"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Access OpenVPN from a client computer
======
After building your own VPN on Linux, it's time to finally use it.
![Woman programming][1]
OpenVPN creates an encrypted tunnel between two points, preventing a third party from accessing your network traffic. By setting up your virtual private network (VPN) server, you become your own VPN provider. Many popular VPN services already use [OpenVPN][2], so why tie your connection to a specific provider when you can have complete control yourself?
The [first article][3] in this series set up a server for your VPN, the [second article][4] demonstrated how to install and configure the OpenVPN server software, while the [third article][5] explained how to configure your firewall and start the OpenVPN server software. This fourth and final article demonstrates how to use your OpenVPN server from client computers. This is the reason you did all the work in the previous three articles!
### Create client certificates
Remember that the method of authentication for OpenVPN requires both the server and the client to _have_ something (certificates) and to _know_ something (a password). It's time to set that up.
First, create a client certificate and a private key for your client computer. On your OpenVPN server, generate a certificate request. It asks for a passphrase; make sure you remember it:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
gen-req greglaptop
```
In this example, `greglaptop` is the client computer for which this certificate is being created.
There's no need to import the request into the certificate authority (CA) because it's already there. Review it to make sure:
```
$ cd /etc/openvpn/ca
$ /etc/openvpn/easy-rsa/easyrsa \
show-req greglaptop
```
Since the request is already in the CA, sign it as a client certificate:
```
$ /etc/openvpn/easy-rsa/easyrsa \
sign-req client greglaptop
```
### Install the OpenVPN client software
On Linux, Network Manager may already have an OpenVPN client included. If not, you can install the plugin:
```
$ sudo dnf install NetworkManager-openvpn
```
On Windows, you must download and install the OpenVPN client from the OpenVPN download site. Launch the installer and follow the prompts.
### Copy certificates and private keys to the client
Now your client needs the authentication credentials you generated for it. You generated these on the server, so you must transport them over to your client. I tend to use SSH for this. On Linux, that's the `scp` command. On Windows, you can use [WinSCP][6] as administrator to pull the certificates and keys.
Assuming the client is named `greglaptop`, here are the file names and server locations:
```
/etc/openvpn/ca/pki/issued/greglaptop.crt
/etc/openvpn/ca/pki/private/greglaptop.key
/etc/openvpn/ca/pki/ca.crt
```
On Linux, copy these to the `/etc/pki/tls/certs/` directory. On Windows, copy them to the `C:\Program Files\OpenVPN\config` directory.
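As an illustration of that transfer from a Linux client, the sketch below just prints the `scp` commands it would run; the server name `vpnserver` is a placeholder for your own host, and the paths assume the easy-rsa layout used in this series (easy-rsa keeps `ca.crt` at the top of its `pki/` directory):

```shell
# Hypothetical dry run: print the scp commands that would fetch the
# client credentials from the server. "vpnserver" and the client name
# are placeholders; run the printed commands to perform the copy.
print_copy_cmds() {
    client=$1
    server=$2
    echo "scp $server:/etc/openvpn/ca/pki/issued/$client.crt ."
    echo "scp $server:/etc/openvpn/ca/pki/private/$client.key ."
    echo "scp $server:/etc/openvpn/ca/pki/ca.crt ."
}

print_copy_cmds greglaptop vpnserver
```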
### Copy and customize the client configuration file
On Linux, you can either copy the `/etc/openvpn/client/OVPNclient2020.ovpn` file on the server to `/etc/NetworkManager/system-connections/`, or you can navigate to Network Manager in System Settings and add a VPN connection. 
For the connection type, select **Certificates**. Point Network Manager to the certificates and keys you copied from the server.
![VPN displayed in Network Manager][7]
(Seth Kenlon, [CC BY-SA 4.0][8])
On Windows, run WinSCP as administrator to copy the client configuration template `/etc/openvpn/client/OVPNclient2020.ovpn` on the server to `C:\Program Files\OpenVPN\config` on the client. Then:
* Rename it to match the certificate above.
* Change the names of the CA certificate, client certificate, and key to match the names copied above from the server.
* Edit the IP information to match your network.
You need administrator permissions to edit the client config files. The easiest way to get these might be to launch a CMD window as administrator and then launch Notepad from that window to edit the files.
### Connect your client to the server
On Linux, Network manager displays your VPN. Select it to connect.
![Add a VPN connection in Network Manager][9]
(Seth Kenlon, [CC BY-SA 4.0][8])
On Windows, start the OpenVPN graphical user interface (GUI). It produces a graphic in the Windows System Tray on the right side of the taskbar, usually in the lower-right corner of your Windows desktop. Right-click the graphic to connect, disconnect, or view the status.
For the first connection, edit the "remote" line of your client config file to use the _inside IP address_ of your OpenVPN server. Connect to the server from inside your office network by right-clicking on the OpenVPN GUI in the Windows System Tray and clicking **Connect**. Debug this connection. This should find and fix problems without any firewall issues getting in the way because both the client and server are on the same side of the firewall.
Next, edit the "remote" line of your client config file to use the _public IP address_ for your OpenVPN server. Bring the Windows client to an outside network and connect. Debug any issues.
### Connect securely
Congratulations! You have an OpenVPN network ready for your other client systems. Repeat the setup steps for the rest of your clients. You might even use Ansible to distribute certs and keys and keep them up to date. 
* * *
_This article is based on D. Greg Scott's [blog][10] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/openvpn-client
作者:[D. Greg Scott][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-scott
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: https://openvpn.net/
[3]: https://opensource.com/article/21/7/vpn-openvpn-part-1
[4]: https://opensource.com/article/21/7/vpn-openvpn-part-2
[5]: https://opensource.com/article/21/7/vpn-openvpn-part-3
[6]: https://winscp.net/eng/index.php
[7]: https://opensource.com/sites/default/files/uploads/network-manager-profile.jpg (VPN displayed in Network Manager)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/network-manager-connect.jpg (Add a VPN connection in Network Manager)
[10]: https://www.dgregscott.com/how-to-build-a-vpn-in-four-easy-steps-without-spending-one-penny/


@ -1,198 +0,0 @@
[#]: subject: "Monitor your Linux system in your terminal with procps-ng"
[#]: via: "https://opensource.com/article/21/8/linux-procps-ng"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Monitor your Linux system in your terminal with procps-ng
======
How to find the process ID (PID) of a program. The most common Linux
tools for this are provided by the procps-ng package, including the
ps, pstree, pidof, and pgrep commands.
![System monitor][1]
A process, in [POSIX][2] terminology, is an ongoing event being managed by an operating system's kernel. A process is spawned when you launch an application, although there are many other processes running in the background of your computer, including programs to keep your system time accurate, to monitor for new filesystems, to index files, and so on.
Most operating systems have a system activity monitor of some kind so you can learn what processes are running at any given moment. Linux has a few for you to choose from, including GNOME System Monitor and KSysGuard. Both are useful applications on the desktop, but Linux also provides the ability to monitor your system in your terminal. Regardless of which you choose, examining a specific process is a common task for those who take an active role in managing their computer.
In this article, I demonstrate how to find the process ID (PID) of a program. The most common tools for this are provided by the [procps-ng][3] package, including the `ps` and `pstree`, `pidof`, and `pgrep` commands.
### Find the PID of a running program
Sometimes you want to get the process ID (PID) of a specific application you know you have running. The `pidof` and `pgrep` commands find processes by command name.
The `pidof` command returns the PIDs of a command, searching for the exact command by name:
```
$ pidof bash
1776 5736
```
The `pgrep` command allows for regular expressions (regex):
```
$ pgrep .sh
1605
1679
1688
1776
2333
5736
$ pgrep bash
5736
```
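Both commands read information the kernel publishes under `/proc`. As a rough illustration (not a replacement for the real tools), this sketch matches each process's `comm` name the way `pidof` does, assuming a Linux `/proc` filesystem:

```shell
# Teaching sketch of a pidof-style lookup: walk /proc and print the
# PIDs whose "comm" (command name, truncated to 15 characters by the
# kernel) exactly matches the argument. Linux only.
my_pidof() {
    for d in /proc/[0-9]*; do
        if [ "$(cat "$d/comm" 2>/dev/null)" = "$1" ]; then
            echo "${d#/proc/}"
        fi
    done
}
```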
### Find a PID by file
You can find the PID of a process that is using a specific file with the `fuser` command.
```
$ fuser --user ~/example.txt                    
/home/tux/example.txt:  3234(tux)
```
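Under the hood, `fuser` inspects processes' open file descriptors. A much-simplified sketch of the idea (Linux `/proc` only; the real tool also checks memory maps, working directories, and more):

```shell
# Simplified fuser-style lookup: print the PIDs whose open file
# descriptors resolve to the given file. Requires a Linux /proc and
# permission to read the target processes' fd directories.
my_fuser() {
    target=$(readlink -f "$1")
    for fd in /proc/[0-9]*/fd/*; do
        if [ "$(readlink -f "$fd" 2>/dev/null)" = "$target" ]; then
            pid=${fd#/proc/}
            echo "${pid%%/*}"
        fi
    done | sort -u
}
```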
### Get a process name by PID
If you have the PID _number_ of a process but not the command that spawned it, you can do a "reverse lookup" with `ps`:
```
$ ps 3234
 PID TTY      STAT   TIME COMMAND
3234 pts/1    Ss     0:00 emacs
```
### List all processes
The `ps` command lists processes. You can list every process on your system with the `-e` option:
```
$ ps -e | less
PID TTY          TIME CMD
  1 ?        00:00:03 systemd
  2 ?        00:00:00 kthreadd
  3 ?        00:00:00 rcu_gp
  4 ?        00:00:00 rcu_par_gp
  6 ?        00:00:00 kworker/0:0H-events_highpri
[...]
5648 ?        00:00:00 gnome-control-c
5656 ?        00:00:00 gnome-terminal-
5736 pts/1    00:00:00 bash
5791 pts/1    00:00:00 ps
5792 pts/1    00:00:00 less
(END)
```
### List just your processes
The output of `ps -e` can be overwhelming, so use `-U` to see the processes of just one user:
```
$ ps -U tux | less
 PID TTY          TIME CMD
3545 ?        00:00:00 systemd
3548 ?        00:00:00 (sd-pam)
3566 ?        00:00:18 pulseaudio
3570 ?        00:00:00 gnome-keyring-d
3583 ?        00:00:00 dbus-daemon
3589 tty2     00:00:00 gdm-wayland-ses
3592 tty2     00:00:00 gnome-session-b
3613 ?        00:00:00 gvfsd
3618 ?        00:00:00 gvfsd-fuse
3665 tty2     00:01:03 gnome-shell
[...]
```
That produces 200 fewer (give or take a hundred, depending on the system you're running it on) processes to sort through.
You can view the same output in a different format with the `pstree` command:
```
$ pstree -U tux -u --show-pids
[...]
├─gvfsd-metadata(3921)─┬─{gvfsd-metadata}(3923)
│                      └─{gvfsd-metadata}(3924)
├─ibus-portal(3836)─┬─{ibus-portal}(3840)
│                   └─{ibus-portal}(3842)
├─obexd(5214)
├─pulseaudio(3566)─┬─{pulseaudio}(3640)
│                  ├─{pulseaudio}(3649)
│                  └─{pulseaudio}(5258)
├─tracker-store(4150)─┬─{tracker-store}(4153)
│                     ├─{tracker-store}(4154)
│                     ├─{tracker-store}(4157)
│                     └─{tracker-store}(4178)
└─xdg-permission-(3847)─┬─{xdg-permission-}(3848)
                        └─{xdg-permission-}(3850)
```
### List just your processes with context
You can see extra context for all of the processes you own with the `-u` option.
```
$ ps -U tux -u
USER  PID %CPU %MEM    VSZ   RSS TTY STAT START  TIME COMMAND
tux  3545  0.0  0.0  89656  9708 ?   Ss   13:59  0:00 /usr/lib/systemd/systemd --user
tux  3548  0.0  0.0 171416  5288 ?   S    13:59  0:00 (sd-pam)
tux  3566  0.9  0.1 1722212 17352 ?  S<sl 13:59  0:29 /usr/bin/pulseaudio [...]
tux  3570  0.0  0.0 664736  8036 ?   SLl  13:59  0:00 /usr/bin/gnome-keyring-daemon [...]
[...]
tux  5736  0.0  0.0 235628  6036 pts/1 Ss 14:18  0:00 bash
tux  6227  0.0  0.4 2816872 74512 tty2 Sl+ 14:30  0:00 /opt/firefox/firefox-bin [...]
tux  6660  0.0  0.0 268524  3996 pts/1 R+ 14:50  0:00 ps -U tux -u
tux  6661  0.0  0.0 219468  2460 pts/1 S+ 14:50  0:00 less
```
### Troubleshoot with PIDs
If you're having trouble with a specific application, or you're just curious about what else on your system an application uses, you can see a memory map of the running process with `pmap`:
```
$ pmap 1776
1776:   bash
000055f9060ec000   1056K r-x-- bash
000055f9063f3000     16K r---- bash
000055f906400000     40K rw---   [ anon ]
00007faf0fa67000   9040K r--s- passwd
00007faf1033b000     40K r-x-- libnss_sss.so.2
00007faf10345000   2044K ----- libnss_sss.so.2
00007faf10545000      4K rw--- libnss_sss.so.2
00007faf10546000 212692K r---- locale-archive
00007faf1d4fb000   1776K r-x-- libc-2.28.so
00007faf1d6b7000   2044K ----- libc-2.28.so
00007faf1d8ba000      8K rw--- libc-2.28.so
[...]
```
### Process IDs
The **procps-ng** package has all the commands you need to investigate and monitor what your system is using at any moment. Whether you're just curious about how all the disparate parts of a Linux system fit together, or whether you're investigating an error, or you're looking to optimize how your computer is performing, learning these commands gives you a significant advantage for understanding your OS.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-procps-ng
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/system-monitor-splash.png?itok=0UqsjuBQ (System monitor)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://gitlab.com/procps-ng


@ -1,72 +0,0 @@
[#]: subject: "4 alternatives to cron in Linux"
[#]: via: "https://opensource.com/article/21/7/alternatives-cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
4 alternatives to cron in Linux
======
There are a few other open source projects out there that can be used
either in conjunction with cron or instead of cron.
![Alarm clocks with different time][1]
The [Linux `cron` system][2] is a time-tested and proven technology. However, it's not always the right tool for system automation. There are a few other open source projects out there that can be used either in conjunction with `cron` or instead of `cron`.
### Linux at command
`Cron` is intended for long-term repetition. You schedule a job, and it runs at a regular interval from now until the computer is decommissioned. Sometimes you just want to schedule a one-off command to run at a time you happen not to be at your computer. For that, you can use the `at` command.
The syntax of `at` is far simpler and more flexible than the `cron` syntax, and it has both an interactive and non-interactive method for scheduling (so you could use `at` to create an `at` job if you really wanted to.)
```
$ echo "rsync -av /home/tux/ me@myserver:/home/tux/" | at 1:30 AM
```
It feels natural, it's easy to use, and you don't have to clean up old jobs because they're entirely forgotten once they've been run.
Read more about the [at command][3] to get started.
### Systemd
In addition to managing processes on your computer, `systemd` can also help you schedule them. Like traditional `cron` jobs, `systemd` timers can trigger events, such as shell scripts and commands, at specified time intervals. This can be once a day on a specific day of the month (and then, perhaps only if it's a Monday, for example), or every 15 minutes during business hours from 09:00 to 17:00.
Timers can also do some things that `cron` jobs can't.
For example, a timer can trigger a script or program to run a specific amount of time _after_ an event, such as boot, startup, completion of a previous task, or even the prior completion of the service unit called by the timer itself!
If your system runs `systemd`, then you're technically using `systemd` timers already. Default timers perform menial tasks like rotating log files, updating the mlocate database, managing the DNF database, and so on. Creating your own is easy, as demonstrated by David Both in his article [Use systemd timers instead of cronjobs][4].
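As a sketch of what this looks like in practice, here is a hypothetical pair of units (the unit names and the script path are invented for illustration) that would run a script every 15 minutes during business hours, Monday through Friday:

```ini
# backup.timer (hypothetical example unit)
[Unit]
Description=Run backup.service every 15 minutes during business hours

[Timer]
OnCalendar=Mon..Fri *-*-* 09..16:00/15:00
Persistent=true

[Install]
WantedBy=timers.target

# backup.service (the unit the timer activates)
[Unit]
Description=Back up the home directory

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```

You would enable such a timer with `systemctl enable --now backup.timer` and inspect its schedule with `systemctl list-timers`.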
### Anacron
`Cron` specializes in running a command at a specific time. This works well for a server that's never hibernating or powered down. Still, it's pretty common for laptops and desktop workstations to either intentionally or absent-mindedly turn the computer off from time to time. When the computer's not on, `cron` doesn't run, so important jobs (such as backing up data) get skipped.
The `anacron` system is designed to ensure that jobs are run periodically rather than on a schedule. This means you can leave a computer off for several days and still count on `anacron` to run essential tasks when you boot it up again. `Anacron` works in tandem with `cron`, so it's not strictly an alternative to it, but it's a meaningful alternative way of scheduling tasks. Many a sysadmin has configured a `cron` job to back up data late at night on a remote worker's computer, only to discover that the job's only been run once in the past six months. `Anacron` ensures that important jobs happen _sometime_ when they can rather than _never_ when they were scheduled.
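For reference, an `anacron` job is a single line in `/etc/anacrontab`. A hypothetical entry (the job identifier and script path are invented) that runs a backup once a day, starting 10 minutes after the machine comes up if the scheduled run was missed:

```
# period(days)  delay(minutes)  job-identifier  command
1               10              daily-backup    /usr/local/bin/backup.sh
```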
Read more about [using anacron for a better crontab][5].
### Automation
Computers and technology are meant to make lives better and work easier. Linux provides its users with lots of helpful features to ensure important operating system tasks get done. Take a look at what's available, and start using these features for your own tasks.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/alternatives-cron-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/article/21/7/cron-linux
[3]: https://opensource.com/article/21/7/intro-command
[4]: https://opensource.com/article/20/7/systemd-timers
[5]: https://opensource.com/article/21/2/linux-automation


@ -1,87 +0,0 @@
[#]: subject: "Automatically Synchronize Subtitle With Video Using SubSync"
[#]: via: "https://itsfoss.com/subsync/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Automatically Synchronize Subtitle With Video Using SubSync
======
Let me share a scenario. You are trying to watch a movie or video and you need subtitles. You download the subtitle only to find that the subtitle is not properly synchronized. There are no other good subtitles available. What to do now?
You can [synchronize subtitles in VLC by pressing G or H keys][1]. It adds a delay to the subtitles. This could work if the subtitle is out of sync by the same time interval throughout the video. But if that's not the case, SubSync could be of great help here.
### SubSync: Subtitle Speech Synchronizer
[SubSync][2] is a nifty open source utility available for Linux, macOS and Windows.
It synchronizes the subtitle by listening to the audio track, and that's how it works its magic. It will work even if the audio track and the subtitle are in different languages. If necessary, the subtitle can also be translated, but I did not test this feature.
I made a simple test using a subtitle that was not in sync with the video I was playing. To my surprise, it worked pretty smoothly, and I got perfectly synced subtitles.
Using SubSync is simple. You start the application, and it asks you to add the subtitle file and the video file.
![User interface for SubSync][3]
You'll have to specify the language of the subtitle and the video on the interface. It may download additional assets based on the language in use.
![SubSync may download additional packages for language support][4]
Please keep in mind that it takes some time to synchronize the subtitles, depending on the length of the video and subtitle. You may grab your cup of tea/coffee or beer while you wait for the process to complete.
You can see the synchronization status in progress and even save it before it gets completed.
![SubSync synchronization in progress][5]
Once the synchronization completes, you hit the save button and either save the changes to the original file or save it as a new subtitle file.
![Synchronization completed][6]
I cannot say that it will work in all cases, but it worked for the sample test I ran.
### Installing SubSync
SubSync is a cross-platform application and you can get the installer files for Windows and macOS from its [download page][7].
For Linux users, SubSync is available as a Snap package. If your distribution has Snap support enabled, use the following command to install SubSync:
```
sudo snap install subsync
```
Please keep in mind that it will take some time to download SubSync snap package. So have a good internet connection or plenty of patience.
### In the end
Personally, I am addicted to subtitles. Even if I am watching movies in English on Netflix, I keep the subtitles on. It helps me understand each dialogue clearly, especially if there is a strong accent. Without subtitles, I could never understand a [word from Mickey O'Neil (played by Brad Pitt) in the movie Snatch][8]. Dags!!
Using SubSync is a lot easier than [using Subtitle Editor][9] for synchronizing subtitles. After [Penguin Subtitle Player][10], this is another great tool for someone like me who searches the entire internet for rare or recommended (mystery) movies from different countries.
If you are a subtitle user, I have a feeling you would like this tool. If you do use it, please share your experience with it in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/subsync/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
[2]: https://subsync.online/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-interface.png?resize=593%2C280&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize.png?resize=522%2C189&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-1.png?resize=424%2C278&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-2.png?resize=424%2C207&ssl=1
[7]: https://subsync.online/en/download.html
[8]: https://www.youtube.com/watch?v=tGDO-9hfaiI
[9]: https://itsfoss.com/subtitld/
[10]: https://itsfoss.com/penguin-subtitle-player/


@ -0,0 +1,121 @@
[#]: subject: "A guide to database replication with open source"
[#]: via: "https://opensource.com/article/21/8/database-replication-open-source"
[#]: author: "John Lafleur https://opensource.com/users/john-lafleur"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
A guide to database replication with open source
======
Why choose log-based Change Data Capture (CDC) replication for
databases. Learn about the open source options available to you.
![Cloud and database icons][1]
In the world of constantly evolving data, one question often pops up: How is it possible to seamlessly replicate data that is growing exponentially and coming from an increasing number of sources? This article explains some of the foundational open source technologies that may help commoditize database replication tasks into data warehouses, lakes, or other databases.
One popular replication technology is **Change Data Capture (CDC)**, a pattern that allows row-level data changes at the source database to be quickly identified, captured, and delivered in real-time to the destination data warehouse, lake, or other database. With CDC, only the data that has changed since the last replication—categorized by insert, update, and delete operations—is in scope. This incremental design approach makes CDC significantly more efficient than other database replication patterns, such as a full-database replication. With full-database replication, the entire source database table with potentially millions of rows is scanned and copied over to the destination.
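To make the insert/update/delete mechanics concrete, here is a toy sketch (not taken from any CDC product) that replays a changelog of row-level changes against a replica kept as a `key,value` text file; real systems work from database transaction logs and typed rows, but the incremental apply step follows the same pattern:

```shell
# Toy CDC "apply" step: read op,key,value lines from a changelog and
# replay each change against a key,value table file. Purely
# illustrative; requires GNU sed for the in-place edits.
apply_cdc() {
    table=$1
    changelog=$2
    while IFS=, read -r op key value; do
        case "$op" in
            insert) echo "$key,$value" >>"$table" ;;
            update) sed -i "s/^$key,.*/$key,$value/" "$table" ;;
            delete) sed -i "/^$key,/d" "$table" ;;
        esac
    done <"$changelog"
}
```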
### Open source CDC
[Debezium][2] is an open source distributed CDC platform that leverages Apache Kafka to transport data changes. It continuously monitors databases, ensuring that each row-level change is sent to the destination in exactly the same order they were committed to the database. However, using Debezium in a do-it-yourself replication project can be a heavy lift. It requires a deep understanding of concepts related to the source and destination systems, Kafka, and Debezium internals. For example, just take a look at all the details required for a [Debezium MySQL connector][3].
[Airbyte][4] is an open source data integration engine that allows you to consolidate your data in your data warehouses, lakes, and databases. Airbyte leverages Debezium and does all the heavy lifting. Indeed, within Airbyte, Debezium is run as an embedded library. This engineering design allows for using Debezium without needing to use Apache Kafka or another language runtime. This [video][5] shows how you can use CDC to replicate a PostgreSQL database with Airbyte in a matter of minutes. The open source code is available for use with Postgres, MySQL, and MSSQL and will soon be available for all other major databases that support it.
### What are some typical CDC use cases for databases?
Databases lie at the core of today's data infrastructures, and several different use cases apply.
#### 1\. Squash the overhead across your transactional databases and network
With CDC in place, it's possible to deliver data changes as a continuous stream without placing unnecessary overhead on source database systems. This means that databases can focus on doing the more valuable tasks that they are engineered for, resulting in higher throughput and lower latency for apps. With CDC, only incremental data changes are transferred over the network, reducing data transfer costs, minimizing network saturation, and eliminating the need for fine-tuning the system to handle peak batch traffic.
#### 2\. Keep transactional and analytical databases synchronized
With data being generated at dizzying rates, extracting insights from data is key to an organization's success. CDC captures live data changes from the transactional database and ships those regularly to the analytical database or warehouse, where they can be analyzed to extract deeper insights. For example, imagine that you're an online travel company. You can capture real-time online booking activity at the database tier (let's say using PostgreSQL) and send these transactions to your analytical database to learn more about your customer's buying patterns and preferences.
#### 3\. Migrate data from legacy systems to next-generation data platforms
With the shift towards modernizing legacy database systems by going to cloud-based database instances, moving data to these newer platforms has become more critical than ever. With CDC, data is synchronized periodically, allowing you to modernize your data platforms at your own pace while maintaining both your legacy and next-generation data platforms in the interim. This setup ensures flexibility and can keep business operational without missing a heartbeat.
#### 4\. Warm up a dynamic data cache for applications
Caching is a standard technique for improving application performance, but data caches must be warmed up (or loaded with data) for them to be effective. With a warm data cache, the application can access data quickly, bypassing the core database. For example, this pattern is extremely beneficial for an application that does many data lookups because loading this lookup data in a cache can offload the read workload from the core database. Using CDC, the data cache can be dynamically updated all the time. For example, selective lookup tables in the database can be loaded into a cache during the initial warm-up cycle. Any future modifications in the lookup table data will incrementally be propagated to update the cache.
### What CDC implementations exist and what database should you pick?
CDC has been around for quite some time, and over the years, several widely used implementations have sprung up across various products. However, not all CDC implementations are created equal, and you need to pick the proper implementation to get a clear picture of the data changes. I summarize some of these implementations and the challenges of using each of them in the list below:
#### Date modified
This approach tracks metadata across every row in the table, including who created the row, who recently modified the row, and when the row was created and modified.
**Challenges**:
* Not easy to track data deletes since the date_modified field no longer exists for a deleted row.
* Needs additional compute resources to process the date_modified field. If indexing is used on the date_modified field, the index will need additional compute and storage resources.
#### Binary diffs
This implementation calculates the difference in state between the current data and the previous data.
**Challenges**:
* Calculating state differences can be complex and does not scale well when data volumes are large.
* Needs additional compute resources and cannot be done in real-time.
#### Database trigger
This method needs database triggers to be created with logic to manage the metadata within the same table or in a separate book-keeping table.
**Challenges**:
* Triggers must fire for every transaction, and this can slow down the transactional workload.
* The data engineer must write additional complex rollback logic to handle the case of a transaction failure.
* If the table schema is modified, the trigger must be manually updated with the latest schema changes.
* SQL language differences across the different database systems mean that triggers are not easily portable and might need to be re-written.
#### Log-based
This implementation reads data directly from the database logs and journal files to minimize the impact of the capture process. Since database logs and journal files exist in every transactional database product, the experience is transparent. This means it does not require any logical changes in terms of database objects or the application running on top of the database.
**Challenges**:
* If the destination database system is down, the source database system logs will need to be kept intact until the sync happens.
* Database operations that bypass the log file will not be captured. This is a corner case for most relational database use-cases since logs are required to guarantee [ACID][6] behaviors.
* For example, a **TRUNCATE** table statement might not log data, and in this case, forced logging through a query hint or configuration might be required.
When it comes to production databases, the choice is clear: Log-based CDC is the way forward due to its reliability, ability to scale under massive data volumes, and ease of use without requiring any database or app changes.
### Conclusion
I hope this article was useful in explaining why log-based CDC replication for databases matters and in introducing the new open source options available to you. These options provide endless replication possibilities, especially now that Airbyte has made log-based CDC replication much easier.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/database-replication-open-source
作者:[John Lafleur][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/john-lafleur
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg (Cloud and database icons)
[2]: https://github.com/debezium/
[3]: https://debezium.io/documentation/reference/1.6/connectors/mysql.html
[4]: https://airbyte.io/
[5]: https://www.youtube.com/watch?v=NMODvLgZvuE
[6]: https://en.wikipedia.org/wiki/ACID


@ -0,0 +1,150 @@
[#]: subject: "Build a JAR file with fastjar and gjar"
[#]: via: "https://opensource.com/article/21/8/fastjar"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Build a JAR file with fastjar and gjar
======
Utilities like fastjar, gjar, and jar help you manually or
programmatically build JAR files, while other toolchains such as Maven
and Gradle offer features for dependency management.
![Someone wearing a hardhat and carrying code ][1]
One of the many advantages of Java, in my experience, is its ability to deliver applications in a neat and tidy package (called a JAR, or _Java archive_). JAR files make it easy for users to download and launch an application they want to try, easy to transfer that application from one computer to another (and Java is cross-platform, so sharing liberally can be encouraged), and easy for new programmers to look inside a JAR to find out what makes a Java app run.
There are many ways to create a JAR file, including toolchain solutions such as Maven and Gradle, and one-click build features in your IDE. However, there are also stand-alone commands such as `fastjar`, `gjar`, and just plain old `jar`, which are useful for quick and simple builds, and to demonstrate what a JAR file needs to run.
### Install
On Linux, you may already have the `fastjar`, `gjar`, or `jar` commands as part of an OpenJDK package or GCJ (GCC-Java). You can test whether any of these commands are installed by typing the command with no arguments:
```
$ fastjar
Try 'fastjar --help' for more information.
$ gjar
jar: must specify one of -t, -c, -u, -x, or -i
jar: Try 'jar --help' for more information
$ jar
Usage: jar [OPTION...] [ [--release VERSION] [-C dir] files] ...
Try `jar --help' for more information.
```
I have all of them installed, but you only need one. All of these commands are capable of building a JAR.
On a modern Linux system such as Fedora, typing a missing command causes your OS to prompt you to install it for you.
Alternately, you can just [install Java][2] from [AdoptOpenJDK.net][3] for Linux, MacOS, and Windows.
### Build a JAR 
First, you need a Java application to build.
To keep things simple, create a basic "hello world" application in a file called hello.java:
```
class Main {
    public static void main(String[] args) {
        System.out.println("Hello Java World");
    }
}
```
It's a simple application that somewhat trivializes the real-world importance of managing external dependencies. Still, it's enough to get started with the basic concepts you need to create a JAR.
Next, create a manifest file. A manifest file describes the Java environment of the JAR. In this case, the most important information is identifying the main class, so the Java runtime executing the JAR knows where to find the application's entry point. 
```
$ mkdir META-INF
$ echo "Main-Class: Main" > META-INF/MANIFEST.MF
```
### Compiling Java bytecode
Next, compile your Java file into Java bytecode.
```
$ javac hello.java
```
Alternately, you can use the Java component of GCC to compile:
```
$ gcj -C hello.java
```
Either way, this produces the file `Main.class`:
```
$ file Main.class
Main.class: compiled Java class data, version XX.Y
```
### Creating a JAR 
You have all the components you need so that you can create the JAR file.
I often include the Java source code as a reference for curious users, but all that's _required_ is the `META-INF` directory and the class files.
The `fastjar` command uses syntax similar to the [`tar` command][6].
```
$ fastjar cvf hello.jar META-INF Main.class
```
Alternately, you can use `gjar` in much the same way, except that `gjar` requires you to specify your manifest file explicitly:
```
$ gjar cvf world.jar Main.class -m META-INF/MANIFEST.MF
```
Or you can use the `jar` command. Notice this one doesn't require a Manifest file because it auto-generates one for you, but for safety I define the main class explicitly:
```
$ jar --create --file hello.jar --main-class=Main Main.class
```
Test your application:
```
$ java -jar hello.jar
Hello Java World
```
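The same archive can also be produced programmatically with the JDK's standard `java.util.jar` API, which is what these command-line tools do under the hood. The sketch below is illustrative, not part of the article's workflow: the `MakeJar` class name is made up, and it falls back to a placeholder payload so it runs even without the `Main.class` produced by `javac` above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

class MakeJar {
    public static void main(String[] args) throws IOException {
        // The same manifest we wrote to META-INF/MANIFEST.MF by hand.
        // Manifest-Version is mandatory; without it, nothing is written.
        Manifest manifest = new Manifest();
        manifest.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        manifest.getMainAttributes().put(Attributes.Name.MAIN_CLASS, "Main");

        // Use the real Main.class from the javac step if present; otherwise
        // write a placeholder payload so this sketch runs on its own.
        Path classFile = Path.of("Main.class");
        byte[] payload = Files.exists(classFile)
                ? Files.readAllBytes(classFile)
                : new byte[] {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE};

        // JarOutputStream writes META-INF/MANIFEST.MF for us from the Manifest.
        try (JarOutputStream jar = new JarOutputStream(
                Files.newOutputStream(Path.of("hello.jar")), manifest)) {
            jar.putNextEntry(new JarEntry("Main.class"));
            jar.write(payload);
            jar.closeEntry();
        }

        // Read the archive back and confirm the entry point.
        try (JarFile jf = new JarFile("hello.jar")) {
            System.out.println("Main-Class: "
                    + jf.getManifest().getMainAttributes().getValue("Main-Class"));
        }
    }
}
```

Run next to the `Main.class` you compiled earlier, it prints `Main-Class: Main`, confirming the entry point landed in the manifest.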
### Easy packaging
Utilities like `fastjar`, `gjar`, and `jar` help you manually or programmatically build JAR files, while other toolchains such as Maven and Gradle offer features for dependency management. A good IDE may integrate one or more of these features.
Whatever solution you use, Java provides an easy and unified target for distributing your application code.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/fastjar
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
[2]: https://opensource.com/article/19/11/install-java-linux
[3]: https://adoptopenjdk.net/
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[6]: https://opensource.com/article/17/7/how-unzip-targz-file

[#]: subject: "below: a time traveling resource monitor"
[#]: via: "https://fedoramagazine.org/below-a-time-traveling-resource-monitor/"
[#]: author: "Daniel Xu https://fedoramagazine.org/author/dxuu/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
below: a time traveling resource monitor
======
![][1]
In this article, we introduce _below_: an Apache 2.0 licensed resource monitor for modern Linux systems. _below_ allows you to replay previously recorded data.
### Background
One of the kernel's primary responsibilities is mediating access to resources. Sometimes this might mean parceling out physical memory such that multiple processes can share the same host. Other times it might mean ensuring equitable distribution of CPU time. In all these contexts, the kernel provides the mechanism and leaves the policy to “someone else”. In more recent times, this “someone else” is usually a runtime like systemd or dockerd. The runtime takes input from a scheduler or end user — something along the lines of what to run and how to run it — and turns the right knobs and pulls the right levers on the kernel such that the workload can — well — get to work.
In a perfect world this would be the end of the story. However, the reality is that resource management is a complex and rather opaque amalgam of technologies that has evolved over decades of computing. Despite some of this technology having various warts and dead ends, the end result — a container — works relatively well. While the user does not usually need to concern themselves with the details, it is crucial for infrastructure operators to have visibility into their stack. Visibility and debuggability are essential for detecting and investigating misconfigurations, bugs, and systemic issues.
To make matters more complicated, resource outages are often difficult to reproduce. It is not unusual to spend weeks waiting for an issue to reoccur so that the root cause can be investigated. Scale further compounds this issue: one cannot run a custom script on _every_ host in the hopes of logging bits of crucial state if the bug happens again. Therefore, more sophisticated tooling is required. Enter _below_.
### Motivation
Historically Facebook has been a heavy user of _atop_ [0]. _atop_ is a performance monitor for Linux that is capable of reporting the activity of all processes as well as various pieces of system level activity. One of the most compelling features _atop_ has over tools like _htop_ is the ability to record historical data as a daemon. This sounds like a simple feature, but in practice this has enabled debugging countless production issues. With long enough data retention, it is possible to go backwards in time and look at the host state before, during, and after the issue or outage.
Unfortunately, it became clear over the years that _atop_ had certain deficiencies. First, cgroups [1] have emerged as the de facto way to control and monitor resources on a Linux machine. _atop_ still lacks support for this fundamental building block. Second, _atop_ stores data on disk with custom delta compression. This works fine under normal circumstances, but under heavy resource pressure the host is likely to lose data points. Since delta compression is in use, huge swaths of data can be lost for the periods of time where the data is most important. Third, the user experience has a steep learning curve. We frequently heard from _atop_ power users that they love the dense layout and numerous keybindings. However, this is a double-edged sword. When someone new to the space wants to debug a production issue, they're solving two problems at once: the issue at hand and how to use _atop_.
_below_ was designed and developed by and for the resource control team at Facebook with input from production _atop_ users. The resource control team is responsible for, as the name suggests, resource management at scale. The team is comprised of kernel developers, container runtime developers, and hardware folks. Recognizing the opportunity for a next-generation system monitor, we designed _below_ with the following in mind:
* Ease of use: _below_ must be both intuitive for new users as well as powerful for daily users
* Opinionated statistics: _below_ displays accurate and useful statistics. We try to avoid collecting and dumping stats just because we can.
* Flexibility: when the default settings are not enough, we allow the user to customize their experience. Examples include configurable keybindings, configurable default view, and a scripting interface (the default being a terminal user interface).
### Install
To install the package:
```
# dnf install -y below
```
To turn on the recording daemon:
```
# systemctl enable --now below
```
### Quick tour
_below_'s most commonly used mode is replay mode. As the name implies, replay mode replays previously recorded data. Assuming you've already started the recording daemon, start a session by running:
```
$ below replay --time "5 minutes ago"
```
You will then see the cgroup view:
![][2]
If you get stuck or forget a keybinding, press **?** to access the help menu.
The very top of the screen is the status bar. The status bar displays information about the current sample. You can move forwards and backwards through samples by pressing **t** and **T**, respectively. The middle section is the system overview. The system overview contains statistics about the system as a whole that are generally always useful to see. The third and lowest section is the multipurpose view. The image above shows the cgroup view. Additionally, there are process and system views, accessible by pressing **p** and **s**, respectively.
Press **↑** and **↓** to move the list selection. Press **<Enter>** to collapse and expand cgroups. Suppose you've found an interesting cgroup and you want to see what processes are running inside it. To zoom into the process view, select the cgroup and press **z**:
![][3]
Press **z** again to return to the cgroup view. The cgroup view can be somewhat long at times. If you have a vague idea of what you're looking for, you can filter by cgroup name by pressing **/** and entering a filter:
![][4]
At this point, you may have noticed a tab system we haven't explored yet. To cycle forwards and backwards through tabs, press **<Tab>** and **<Shift> + <Tab>**, respectively. We'll leave this as an exercise to the reader.
### Other features
Under the hood, _below_ has a powerful design and architecture. Facebook is constantly upgrading to newer kernels, so we never assume a data source is available. This enables total backwards and forwards compatibility between kernels and _below_ versions. Furthermore, each data point is zstd compressed and stored in full. This solves the issues with delta compression we've seen _atop_ have at scale. Based on our tests, our per-sample compression can achieve on average a 5x compression ratio.
_below_ also uses eBPF [2] to collect information about short-lived processes (processes that live for shorter than the data collection interval). In contrast, _atop_ implements this feature with BSD process accounting, a known slow and priority-inversion-prone kernel interface.
For the user, _below_ also supports live-mode and a dump interface. Live mode combines the recording daemon and the TUI session into one process. This is convenient for browsing system state without committing to a long running daemon or disk space for data storage. The dump interface is a scriptable interface to all the data _below_ stores. Dump is both powerful and flexible — detailed data is available in CSV, JSON, and human readable format.
### Conclusion
_below_ is an Apache 2.0 licensed open source project that we (the _below_ developers) think offers compelling advantages over existing tools in the resource monitoring space. We've spent a great deal of effort preparing _below_ for open source use, so we hope that readers and the community get a chance to try _below_ out and report back with bugs and feature requests.
[0]: <https://www.atoptool.nl/>
[1]: <https://en.wikipedia.org/wiki/Cgroups>
[2]: <https://ebpf.io/>
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/below-a-time-traveling-resource-monitor/
作者:[Daniel Xu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dxuu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/below_resource_monitor-816x345.jpg
[2]: https://fedoramagazine.org/wp-content/uploads/2021/08/image-1024x800.png
[3]: https://fedoramagazine.org/wp-content/uploads/2021/08/image-1-1024x800.png
[4]: https://fedoramagazine.org/wp-content/uploads/2021/08/image-2-1024x800.png

[#]: subject: "Check free disk space in Linux with ncdu"
[#]: via: "https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Check free disk space in Linux with ncdu
======
Get an interactive report about disk usage with the ncdu Linux command.
![Check disk usage][1]
Computer users tend to amass a lot of data over the years, whether it's important personal projects, digital photos, videos, music, or code repositories. While hard drives tend to be pretty big these days, sometimes you have to step back and take stock of what you're actually storing on your drives. The classic Linux commands [`df`][2] and [`du`][3] are quick ways to gain insight about what's on your drive, and they provide a reliable report that's easy to parse and process. That's great for scripting and processing, but the human brain doesn't always respond well to hundreds of lines of raw data. In recognition of this, the `ncdu` command aims to provide an interactive report about the space you're using on your hard drive.
### Installing ncdu on Linux
On Linux, you can install `ncdu` from your software repository. For instance, on Fedora or CentOS:
```
$ sudo dnf install ncdu
```
On BSD, you can use [pkgsrc][4].
On macOS, you can install from [MacPorts][5] or [HomeBrew][6].
Alternately, you can [compile ncdu from source code][7].
### Using ncdu
The interface of `ncdu` uses the ncurses library, which turns your terminal window into a rudimentary graphical application so you can use the Arrow keys to navigate visual menus.
![ncdu interface][8]
CC BY-SA Seth Kenlon
That's one of the main appeals of `ncdu`, and what sets it apart from the original `du` command.
To get a complete listing of a directory, launch `ncdu`. It defaults to the current directory.
```
$ ncdu
ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help                                                                  
--- /home/tux -----------------------------------------------
   22.1 GiB [##################] /.var                                                                                        
   19.0 GiB [###############   ] /Iso
   10.0 GiB [########          ] /.local
    7.9 GiB [######            ] /.cache
    3.8 GiB [###               ] /Downloads
    3.6 GiB [##                ] /.mail
    2.9 GiB [##                ] /Code
    2.8 GiB [##                ] /Documents
    2.3 GiB [#                 ] /Videos
[...]
```
The listing shows the largest directory first (in this example, that's the `~/.var` directory, full of many, many flatpaks).
Using the Arrow keys on your keyboard, you can navigate through the listing to move deeper into a directory so you can gain better insight into what's taking up the most space.
### Get the size of a specific directory
You can run `ncdu` on an arbitrary directory by providing the path of a folder when launching it:
```
$ ncdu ~/chromiumos
```
### Excluding directories
By default, `ncdu` includes everything it can, including symbolic links and pseudo-filesystems such as procfs and sysfs. You can exclude these with the `--exclude-kernfs` option.
You can exclude arbitrary files and directories using the `--exclude` option, followed by a pattern to match.
```
$ ncdu --exclude ".var"
   19.0 GiB [##################] /Iso                                                                                          
   10.0 GiB [#########         ] /.local
    7.9 GiB [#######           ] /.cache
    3.8 GiB [###               ] /Downloads
[...]
```
Alternately, you can list files and directories to exclude in a file, and cite the file using the `--exclude-from` option:
```
$ ncdu --exclude-from myexcludes.txt /home/tux                                                                                    
   10.0 GiB [#########         ] /.local
    7.9 GiB [#######           ] /.cache
    3.8 GiB [###               ] /Downloads
[...]
```
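The contents of `myexcludes.txt` aren't shown in the article; assuming ncdu's one-pattern-per-line format for `--exclude-from`, a file matching the output above (with `.var` and `Iso` filtered out) would look like this:

```
.var
Iso
```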
### Color scheme
You can add some color to ncdu with the `--color dark` option.
![ncdu color scheme][9]
CC BY-SA Seth Kenlon
### Including symlinks
The `ncdu` output treats symlinks literally, meaning that a symlink pointing to a 9 GB file takes up just 40 bytes.
```
$ ncdu ~/Iso
    9.3 GiB [##################]  CentOS-Stream-8-x86_64-20210427-dvd1.iso                                                    
@   0.0   B [                  ]  fake.iso
```
You can force ncdu to follow symlinks with the `--follow-symlinks` option:
```
$ ncdu --follow-symlinks ~/Iso
    9.3 GiB [##################]  fake.iso                                                                                    
    9.3 GiB [##################]  CentOS-Stream-8-x86_64-20210427-dvd1.iso
```
### Disk usage
It's not fun to run out of disk space, so monitoring your disk usage is important. The `ncdu` command makes it easy and interactive. Try `ncdu` the next time you're curious about what you've got stored on your PC, or just to explore your filesystem in a new way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/du-splash.png?itok=nRLlI-5A (Check disk usage)
[2]: https://opensource.com/article/21/7/check-disk-space-linux-df
[3]: https://opensource.com/article/21/7/check-disk-space-linux-du
[4]: https://opensource.com/article/19/11/pkgsrc-netbsd-linux
[5]: https://opensource.com/article/20/11/macports
[6]: https://opensource.com/article/20/6/homebrew-mac
[7]: https://dev.yorhel.nl/ncdu
[8]: https://opensource.com/sites/default/files/ncdu.jpg (ncdu interface)
[9]: https://opensource.com/sites/default/files/ncdu-dark.jpg (ncdu color scheme)

[#]: subject: "Debian vs Ubuntu: Whats the Difference? Which One Should You Use?"
[#]: via: "https://itsfoss.com/debian-vs-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Debian vs Ubuntu: What's the Difference? Which One Should You Use?
======
You can [use apt-get commands][1] for managing applications in both Debian and Ubuntu. You can install DEB packages in both distributions as well. Many times, you'll find common package installation instructions for both distributions.
So, what's the difference between the two, if they are so similar?
Debian and Ubuntu belong to the same side of the distribution spectrum. Debian is the original distribution created by Ian Murdock in 1993. Ubuntu was created in 2004 by Mark Shuttleworth and it is based on Debian.
### Ubuntu is based on Debian: What does it mean?
While there are hundreds of Linux distributions, only a handful of them are independent ones, created from scratch. [Debian][2], Arch, Red Hat are some of the biggest distributions that do not derive from any other distribution.
Ubuntu is derived from Debian. It means that Ubuntu uses the same APT packaging system as Debian and shares a huge number of packages and libraries from Debian repositories. It utilizes the Debian infrastructure as its base.
![Ubuntu uses Debian as base][3]
That's what most derived distributions do. They use the same package management system and share packages with the base distribution. But they also add some packages and changes of their own. And that is how Ubuntu is different from Debian despite being derived from it.
### Difference between Ubuntu and Debian
So, Ubuntu is built on Debian's architecture and infrastructure and uses .DEB packages, the same as Debian.
Does it mean using Ubuntu is the same as using Debian? Not quite so. There are many more factors involved that distinguish one distribution from the other.
Let me discuss these factors one by one to compare Ubuntu and Debian. Please keep in mind that some comparisons are applicable to desktop editions while some apply to the server editions.
![][4]
#### 1\. Release cycle
Ubuntu has two kinds of releases: LTS and regular. An [Ubuntu LTS (long-term support) release][5] comes out every two years and gets support for five years. You have the option to upgrade to the next available LTS release. The LTS releases are considered more stable.
There are also non-LTS releases every six months. These releases are supported for nine months only, but they have newer software versions and features. You have to upgrade to the next Ubuntu version when the current one reaches end of life.
So basically, you have the option to choose between stability and new features based on these releases.
On the other hand, Debian has three different releases: Stable, Testing and Unstable. Unstable is for actual testing and should be avoided.
The testing branch is not that unstable. It is used for preparing the next stable branch. Some Debian users prefer the testing branch to get newer features.
And then comes the stable branch. This is the main Debian release. It may not have the latest software and features, but when it comes to stability, Debian Stable is rock solid.
There is a new stable release every two years and it is supported for a total of three years. After that, you have to upgrade to the next available stable release.
#### 2\. Software freshness
![][6]
Debian's focus on stability means that it does not always aim for the latest versions of the software. For example, the latest Debian 11 features GNOME 3.38, not the latest GNOME 3.40.
The same goes for other software like GIMP, LibreOffice, etc. This is a compromise you have to make with Debian. This is why the “Debian stable = Debian stale” joke is popular in the Linux community.
Ubuntu LTS releases also focus on stability. But they usually have more recent versions of the popular software.
You should note that for _some software_, installing from the developer's repository is also an option. For example, if you want the latest Docker version, you can add the Docker repository in both Debian and Ubuntu.
Overall, software in Debian Stable often has older versions when compared to Ubuntu.
#### 3\. Software availability
Both Debian and Ubuntu have huge software repositories. However, [Ubuntu also has PPAs][7] (Personal Package Archives). With a PPA, installing newer software or getting the latest software version becomes a bit easier.
![][8]
You may try using PPAs in Debian, but it won't be a smooth experience. You'll encounter issues most of the time.
#### 4\. Supported platforms
Ubuntu is available on 64-bit x86 and ARM platforms. It does not provide 32-bit ISO anymore.
Debian, on the other hand, supports both 32-bit and 64-bit architectures. Apart from that, Debian also supports 64-bit ARM (arm64), ARM EABI (armel), ARMv7 (EABI hard-float ABI, armhf), little-endian MIPS (mipsel), 64-bit little-endian MIPS (mips64el), 64-bit little-endian PowerPC (ppc64el), and IBM System z (s390x).
No wonder it is called the universal operating system.
#### 5\. Installation
[Installing Ubuntu][9] is a lot easier than installing Debian. I am not kidding. Debian can be confusing even for intermediate Linux users.
When you download Debian, it provides a minimal ISO by default. This ISO has no non-free (not open source) firmware. You go on to install it and realize that your network adapters and other hardware won't be recognized.
There is a separate non-free ISO that contains firmware but it is hidden and if you do not know that, you are in for a bad surprise.
![Getting non-free firmware is a pain in Debian][10]
Ubuntu is a lot more forgiving when it comes to including proprietary drivers and firmware in the default ISO.
Also, the Debian installer looks dated, whereas the Ubuntu installer looks modern. The Ubuntu installer also recognizes other installed operating systems on the disk and gives you the option to install Ubuntu alongside the existing ones (dual boot). I have not noticed this with the Debian installer in my testing.
![Installing Ubuntu is smoother][11]
#### 6\. Out of the box hardware support
As mentioned earlier, Debian focuses primarily on [FOSS][12] (free and open source software). This means that the kernel provided by Debian does not include proprietary drivers and firmware.
It's not that you cannot make it work, but you'll have to add/enable additional repositories and install everything manually. This can be discouraging, especially for beginners.
Ubuntu is not perfect but it is a lot better than Debian for providing drivers and firmware out of the box. This means less hassle and a more complete out-of-the-box experience.
#### 7\. Desktop environment choices
Ubuntu uses a customized GNOME desktop environment by default. You may install [other desktop environments][13] on top of it or opt for [various desktop based Ubuntu flavors][14] like Kubuntu (for KDE), Xubuntu (for Xfce) etc.
Debian also installs GNOME by default. But its installer gives you the choice to pick the desktop environment you want during the installation process.
![][15]
You may also get [DE specific ISO images from its website][16].
#### 8\. Gaming
Gaming on Linux has improved in general thanks to Steam and its Proton project. Still, gaming depends a lot on hardware.
And when it comes to hardware compatibility, Ubuntu is better than Debian for supporting proprietary drivers.
Not that it cannot be done in Debian but it will require some time and effort to achieve that.
#### 9\. Performance
There is no clear winner in the performance section, whether it is on the server or on the desktop. Both Debian and Ubuntu are popular as desktop as well as server operating systems.
The performance depends on your system's hardware and the software components you use. You can tweak and control your system in both operating systems.
#### 10\. Community and support
Debian is a true community project. Everything about this project is governed by its community members.
Ubuntu is backed by [Canonical][17]. However, it is not entirely a corporate project. It does have a community, but the final decision on any matter is in Canonical's hands.
As far as support goes, both Ubuntu and Debian have dedicated forums where users can seek help and advice.
Canonical also offers paid professional support to its enterprise clients. Debian has no such offering.
### Conclusion
Both Debian and Ubuntu are solid choices for desktop or server operating systems. The apt package manager and DEB packaging are common to both, giving a somewhat similar experience.
However, Debian still needs a certain level of expertise, especially on the desktop front. If you are new to Linux, sticking with Ubuntu will be the better choice for you. In my opinion, you should gain some experience, get familiar with Linux in general, and then try your hand at Debian.
It's not that you cannot jump onto the Debian wagon from the start, but it is more likely to be an overwhelming experience for Linux beginners.
**Your opinion on this Debian vs Ubuntu debate is welcome.**
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-vs-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/apt-get-linux-guide/
[2]: https://www.debian.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Debian-ubuntu-upstream.png?resize=800%2C400&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-vs-ubuntu.png?resize=800%2C450&ssl=1
[5]: https://itsfoss.com/long-term-support-lts/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/apt-cache-policy.png?resize=795%2C456&ssl=1
[7]: https://itsfoss.com/ppa-guide/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ffmpeg_add_ppa.jpg?resize=800%2C222&ssl=1
[9]: https://itsfoss.com/install-ubuntu/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Debian-firmware.png?resize=800%2C600&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/choose-something-else-installing-ubuntu.png?resize=800%2C491&ssl=1
[12]: https://itsfoss.com/what-is-foss/
[13]: https://itsfoss.com/best-linux-desktop-environments/
[14]: https://itsfoss.com/which-ubuntu-install/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-install-desktop-environment.png?resize=640%2C479&ssl=1
[16]: https://cdimage.debian.org/debian-cd/current-live/amd64/iso-hybrid/
[17]: https://canonical.com/

[#]: subject: "Short option parsing using getopt in C"
[#]: via: "https://opensource.com/article/21/8/short-option-parsing-c"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Short option parsing using getopt in C
======
Use the command line to make your programs more flexible by allowing
users to tell them what to do.
![Person programming on a laptop on a building][1]
Writing a C program to process files is easy when you already know what files you'll operate on and what actions to take. If you "hard code" the filename into your program, or if your program is coded to do things only one way, then your program will always know what to do.
But you can make your program much more flexible if it can respond to the user every time the program runs. Let your user tell your program what files to use or how to do things differently. And for that, you need to read the command line.
### Reading the command line
When you write a program in C, you might start with the declaration:
```
int main()
```
That's the simplest way to start a C program. But if you add these standard parameters in the parentheses, your program can read the options given to it on the command line:
```
int main(int argc, char **argv)
```
The `argc` variable is the argument count or the number of arguments on the command line. This will always be a number that's at least one.
The `argv` variable is a double pointer, an array of strings, that contains the arguments from the command line. The first entry in the array, `*argv[0]`, is always the name of the program. The other elements of the `**argv` array contain the rest of the command-line arguments.
I'll write a simple program to echo back the options given to it on the command line. This is similar to the Linux `echo` command, except it also prints the name of the program. It also prints each command-line option on its own line using the `puts` function:
```
#include <stdio.h>

int
main(int argc, char **argv)
{
  int i;

  printf("argc=%d\n", argc); /* debugging */

  for (i = 0; i < argc; i++) {
    puts(argv[i]);
  }

  return 0;
}
```
Compile this program and run it with some command-line options, and you'll see your command line printed back to you, each item on its own line:
```
$ ./echo this program can read the command line
argc=8
./echo
this
program
can
read
the
command
line
```
This command line sets the program's `argc` to `8`, and the `**argv` array contains eight entries: the name of the program, plus the seven words the user entered. And as always in C programs, the array starts at zero, so the elements are numbered 0, 1, 2, 3, 4, 5, 6, 7. That's why you can process the command line with the `for` loop using the comparison `i < argc`.
You can use this to write your own versions of the Linux `cat` or `cp` commands. The `cat` command's basic functionality displays the contents of one or more files. Here's a simple version of `cat` that reads the filenames from the command line:
```
#include <stdio.h>

void
copyfile(FILE *in, FILE *out)
{
  int ch;

  while ((ch = fgetc(in)) != EOF) {
    fputc(ch, out);
  }
}

int
main(int argc, char **argv)
{
  int i;
  FILE *fileptr;

  for (i = 1; i < argc; i++) {
    fileptr = fopen(argv[i], "r");

    if (fileptr != NULL) {
      copyfile(fileptr, stdout);
      fclose(fileptr);
    }
  }

  return 0;
}
```
This simple version of `cat` reads a list of filenames from the command line and displays the contents of each file to the standard output, one character at a time. For example, if I have one file called `hello.txt` that contains a few lines of text, I can display its contents with my own `cat` command:
```
$ ./cat hello.txt
Hi there!
This is a sample text file.
```
Using this sample program as a starting point, you can write your own versions of other Linux commands, such as the `cp` program, by reading only two filenames: one file to read from and another file to write the copy.
### Reading command-line options
Reading filenames and other text from the command line is great, but what if you want your program to change its behavior based on the _options_ the user gives it? For example, the Linux `cat` command supports several command-line options, including:
* `-b` Put line numbers next to non-blank lines
* `-E` Show the ends of lines as `$`
  * `-n` Put line numbers next to all lines
* `-s` Suppress printing repeated blank lines
* `-T` Show tabs as `^I`
  * `-v` Verbose; show non-printing characters using `^x` and `M-x` notation, except for newlines and tabs
These _single-letter_ options are called _short options_, and they always start with a single hyphen character. You usually see these short options written separately, such as `cat -E -n`, but you can also combine the short options into a single _option string_ such as `cat -En`.
Fortunately, there's an easy way to read these from the command line. All Linux and Unix systems include a special C library called `getopt`, defined in the `unistd.h` header file. You can use `getopt` in your program to read these short options.
Unlike other Unix systems, `getopt` on Linux will always ensure your short options appear at the front of your command line. For example, say a user types `cat -E file -n`. The `-E` option is upfront, but the `-n` option is after the filename. But if you use Linux `getopt`, your program will always behave as though the user types `cat -E -n file`. That makes processing a breeze because `getopt` can parse the short options, leaving you a list of filenames on the command line that your program can read using the `**argv` array.
You use `getopt` like this:
```
#include <unistd.h>

int getopt(int argc, char **argv, char *optstring);
```
The option string `optstring` contains a list of the valid option characters. If your program only allows the `-E` and `-n` options, you use `"En"` as your option string.
You usually use `getopt` in a loop to parse the command line for options. At each `getopt` call, the function returns the next short option it finds on the command line or the value `'?'` for any unrecognized short options. When `getopt` can't find any more short options, it returns `-1` and sets the global variable `optind` to the next element in `**argv` after all the short options.
Let's look at a simple example. This demo program isn't a full replacement of `cat` with all the options, but it can parse its command line. Every time it finds a valid command-line option, it prints a short message to indicate it was found. In your own programs, you might instead set a variable or take some other action that responds to that command-line option:
```
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
  int i;
  int option;

  /* parse short options */

  while ((option = getopt(argc, argv, "bEnsTv")) != -1) {
    switch (option) {
    case 'b':
      puts("Put line numbers next to non-blank lines");
      break;
    case 'E':
      puts("Show the ends of lines as $");
      break;
    case 'n':
      puts("Put line numbers next to all lines");
      break;
    case 's':
      puts("Suppress printing repeated blank lines");
      break;
    case 'T':
      puts("Show tabs as ^I");
      break;
    case 'v':
      puts("Verbose");
      break;
    default:                          /* '?' */
      puts("What's that??");
    }
  }

  /* print the rest of the command line */

  puts("------------------------------");

  for (i = optind; i < argc; i++) {
    puts(argv[i]);
  }

  return 0;
}
```
If you compile this program as `args`, you can try out different command lines to see how they parse the short options and always leave you with the rest of the command line. In the simplest case, with all the options up front, you get this:
```
$ ./args -b -T file1 file2
Put line numbers next to non-blank lines
Show tabs as ^I
------------------------------
file1
file2
```
Now try the same command line but combine the two short options into a single option string:
```
$ ./args -bT file1 file2
Put line numbers next to non-blank lines
Show tabs as ^I
------------------------------
file1
file2
```
If necessary, `getopt` can "reorder" the command line to deal with short options that are out of order:
```
$ ./args -E file1 file2 -T
Show the ends of lines as $
Show tabs as ^I
------------------------------
file1
file2
```
If your user gives an incorrect short option, `getopt` prints a message:
```
$ ./args -s -an file1 file2
Suppress printing repeated blank lines
./args: invalid option -- 'a'
What's that??
Put line numbers next to all lines
------------------------------
file1
file2
```
### Download the cheat sheet
`getopt` can do lots more than what I've shown. For example, short options can take their own arguments, such as `-s string` or `-f file`. You can also tell `getopt` not to display error messages when it finds unrecognized options. Read the `getopt(3)` manual page using `man 3 getopt` to learn more about what `getopt` can do for you.
If you're looking for gentle reminders on the syntax and structure of `getopt()` and `getopt_long()`, [download my getopt cheat sheet][8]. One page demonstrates short options, and the other side demonstrates long options with minimum viable code and a listing of the global variables you need to know.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/short-option-parsing-c
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fgetc.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
[8]: https://opensource.com/downloads/c-getopt-cheat-sheet


@ -0,0 +1,143 @@
[#]: subject: "3 steps for managing a beginner-friendly open source community"
[#]: via: "https://opensource.com/article/21/8/beginner-open-source-community"
[#]: author: "Isabel Costa https://opensource.com/users/isabelcmdcosta"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
3 steps for managing a beginner-friendly open source community
======
As a member of an open source project, there's a lot you can do to help
beginners find a way to contribute. 
![Working from home at a laptop][1]
When someone is new to contributing to open source, the best place to start is often beginner-friendly bugs and issues. But before they can do that, they have to be able to find those kinds of issues. As a member of an open source project, there's a lot you can do to help beginners find a way to contribute. 
Bearing this in mind, the [AnitaB.org open source community][2] prioritizes making our community beginner-friendly. We have initiatives to ensure that we're inclusive for contributors at different levels of experience and for different types of contributions that don't only relate to coding.
I recently presented some of the community work we do at the [AnitaB.org][3] community at [Upstream 2021][4], the Tidelift event, which kicked off Maintainer Week, a weeklong celebration of open source maintainers. I discussed how there are three main parts to our strategy:
* How we communicate
* Projects and issues
* Open source programs
### How we communicate
Transparency is an essential part of open source, and we apply transparency principles to our approach to communication. In practical terms, this means that all our community sessions are run openly, and it shapes how we've set up our Zulip chat and how we provide documentation.
#### **Open sessions**
Anyone can join our sessions and discuss topics related to our community. They can participate in discussions or just listen. These are available for everyone to see in our community calendar. We usually only use audio in these calls, which we've found can make people feel more comfortable participating.
We host project-focused sessions and a couple of category-related sessions, where people from different areas can discuss the same project and help improve our processes. Occasionally, we have "Ask Me Anything" sessions, where anyone can come and ask questions about anything related to open source.
We take notes of all sessions in a shared document and share the summary and a document link in [our Zulip][5].
#### **Our Zulip chat**
The open source Zulip chat platform is our primary community communication channel, although we also use the comments sections on issues and pull requests on GitHub. In general, we have disabled private messaging to make sure we are as transparent as possible. We have only a few exceptions to this rule, where we have private streams for admins dealing with the logistics of the programs we run. We've found this approach is more welcoming, and it also gives us more visibility into conduct violations in the public chat.
We share all session summaries on the Zulip chat, including the main points discussed, action items, and documentation. This process might sound like an obvious requirement, but I've been surprised at how many open source projects don't provide notes so that people who did not attend can remain informed.
On Zulip, we discuss project roadmaps, answer questions and queries from the community, and actively **promote ways for people to contribute and where they can contribute**. Sometimes we celebrate contributors' wins—whether it's highlighting the first PR someone has tested or reviewed, or the excellent work our volunteers do.
#### **Documentation**
We try to keep **open documentation about our processes**, such as FAQs, so that community members can learn about the community at their own pace and in their own time. This is intended to give them an idea of how we work and what type of work we do before reaching out to us.
### Projects and issues
Regarding our projects and issues management, we encourage multiple ways to contribute, create specific issues for first-timers only, and try to have an easy setup for projects.
#### **Multiple ways to contribute**
We make an effort to create **issues that require different contributions** such as documentation, testing, design, and outreach. This is to provide ways for anyone to contribute regardless of their experience level and area of interest. It helps the community get involved, and we've found that it enables members to work their way up and contribute to some low-effort but valuable tasks.
Types of contributions we promote are:
* Coding tasks that range in complexity.
* Quality assurance tasks—where contributors can test our apps or pull requests and report bugs.
* Design sessions where members can participate in discussions. Also, opportunities to create mock-ups and redesign parts of our apps, and explore user experience improvements.
  * Outreach tasks, which we primarily promote on Zulip, such as blogging to our Medium publication about open source experiences and contributions.
* Documentation tasks that can include general community documentation or our project's documentation on Docusaurus.
#### **First-timers only issues**
We label some **issues as "first-timers only."** These are for people who have not yet contributed to the issue's repository. Labeling issues also enables us to have work available for people beginning their open source journey during times of contributor influx, for example, during [Google Summer of Code (GSoC)][6].
Sometimes these might be "low-hanging fruit" that can get them acquainted with the process of contributing and submitting pull requests.
#### **Easy project setup**
We also care about having a **beginner-friendly setup** for our projects. We notice that the most active project is generally the easiest to set up. We know that contributing to a project you aren't familiar with can take a lot of effort and make or break the experience of contributing.
We try to provide instructions on how to run our projects on multiple operating systems. In the past, we had some projects with separate instructions to run on Unix environments, and we noticed contributors having difficulties running these projects on Windows. We've improved since then to avoid confusion among contributors who would ask for help on our Zulip.
We have been improving the README for one of our most active projects, [mentorship-backend][7], according to contributors' experience. One of the struggles for beginners in this project was setting the environment variables for configuring an email account, which the backend needs in order to send emails. Because this feature was not critical for local development, we made the email setup optional by default, so that emails are printed to the terminal instead of being sent to users. This approach still makes the emails visible to the contributor. Similarly, we made [the SQLite database][8] the default for local development to avoid the additional setup of a Postgres database, even though we use Postgres in our deployed version.
We have noticed that some contributors have struggled to contribute to one of our projects, [bridge-in-tech-backend][9], where its setup is complicated and includes many more steps than [mentorship-backend][7]. Since we noticed this in one of our open source programs, we have been exploring improving its structure.
For most of our projects, we also provide a live or bundled version of the apps so that contributors can test the project without setting it up. This helps us provide a way for contributors who are not interested or familiar with the development setup to try the most recent version of our apps and contribute by reporting any bugs found. We have the links to these apps deployed on our [Quality Assurance guide][10].
### Open source programs
We organize two main programs in our community: Open Source Hack (a one-month program) and Open Source Ambassadors (a six-month program).
#### **Open Source Hack (OSH)**
In this program, we create issues in multiple categories of contributions—Documentation, Coding, Outreach, Testing, and Design (similar to the [Google Code-in][11] contest). Participants can contribute and receive digital certificates for contributing at least once to each category. One issue may include multiple categories, and the pull requests don't need to be merged for their contributions to be valid.
We select a few projects for this program, then mentors brainstorm and create issues for participants. When the program starts, participants can claim issues and begin contributing. Mentors support and review their contributions.
This approach encourages diversity of contributions and welcomes anyone, regardless of their coding ability, to contribute in a friendly and fail-safe environment.
#### **Open Source Ambassadors**
In this program, we select ambassadors from the community that ideally will cover each category of contributions we aim to promote. We've run this program twice so far.
The program aims to have members grow in helping manage projects and initiatives by responding to questions from the community, assisting contributors to get involved, and advocating for their assigned category.
In the first program we ran, we accepted anyone who applied. We assessed where members' interests lay and provided a structure for people who wanted to contribute but were initially uncomfortable with taking that step.
This edition was very enlightening for us as a community. It required a lot of management from admins, as we had a mix of experienced and inexperienced open source contributors and community members. Some ambassadors were confident in stepping up and leading initiatives, while others needed more support. For our second program, we decided to scale down the initiative. We only accepted contributors who were already familiar with the community and could lead on initiatives and projects and help us train the less experienced.
The second program became a positive feedback loop. Ambassadors who started as beginners, contributing to the first program we ran, became comfortable leading after learning from their experience with the program.
This change of approach enabled admins to focus more on supporting the ambassadors' team, helping them propagate our mission and continue making the community beginner-friendly, and mentoring more people to contribute.
### Summary
These programs have helped us bring awareness to different ways to contribute and give back to open source. Through these, we've found volunteers helping by managing projects and hosting open sessions, which contributes to managing the community and providing mentorship to our contributors.
Even though we have had a good response from contributors and helped people make their first contributions, we still have a lot of room for improvement. We will continue to enhance our project's setup and contribution guidelines to improve contributors' experience. We'll also continue to focus on making sure we always have and promote available issues across the organization and in different categories to promote an inclusive environment so that anyone who wishes to can contribute.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/beginner-open-source-community
作者:[Isabel Costa][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/isabelcmdcosta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://github.com/anitab-org
[3]: https://anitab.org/
[4]: https://youtu.be/l8r50jCr-Yo
[5]: https://anitab-org.zulipchat.com/
[6]: https://summerofcode.withgoogle.com/
[7]: https://github.com/anitab-org/mentorship-backend#readme
[8]: https://opensource.com/article/21/2/sqlite3-cheat-sheet
[9]: https://github.com/anitab-org/bridge-in-tech-backend
[10]: https://github.com/anitab-org/documentation/blob/master/quality-assurance.md
[11]: https://codein.withgoogle.com/


@ -0,0 +1,119 @@
[#]: subject: "Check file status on Linux with the stat command"
[#]: via: "https://opensource.com/article/21/8/linux-stat-file-status"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Check file status on Linux with the stat command
======
All the information you need about any file or file system is just one
Linux command away.
![Hand putting a Linux file folder into a drawer][1]
The `stat` command, included in the GNU `coreutils` package, provides a variety of metadata about files and filesystems, including file size, inode location, access permissions and SELinux context, and creation and modification times. It's a convenient way to gather information that you usually need several different commands to acquire.
### Installing stat on Linux
On Linux, you probably already have the `stat` command installed because it's part of a core utility package that's generally bundled with Linux distributions by default.
In the event that you don't have `stat` installed, you can install `coreutils` with your package manager.
Alternately, you can [compile coreutils from source code][2].
### Getting the status of a file
Running `stat` provides easy-to-read output about a specific file or directory.
```
$ stat planets.xml
  File: planets.xml
  Size: 325      Blocks: 8     IO Block: 4096   regular file
Device: fd03h/64771d    Inode: 140217      Links: 1
Access: (0664/-rw-rw-r--)  Uid: (1000/tux)   Gid: (100/users)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2021-08-17 18:26:57.281330711 +1200
Modify: 2021-08-17 18:26:58.738332799 +1200
Change: 2021-08-17 18:26:58.738332799 +1200
 Birth: 2021-08-17 18:26:57.281330711 +1200
```
It may be easy to read, but it's still a lot of information. Here's what `stat` is covering:
* **File**: the file name
* **Size**: the file size in bytes
* **Blocks**: the number of blocks on the hard drive reserved for this file
* **IO Block**: the size of a block of the filesystem
* **regular file**: the type of file (regular file, directory, filesystem)
* **Device**: the device where the file is located
* **Inode**: the inode number where the file is located
* **Links**: the number of links to the file
* **Access, UID, GID**: file permissions, user, and group owner
* **Context**: SELinux context
* **Access, Modify, Change, Birth**: the timestamp of when the file was accessed, modified, changed status, and created
### Terse output
For people who know the output well, or want to parse the output with other utilities like [awk][3], there's the `--terse` (`-t` for short) option, which formats the output without headings or line breaks.
```
$ stat --terse planets.xml
planets.xml 325 8 81b4 100977 100 fd03 140217 1 0 0 1629181617 1629181618 1629181618 1629181617 4096 unconfined_u:object_r:user_home_t:s0
```
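As a small sketch of that kind of pipeline (assuming GNU `stat`; the `*.xml` glob is just an example), the second field of the terse output is the size in bytes, which `awk` can pick out directly:

```shell
# report each file's size by pulling field 2 from stat's terse output
for f in *.xml; do
  stat --terse "$f" | awk '{printf "%s: %d bytes\n", $1, $2}'
done
```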
### Choosing your own format
You can define your own format for output using the `--printf` option and a syntax similar to [printf][4]. Each attribute reported by `stat` has a format sequence (`%C` for SELinux context, `%n` for file name, and so on), so you can choose what you want to see in a report.
```
$ stat --printf="%n\n%C\n" planets.xml
planets.xml
unconfined_u:object_r:user_home_t:s0
$ stat --printf="Name: %n\nModified: %y\n" planets.xml
Name: planets.xml
Modified: 2021-08-17 18:26:58.738332799 +1200
```
Here are some common format sequences:
* **%a** access rights
* **%F** file type
* **%n** file name
* **%U** user name
* **%u** user ID
* **%g** group ID
* **%w** time of birth
* **%y** modification time
A full listing of format sequences is available in the `stat` man page and the `coreutils` info pages.
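As a quick sketch of combining these sequences (assuming GNU `stat`), here is a loop that prints one tab-separated line per file with its name, owner, and octal permissions:

```shell
# one line per file: name, owner, and octal permissions,
# using the %n, %U, and %a format sequences
for f in *; do
  stat --printf='%n\t%U\t%a\n' "$f"
done
```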
### File information
If you've ever tried to parse the output of `ls -l`, then you'll appreciate the flexibility of the `stat` command. You don't always need every bit of the default information that `stat` provides, but the command is invaluable when you do need some or all of it. Whether you read its output in its default format, or you create your own queries, the `stat` command gives you easy access to the data about your data.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-stat-file-status
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://www.gnu.org/software/coreutils/
[3]: https://opensource.com/article/20/9/awk-ebook
[4]: https://opensource.com/article/20/8/printf


@ -0,0 +1,114 @@
[#]: subject: "How to Download Audio Only Using youtube-dl"
[#]: via: "https://itsfoss.com/youtube-dl-audio-only/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Download Audio Only Using youtube-dl
======
[youtube-dl][1] is a versatile command line tool for downloading videos from YouTube and many other websites. I use it for making back up of my own YouTube videos.
By default, you [use youtube-dl for downloading videos][2]. How about extracting only the audio with youtube-dl? That's very simple actually. Let me show you the steps.
Attention
Downloading videos from websites could be against their policies. It's up to you if you choose to download videos or audio.
### Download only audio with youtube-dl
Please make sure that you have installed youtube-dl on your Linux distribution first.
```
sudo snap install youtube-dl
```
If you only want to download audio from a YouTube video, you can use the -x option with youtube-dl. This extract-audio option converts the video files to audio-only files.
```
youtube-dl -x video_URL
```
The file is saved in the same directory from where you ran the youtube-dl command.
Here's an example where I downloaded the voice-over of our Zorin OS 16 review video.
```
youtube-dl -x https://www.youtube.com/watch?v=m_PmLG7HqbQ
[youtube] m_PmLG7HqbQ: Downloading webpage
[download] Destination: Zorin OS 16 Review - It's a Visual Masterpiece-m_PmLG7HqbQ.m4a
[download] 100% of 4.26MiB in 00:03
[ffmpeg] Correcting container in "Zorin OS 16 Review - It's a Visual Masterpiece-m_PmLG7HqbQ.m4a"
[ffmpeg] Post-process file Zorin OS 16 Review - It's a Visual Masterpiece-m_PmLG7HqbQ.m4a exists, skipping
```
Did you notice the audio format? It is in .m4a format. You may specify the audio format to something of your choice.
Say you want to extract the audio in MP3 format. You can use it like this:
```
youtube-dl -x --audio-format mp3 video_URL
```
Here's the same example I showed previously. You can see that it [uses ffmpeg to convert][3] the m4a file into mp3.
```
youtube-dl -x --audio-format mp3 https://www.youtube.com/watch?v=m_PmLG7HqbQ
[youtube] m_PmLG7HqbQ: Downloading webpage
[download] Zorin OS 16 Review - It's a Visual Masterpiece-m_PmLG7HqbQ.m4a has already been downloaded
[download] 100% of 4.26MiB
[ffmpeg] Correcting container in "Zorin OS 16 Review - It's a Visual Masterpiece-m_PmLG7HqbQ.m4a"
[ffmpeg] Destination: Zorin OS 16 Review - It's a Visual Masterpiece-m_PmLG7HqbQ.mp3
Deleting original file Zorin OS 16 Review - It's a Visual Masterpiece-m_PmLG7HqbQ.m4a (pass -k to keep)
```
### Download entire YouTube playlist in MP3 format
Yes, you can totally do that. The main thing is to get the URL of the playlist here. It is typically in the following format:
```
https://www.youtube.com/playlist?list=XXXXXXXXXXXXXXXXXXX
```
To get the URL of a playlist, click on its name when the playlist is being displayed in the right sidebar.
![Click on the playlist title][4]
It will take you to the playlist page and you can copy the URL here.
![Grab the playlist URL][5]
Now that you have the playlist URL, you can use it to download the audio files in MP3 format in the following fashion:
```
youtube-dl --extract-audio --audio-format mp3 -o "%(title)s.%(ext)s" playlist_URL
```
That scary looking `-o "%(title)s.%(ext)s"` specifies the output file (with option -o) and instructs it to use the title of the video and the extension (mp3 in this case) for naming the audio files.
![][6]
I hope you find this quick tip helpful. Enjoy the audio files :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/youtube-dl-audio-only/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://github.com/ytdl-org/youtube-dl
[2]: https://itsfoss.com/download-youtube-linux/
[3]: https://itsfoss.com/ffmpeg/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/getting-youtube-playlist-url.png?resize=797%2C366&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/youtube-playlist-url.png?resize=800%2C388&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/downloading-youtube-playlist-audio.png?resize=800%2C559&ssl=1


@ -0,0 +1,166 @@
[#]: subject: "MAKE MORE with Inkscape G-Code Tools"
[#]: via: "https://fedoramagazine.org/make-more-with-inkscape-g-code-tools/"
[#]: author: "Sirko Kemter https://fedoramagazine.org/author/gnokii/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
MAKE MORE with Inkscape G-Code Tools
======
![MAKE MORE with Inkscape - GCode Tools][1]
Inkscape, the most used and loved tool of Fedora's Design Team, is not just a program for making nice vector graphics. A lot more can be done with vector graphics (in our case SVG), and many programs can import the format. Inkscape itself can also do much more than graphics. This series will show you some of the things you can do with Inkscape besides graphics. This first article of the series shows how Inkscape's G-Code Tools extension can be used to produce G-Code. G-Code, in turn, is useful for programming machines such as plotters and laser engravers.
### What is G-Code and what is it used for
The construction of machines for the hobby sector is booming. The publication of the source code for self-build [RepRap][2] 3D printers and the availability of electronic components such as [Arduino][3] or [Raspberry Pi][4] are probably some of the causes of this boom. Mechanical engineering as a hobby is finding more and more adopters. This trend hasn't stopped with 3D printers. There are also [CNC][5] milling machines, plotters, laser engravers, cutters, and even machines that you can build yourself.
You don't have to design or build these machines yourself. You can purchase such machines relatively cheaply as a kit or already assembled. All these machines have one thing in common: they are computer-controlled. [Computer-Aided Manufacturing (CAM)][6], which has long been widespread in the manufacturing industry, is now also taking place at home.
### G-Code or G programming language
The most widespread language for programming CAM machines is G-Code. G-Code is also known as the G programming language. This language was developed at MIT in the 1950s. Since then, various organizations have developed versions of this programming language. Keep this in mind when you work with it. Different countries have different standards for this language. The name comes from the fact that many instructions in this code begin with the letter G. This letter is used to transmit travel or path commands to the machine.
The commands go, in the truest sense of the word, from A (absolute or incremental position around the X-axis; turning around X) to Z (absolute or incrementing in the direction of the Z-axis). Commands prefixed with M (miscellaneous) transmit other instructions to the machine. Switching coolant on/off is an example of an M command. If you want a more complete list of G-Code commands there is a table on [Wikipedia][7].
```
%
G00 X0 Y0 F70
G01 Z-1 F50
G01 X0 Y20 F50
G02 X20 Y0 J-20
G01 X0 Y0
G00 Z0 F70
M30
%
```
This small example would mill a square. You could write this G-Code in any editor of your choice. But when it comes to more complex things, you typically won't do this sort of low-level coding by hand. For 3D printing, the slicer writes the G-Code for you. But what about when you want to use a plotter or a laser engraver?
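For reference, here is the same program again with comments. The parenthesized remarks are explanatory annotations added here, not part of the original listing, and the exact behavior depends on your controller's G-Code dialect:

```
%
G00 X0 Y0 F70   (rapid move to the origin at feed rate 70)
G01 Z-1 F50     (plunge the tool 1 unit into the material)
G01 X0 Y20 F50  (straight cut to X0 Y20)
G02 X20 Y0 J-20 (clockwise arc to X20 Y0, arc center offset J-20)
G01 X0 Y0       (straight cut back to the origin)
G00 Z0 F70      (retract the tool)
M30             (end of program)
%
```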
### Other Software for writing G-Code
So you will need a program to do this job for you. Sure, some CAD programs can write G-Code. But not all open source CAD programs can do this. Here are some other open source solutions for this:
  * [dxf2gcode][8], normally a command-line tool, but it has a Python-implemented GUI
* [dmap2gcode][9], can import raster graphics and convert them
* [Millcrum][10], a browser-based tool
  * [LinuxCNC][11], can import raster graphics and convert them to G-Code
* [TrueTypeTracer][12] or [F-Engrave][13] if you want to engrave fonts
As you can see, there is no problem finding a tool for this job. What I dislike is the use of raster graphics. I use a CNC machine because it works more precisely than I could by hand. Tracing raster graphics to make a path for G-Code is not precise anymore. I find that using vector graphics, which consist of paths anyway, is much more precise.
### Inkscape and G-Code Tools installation
When it comes to vector graphics, there is no way around Inkscape, at least not if you use Linux. There are a few other programs, but they do not have anywhere near the capability that Inkscape has, or they are designed for other purposes. So the question is, “Can Inkscape be used for creating G-Code?” And the answer is, “Yes!” Since version 0.91, Inkscape has been packaged with an extension called [GCode Tools][14]. This extension does exactly what we want: it converts paths to G-Code.
So all you have to do, if you have not already done it, is install Inkscape:
```
$ sudo dnf install inkscape
```
One thing to note from the start (where there is light, there is also shadow): the GCode Tools extension has a lot of functionality that is not well documented. The developer thinks it's a good idea to use a forum for documentation. Also, basic knowledge about G-Code and CAM is necessary to understand the functions.
Another point to be aware of is that development isn't as vibrant as it was at the time GCode Tools was first packaged with Inkscape.
### Getting started with Inkscapes G-Code Tools extension
The first step is the same as for anything else you would make in Inkscape: adjust your document properties. So open the document settings with **Shift + Ctrl + D** or by clicking on the icon on the command bar, and set the document properties to the size of your work piece.
Next, set the orientation points by going to _Extensions &gt; Gcodetools &gt; Orientation points_. You can use the default settings. The default settings will probably give you something similar to what is shown below.
![Inkscape with document setup and the orientation points ][15]
#### The Tool library
The next step is to edit the tool library (_Extensions &gt; Gcodetools &gt; Tools library_). This will open the dialog window for the tool setting. There you choose the tool you will use. The _default_ tool is fine. After you have chosen the tool and hit _Apply_, a rectangle will be on the canvas with the settings for the tool. These settings can be edited with the text tool (**T**). But this is a bit tricky.
![Inkscape with the default tool library settings added into the document][16]
The G-Code Tools extension will use these settings later. These tool settings are grouped together with an identifiable name. If you de-group these settings, this name will be lost.
There are two possibilities to avoid losing the identifier when you ungroup the tool settings. You can enter the group by clicking four times with the selection tool active. Or you can ungroup it using **Shift + Ctrl + G** and then give the group a name later using the XML editor.
In the first case you should **make sure the group is restored before you draw anything new**. Otherwise the newly drawn object will be added to this group.
Now you can draw the paths you want to later convert to G-Code. Objects like rectangles, circles, stars, and polygons, as well as text, must be converted to paths (_Path &gt; Object to Path_ or **Shift + Ctrl + C**).
Keep in mind that this function often does not produce clean paths. You will have to check the result and clean it up afterwards. You can find an older article [here][17] that describes the process.
#### Hershey Fonts or Stroke Fonts
Regarding fonts, keep in mind that TTF and OTF are so-called outline fonts. This means the contour of each character is defined, and it will be engraved or cut as such. If you do not want this and want to use, for example, a script font, then you have to use stroke fonts instead. Inkscape itself ships with a small collection of them by default (see _Extensions &gt; Text &gt; [Hershey text][18]_).
![The stroke fonts of the Hershey Text extension][19]
Another article about how to make your own stroke fonts will follow. They are not only useful for engraving, but also for embroidery.
#### The Area Fill Functions
In some cases it might be necessary to fill paths with a pattern. The G-Code Tools extension has a function which offers two ways to fill objects with patterns: _zig zag_ and _spiral_. There is another function which currently is not working (Inkscape changed some parts for extensions with the release of version 1.0). The latter function would fill the object with the help of Inkscape's offset functions. These functions are under _Extensions &gt; Gcodetools &gt; Area_.
![The Fill Area function of the G-Code Tools extension. Left the pattern fill and right \(currently not working\) the offset filling. The extension will execute the active tab!][20]
![The area fillings of the G-Code Tool, on top Zig zag and on the bottom Spiral. Note the results will look different, if you apply this function letter-by-letter instead of on the whole path.][21]
For other and more varied area fills, you will often have to draw the paths by hand (about 90% of the time). The [EggBot extension][22] has a function for filling regions with hatches. You can also use the [classical hatch patterns][23]. But you will have to convert the fill pattern back to an object; otherwise, the G-Code Tools extension cannot convert it. Besides these, [Evilmadscientist has a good wiki page describing fill methods][24].
#### Converting paths to G-Code
To convert drawn paths to G-Code, use the function _Extensions &gt; Gcodetools &gt; Paths to G-Code_. This function is run on the selected objects. If no object is selected, then all paths in the document will be converted.
There is currently no functionality to save G-Code using the file menu. This must be done from within the G-Code Tools extension dialog box when you convert the paths to G-Code. **On the Preferences tab, you have to specify the path and the name for the output file.**
On the canvas, different colored lines and arrows will be rendered. Blue and green lines show curves (G02 and G03). Red lines show straight lines (G01). When you see this styling, then you know that you are working with G-Code.
![Fedoras logo converted to G-Code with the Inkscape G-Code Tools][25]
### Conclusion
Opinions differ as to whether Inkscape is the right tool for creating G-Code. If you keep in mind that Inkscape works only in two dimensions and don't expect too much, you can create G-Code with it. For simple jobs like plotting some lettering or logos, it is definitely enough. The main disadvantage of the G-Code Tools extension is that its documentation is lacking. This makes it difficult to get started with G-Code Tools. Another disadvantage is that there is not currently much active development of G-Code Tools. There are other extensions for Inkscape that also target G-Code. But they are already history, or are no longer actively developed either. The [Makerbot Unicorn GCode Output][26] extension and the [GCode Plot][27] extension are a few examples of the latter case. The need for an easy way to export G-Code directly definitely exists.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/make-more-with-inkscape-g-code-tools/
作者:[Sirko Kemter][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/gnokii/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/drawing-1-816x345.png
[2]: https://reprap.org/wiki/RepRap
[3]: https://www.arduino.cc/
[4]: https://www.raspberrypi.org/
[5]: https://en.wikipedia.org/wiki/CNC
[6]: https://en.wikipedia.org/wiki/Computer-aided_manufacturing
[7]: https://en.wikipedia.org/wiki/G-code
[8]: https://sourceforge.net/projects/dxf2gcode/
[9]: https://www.scorchworks.com/Dmap2gcode/dmap2gcode.html
[10]: http://millcrum.com/
[11]: http://linuxcnc.org/
[12]: https://github.com/aewallin/truetype-tracer
[13]: https://www.scorchworks.com/Fengrave/fengrave.html
[14]: https://github.com/cnc-club/gcodetools
[15]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-02-14-1024x556.png
[16]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-10-24-1024x556.png
[17]: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/
[18]: https://www.evilmadscientist.com/2011/hershey-text-an-inkscape-extension-for-engraving-fonts/
[19]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-16-50.png
[20]: https://fedoramagazine.org/wp-content/uploads/2021/07/fillarea-1024x391.png
[21]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-20-36-51.png
[22]: https://wiki.evilmadscientist.com/Installing_software#Linux
[23]: https://inkscape.org/de/~henkjan_nl/%E2%98%85classical-hatch-patterns-for-mechanical-drawings
[24]: https://wiki.evilmadscientist.com/Creating_filled_regions
[25]: https://fedoramagazine.org/wp-content/uploads/2021/07/Bildschirmfoto-vom-2021-07-12-19-38-34-1024x556.png
[26]: http://makerbot.wikidot.com/unicorn-output-for-inkscape
[27]: https://inkscape.org/de/~arpruss/%E2%98%85gcodeplot


@ -0,0 +1,185 @@
[#]: subject: "10 Things to Do After Installing elementary OS 6 “Odin”"
[#]: via: "https://www.debugpoint.com/2021/08/10-things-to-do-after-install-elementary-os-6/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
10 Things to Do After Installing elementary OS 6 “Odin”
======
A curated list of things to do after installing the latest elementary OS 6, code-named “Odin”.
[elementary OS 6 “Odin”][1] was released a while back after more than two years in development. It brings a huge set of new features across its core modules, the Pantheon desktop, and the native applications. This release is based on Ubuntu 20.04 LTS.
That said, if you have already completed the installation, there are certain customizations that you might want to try out to personalize your system. The options described here are generic and may not be useful in every case, but we feel it's worth listing some basics and giving you a path to explore more of this beautiful elementary OS.
### Things to Do After Installing elementary OS 6 “Odin”
Make sure you connect to the internet first. You can get the list of networks available in the notification area at the top.
#### 1\. Change hostname
This might not be the first thing you would like to do. However, I am not sure why an option to change the hostname is not offered during installation itself. For example, in the terminal prompt below, the hostname is the default hardware configuration set by elementary OS, which does not look good at all, in my opinion.
![hostname change before][2]
To change the hostname, open a terminal and run the below command.
```
hostnamectl set-hostname your-new-hostname
```
example:
![changing hostname][3]
![changed hostname][4]
#### 2\. Update your system
The very first thing you should do after installing any Linux distribution is to make sure the system is up-to-date with packages and security updates.
To do that here, you can open App Center and check/install for updates.
Or, open the Terminal and run the below commands.
```
sudo apt update
sudo apt upgrade
```
#### 3\. Install Pantheon Tweaks
Pantheon Tweaks is a must-have application in elementary OS. It provides additional settings and configuration options that are not available via the standard System Settings app. To install Pantheon Tweaks, open a terminal and run the below commands. Note: the earlier tweak tool was elementary Tweaks, which has been renamed Pantheon Tweaks from Odin onwards.
```
sudo apt install software-properties-common
sudo add-apt-repository -y ppa:philip.scott/pantheon-tweaks
sudo apt install -y pantheon-tweaks
```
After installation, open System Settings and you will find the Tweaks option there.
A detailed installation guide is [available here][5] (if you need more information).
#### 4\. Configure Dock
The dock is the center of the desktop. And honestly, the default apps that are included in the dock are not that popular. So, you can always configure the dock items using the steps below.
* To remove: Right click and uncheck the **Keep in Dock** option.
* To add new items: Click on Application at the top. Then right-click on the application icon which you want in dock. Select **Add to Dock**.
In my opinion, you should add at least the file manager, the screenshot tool, Firefox, and Calculator, among other things. And remove the ones you don't need.
#### 5\. Change the look and feel
elementary OS 6 Odin revamps the overall look of the desktop with pre-loaded accent colors and a native dark mode for the entire desktop and applications. It also pre-loads nice wallpapers. You can customize all of these via Applications &gt; System Settings &gt; Desktop. There you will have options for Wallpaper, Appearance, Panels, and Multitasking.
![elementary OS 6 Odin settings window Desktop][6]
Configure the look as you wish.
SEE ALSO: [elementary OS 6 Odin: New Features and Release Date][7]
Oh, you can also schedule the Dark and Light mode based on Sunset and Sunrise!
#### 6\. Install Additional Applications
The native AppCenter is great for this OS. I find it one of the best curated app stores available on the Linux desktop. However, it's also useful to install necessary applications (mostly well-known ones) that are not pre-loaded. Here's a quick list of applications which you can install on a fresh system. _(Seriously, why is LibreOffice not preloaded?)_
* firefox
* gimp
* gedit
* inkscape
* obs-studio
* libreoffice
#### 7\. Some Battery Saver Tips (Laptop)
There are many ways in which you can configure your elementary OS (or Linux desktop in general) to save battery life. Remember that battery life depends on your laptop's hardware and how old the battery/laptop is, among other things. So, follow some of the tips below to get the maximum out of your laptop's battery.
  * Install [TLP][8]. TLP is a simple-to-use, terminal-based utility that helps you save battery life on Linux. You just need to install it, and it will take care of the settings by default. Installation commands:
```
sudo add-apt-repository ppa:linrunner/tlp
sudo apt update
sudo apt-get install tlp
sudo tlp start
```
* Turn off Bluetooth, which is turned on by default. Enable it when required.
  * Install thermald via the below command. This utility (actually a daemon) controls the P-states and T-states of your CPU to manage temperature and keep heat in check.
```
sudo apt install thermald
```
  * Keep the screen brightness to the minimum you need.
#### 8\. Install a Disk Utility
Every so often, you may find that you need to format a USB drive or write something to one. By default, there is no such application installed. The best easy-to-use applications are the ones below; you can install them.
```
sudo apt install gnome-disk-utility gparted
```
#### 9\. Enable Minimize and Maximize Option
Many users prefer to have minimize and maximize window buttons at the left or right of the window title bar. elementary OS only gives you close and restore options by default, which is completely fine because of the way it's designed. However, you can use Pantheon Tweaks to enable them via Tweaks &gt; Appearance &gt; Window Controls.
![enable minimize maximize buttons elementary OS][9]
#### 10\. Learn the new multi-touch gestures in Odin
If you are a laptop user running elementary OS Odin, then you should definitely check out the super cool new gestures. A three-finger swipe up smoothly opens the Multitasking View, exposing open apps and workspaces. A three-finger swipe left or right smoothly switches between the dynamic workspaces, making it even faster to jump between tasks.
And with two fingers, you can achieve similar gestures inside native applications as well.
### Closing Notes
I hope this list of 10 things to do after installing elementary OS 6 helps you get started with elementary OS 6 Odin. These are completely a matter of user preference, so they may or may not apply to you. But in general, these are the tweaks that an average user prefers.
Let me know in the comments below if there are some more tweaks you feel that should be added in the list.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/10-things-to-do-after-install-elementary-os-6/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/2021/08/elementary-os-6/
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/hostname-change-before.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/changing-hostname.jpeg
[4]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/changed-hostname.jpeg
[5]: https://www.debugpoint.com/2021/07/elementary-tweaks-install/
[6]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/elementary-OS-6-Odin-settings-window-Desktop.jpeg
[7]: https://www.debugpoint.com/2020/09/elementary-os-6-odin-new-features-release-date/
[8]: https://linrunner.de/tlp/
[9]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/enable-minimize-maximize-buttons-elementary-OS-1024x501.png


@ -0,0 +1,109 @@
[#]: subject: "How to set up your printer on Linux"
[#]: via: "https://opensource.com/article/21/8/add-printer-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "fisherue "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to set up your printer on Linux
======
In the event that your printer isn't auto-detected, this article teaches
you how to add a printer on Linux manually.
![printing on Linux][1]
Even though it's the future now and we're all supposed to be using e-ink and AR, there are still times when a printer is useful. Printer manufacturers have yet to standardize how their peripherals communicate with computers, so there's a necessary maze of printer drivers out there, regardless of what platform you're on. The IEEE-ISTO Printer Working Group (PWG) and the OpenPrinting.org site are working tirelessly to make printing as easy as possible, though. Today, many printers are autodetected with no interaction from the user.
In the event that your printer isn't auto-detected, this article teaches you how to add a printer on Linux manually. This article assumes you're on the GNOME desktop, but the basic workflow is the same for KDE and most other desktops.
### Printer drivers
Before attempting to interface with a printer from Linux, you should first verify that you have updated printer drivers.
There are three varieties of printer drivers:
* Open source [Gutenprint drivers][2] bundled with Linux and as an installable package
* Drivers provided by the printer manufacturer
* Drivers created by a third party
It's worth installing the open source drivers because there are over 700 of them, so having them available increases the chance of attaching a printer and having it automatically configured for you.
### Installing open source drivers
Your Linux distribution probably already has these installed, but if not, you can install them with your package manager. For example, on Fedora, CentOS, Mageia, and similar:
```
$ sudo dnf install gutenprint
```
For HP printers, also install Hewlett-Packard's Linux Imaging and Printing (HPLIP) project. For example, on Debian, Linux Mint, and similar:
```
$ sudo apt install hplip
```
### Installing vendor drivers
Sometimes a printer manufacturer uses non-standard protocols, so the open source drivers don't work. Other times, the open source drivers work but may lack special vendor-only features. When that happens, you must visit the manufacturer's website and search for a Linux driver for your printer model. The install process varies, so read the install instructions carefully.
In the event that your printer isn't supported at all by the vendor, there are [third-party driver authors][3] that may support your printer. These drivers aren't open source, but neither are most vendor drivers. It's frustrating to have to spend an extra $45 to get support for a printer, but the alternative is to throw the printer into the rubbish, and now you know at least one brand to avoid when you purchase your next printer!
### Common Unix Printing System (CUPS)
The Common Unix Printing System (CUPS) was developed in 1997 by Easy Software Products, and purchased by Apple in 2007. It's the open source basis for printing on Linux, but most modern distributions provide a customized interface for it. Thanks to CUPS, your computer can find printers attached to it by a USB cable and even a shared printer over a network.
Once you've gotten the necessary drivers installed, you can add your printer manually. First, attach your printer to your computer and power them both on. Then open the **Printers** application from the **Activities** screen or application menu.
![printer settings][4]
CC BY-SA Opensource.com
There's a possibility that your printer is autodetected by Linux, by way of the drivers you've installed, and that no further configuration is required.
![printer settings][5]
CC BY-SA Opensource.com
Provided that you see your printer listed, you're all set, and you can already print from Linux!
If you see that you need to add a printer, click the **Unlock** button in the top right corner of the **Printers** window. Enter your administrative password and the button transforms into an **Add** button.
Click the **Add** button.
Your computer searches for attached printers (also called a _local_ printer). To have your computer look for a shared network printer, enter the IP address of the printer or its host.
![searching for a printer][6]
CC BY-SA Opensource.com
Select the printer you want to add to your system and click the **Add** button.
### Print from Linux
Printing from Linux is as easy as printing can be, whether you're using a local or networked printer. If you're looking for a printer to purchase, then check the [OpenPrinting.org database][7] to confirm that a printer has an open source driver before you spend your money. If you already have a printer, you now know how to use it on your Linux computer.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/add-printer-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/fisherue)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/happy-printer.png?itok=9J44YaDs (printing on Linux)
[2]: http://gimp-print.sourceforge.net/
[3]: https://www.turboprint.info/
[4]: https://opensource.com/sites/default/files/system-settings-printer_0.png (printer settings)
[5]: https://opensource.com/sites/default/files/settings-printer.png (printer settings)
[6]: https://opensource.com/sites/default/files/printer-search.png (searching for a printer)
[7]: http://www.openprinting.org/printers/


@ -0,0 +1,118 @@
[#]: subject: "SparkyLinux 6.0 “Po-Tolo” Released Based on Debian 11 Bullseye"
[#]: via: "https://www.debugpoint.com/2021/08/sparky-linux-6-review/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
SparkyLinux 6.0 “Po-Tolo” Released Based on Debian 11 Bullseye
======
We review the SparkyLinux 6 “Po-Tolo” and round up the release.
[SparkyLinux][1] is a desktop-focused Linux distribution based on Debian that provides almost all major desktop flavors. It is unique in the sense that it offers both Debian Stable and Debian Testing editions with the latest desktop packages. SparkyLinux also provides a collection of curated applications, with some special editions as well. For example, if you are a game lover, there is the SparkyLinux GameOver edition. For system admins, there is a Rescue Edition to fix broken systems. All these special editions come with pre-loaded games and utilities, with some proprietary packages as well.
The latest release, SparkyLinux 6, brings the packages from [Debian 11 Bullseye][2], which was released a while back. Let's take a look at what's new.
![SparkyLinux 6 desktop \(Xfce\)][3]
### SparkyLinux 6: What's New
* SparkyLinux 6 is based on Debian 11 Bullseye.
* Powered by Linux Kernel 5.10.x LTS
* This distribution maintains its own repo, and it is now updated with Bullseye packages.
  * The default and necessary applications are updated to their respective Debian Stable versions. Here's a quick rundown:
    * Firefox 78.13.0 ESR (instead of the latest Firefox)
    * Thunderbird 78.13.0
    * VLC 3.0.16
    * LibreOffice 7.0.4
    * Calamares 3.2.41.1
  * The default app center, APTus, is included in this release. It provides 2000+ curated applications that can be installed with one click via a simple GUI. This is one of the best features of SparkyLinux, especially for new users or large deployments.
  * The APTus app center also provides one-click features for the following:
    * System upgrade
    * Search packages
    * Fix broken packages
    * Edit repos
    * Clean up cache
    * …and more
  * Desktop environments retain their current stable versions with Sparky flavors:
    * Xfce 4.16
    * KDE Plasma 5.22
    * LXQt 0.17
  * Other changes include the MinimalGUI edition changing its file manager to PCManFM and its browser to Firefox ESR.
Detailed change information is available [here][4].
### Download, Upgrade and Install
If you are using an earlier version of SparkyLinux, simply performing a system upgrade takes you to SparkyLinux 6.0. No additional steps are required.
SEE ALSO: [SparkyLinux 2021.03 Gets First-Ever KDE Plasma Edition with Debian 11][5]
For a fresh installation with your preferred desktop environment, refer to the download link below. You can use [Etcher][6] or a similar utility to create a live USB for a fresh installation. Do not forget to turn off Secure Boot if you are installing on a UEFI system.
[download sparkylinux stable][7]
### Sparky Linux 6 Quick Review
  * I ran SparkyLinux in a virtual machine and as a native install, both with the Xfce desktop edition, for a quick test. The installation went smoothly, thanks to the awesome Calamares installer. No surprises there.
  * After the initial boot, a welcome screen guides you through the important items, if you want to read them. SparkyLinux takes care of system configuration via GUI-based utilities. For example, you do not need to open a terminal and run “sudo apt upgrade” to update your system. It'll prompt you when an upgrade is available; you provide the admin password, and it takes care of the rest.
  * SparkyLinux is super stable, thanks to Debian, and very lightweight. When idle, the Xfce desktop with SparkyLinux was consuming around 600 MB of memory, and most of the CPU usage came from the desktop's window manager, at around 5%.
  * If you are using KDE Plasma or LXQt, the memory and CPU usage will vary, but it should not fluctuate much.
  * The APTus app center plus the system administration utilities are among the best features, and they make SparkyLinux stand apart from other distributions.
![APTus APPCENTER in SparkyLinux][8]
  * And the good thing is, it gives you the flavor of both Debian Rolling and Debian Stable. If you want to use Debian Rolling packages in SparkyLinux, you can get them out of the box.
That said, it's a simple, stable, and user-friendly distribution. Give it a try if you have not yet; it is a perfectly suitable daily-use distro.
Cheers.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/sparky-linux-6-review/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://sparkylinux.org
[2]: https://www.debugpoint.com/2021/05/debian-11-features/
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/SparkyLinux-6-desktop-Xfce-1024x764.jpeg
[4]: https://sparkylinux.org/sparky-6-0-po-tolo/
[5]: https://www.debugpoint.com/2021/03/sparkylinux-2021-03-release/
[6]: https://www.debugpoint.com/2021/01/etcher-bootable-usb-linux/
[7]: https://sparkylinux.org/download/stable/
[8]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/APTus-APPCENTER-in-SparkyLinux-1024x781.jpeg


@ -0,0 +1,137 @@
[#]: subject: "How to Monitor Log Files in Real Time in Linux [Desktop and Server]"
[#]: via: "https://www.debugpoint.com/2021/08/monitor-log-files-real-time/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Monitor Log Files in Real Time in Linux [Desktop and Server]
======
This tutorial explains how you can monitor Linux log files (desktop, server, or applications) in real time for diagnosis and troubleshooting purposes.
When you run into problems in your Linux desktop, server, or any application, you first look into the respective log files. Log files are generally a stream of text messages from applications, with a timestamp attached to each entry. They help you narrow down specific instances and find the cause of any problem, and they can also help you get assistance from the web.
In general, all log files are located in /var/log. This directory contains log files with the .log extension for specific applications and services, and it also contains separate directories which hold their own log files.
![log files in var-log][1]
So, that said, if you want to monitor a bunch of log files, or a specific one, here are some ways you can do it.
### Monitor Log Files in Real Time on Linux
#### Using tail command
Using the tail command is the most basic way of following a log file in real time. Especially if you are on a server with just a terminal and no GUI, this is very helpful.
Examples:
```
tail /path/to/log/file
```
![Monitoring multiple log files via tail][2]
Use the -f switch to follow the log file, which then updates in real time. For example, if you want to follow syslog, you can use the following command.
```
tail -f /var/log/syslog
```
You can monitor multiple log files with a single command:
```
tail -f /var/log/syslog /var/log/dmesg
```
If you want to monitor an http or sftp server, or any other server, you can also include their respective log files in this command.
Remember, the above commands require admin privileges.
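In practice, you often want to follow a log while filtering for a pattern. Here is a small sketch of that idea; it uses a throwaway file created with `mktemp` instead of a real system log, so the file contents are just placeholders:

```shell
# Create a throwaway log file for the demo (use /var/log/syslog etc. in practice)
LOGFILE=$(mktemp)
printf '%s\n' \
  'Aug 23 10:00:01 host app[100]: service started' \
  'Aug 23 10:00:02 host app[100]: ERROR: disk almost full' \
  'Aug 23 10:00:03 host app[100]: heartbeat ok' > "$LOGFILE"

# Keep only the lines that mention an error (-i = case-insensitive).
# For live monitoring, replace "tail -n 20" with "tail -f".
tail -n 20 "$LOGFILE" | grep -i error
```

Only the `ERROR` line is printed; with `tail -f`, new matching lines would appear as they are written.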
#### Using lnav (The Logfile Navigator)
![lnav Running][3]
lnav is a nice utility which you can use to monitor log files in a more structured way, with color-coded messages. It is not installed by default on Linux systems. You can install it using the commands below:
```
sudo apt install lnav (Ubuntu)
sudo dnf install lnav (Fedora)
```
The good thing about lnav is that, if you do not want to install it, you can just download its pre-compiled executable and run it anywhere, even from a USB stick. No setup is required, and it is loaded with features. Using lnav you can query the log files via SQL, among other cool features which you can learn about on its [official website][4].
[SEE ALSO: This App is An Advanced Log File Viewer - lnav][5]
Once installed, you can simply run lnav from the terminal with admin privileges, and it will show all the logs from /var/log by default and start monitoring them in real time.
#### A note about journalctl of systemd
Almost all modern Linux distributions use systemd. systemd provides the basic framework and components that run the Linux operating system in general. It provides journal services via journalctl, which helps manage logs from all systemd services. You can also monitor the respective systemd services and logs in real time using the following command.
```
journalctl -f
```
Here are some specific journalctl commands which you can use in several cases. You can combine these with the -f switch above to start monitoring in real time.
  * To view emergency system messages, use
```
journalctl -p 0
```
* Show errors with explanations
```
journalctl -xb -p 3
```
  * Use time controls to filter the output
```
journalctl --since "2020-12-04 06:00:00"
journalctl --since "2020-12-03" --until "2020-12-05 03:00:00"
journalctl --since yesterday
journalctl --since 09:00 --until "1 hour ago"
```
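As a small sketch of combining these filters, the block below shows today's messages for a single unit. The unit name `ssh.service` is an assumption (on some distributions it is `sshd.service`), and the block falls back to a message on systems without journalctl:

```shell
# Show today's messages for one unit; add -f to keep following in real time.
# NOTE: the unit name ssh.service is an assumption; yours may be sshd.service.
if command -v journalctl >/dev/null 2>&1; then
  journalctl -u ssh.service --since today --no-pager 2>/dev/null | tail -n 5
else
  echo "journalctl not available on this system"
fi
```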
If you want to find out more details about journalctl, I have written a [guide here][6].
### Closing Notes
I hope these commands and tricks help you find the root cause of your problems/errors on your desktops or servers. For more details, you can always refer to the man pages and play around with the various switches. Let me know using the comment box below if you have any comments or what you think about this article.
Cheers.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/monitor-log-files-real-time/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/log-files-in-var-log-1024x312.jpeg
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Monitoring-multiple-log-files-via-tail-1024x444.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/lnav-Running-1024x447.jpeg
[4]: https://lnav.org/features
[5]: https://www.debugpoint.com/2016/11/advanced-log-file-viewer-lnav-ubuntu-linux/
[6]: https://www.debugpoint.com/2020/12/systemd-journalctl/


@ -0,0 +1,134 @@
[#]: subject: "Linux Phones: Here are Your Options"
[#]: via: "https://itsfoss.com/linux-phones/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux Phones: Here are Your Options
======
_**Brief:**_ _Linux phones could be the future to replace Android or iOS, but what are some of your options to give it a try?_
While Android is based on a Linux kernel, it has been heavily modified. So, that does not make it a full-fledged Linux-based operating system.
Google is trying to get the Android kernel close to the mainline Linux kernel, but that is still a distant dream.
So, in that case, what are some of the options if you are looking for a Linux phone? A smartphone powered by a Linux operating system.
It is not an easy decision to make because the options are super limited. Hence, I will try to highlight some of the best Linux phones, along with a few options different from the mainstream choices.
### Top Linux phones you can use today
It is worth noting that the Linux phones mentioned here may not be able to replace your Android or iOS devices. So, make sure that you do some background research before making a purchase decision.
**Note:** You need to carefully check the availability, expected shipping date, and risks of using a Linux phone. Most of the options are only suitable for enthusiasts or early adopters.
#### 1\. PinePhone
![][1]
PinePhone is one of the most affordable and popular choices to consider as a promising Linux phone.
It is not limited to a single operating system. You can try it with Manjaro with Plasma Mobile, UBports, Sailfish OS, and others. PinePhone packs in some decent specifications that include a quad-core processor and 2/3 Gigs of RAM. It supports a bootable microSD card to help you with installation, along with 16/32 GB eMMC storage options.
The display is a basic 1440×720p IPS screen. You also get special privacy protection tweaks like kill switches for Bluetooth, microphones, and cameras.
PinePhone also gives you an option to add custom hardware extensions using the six pogo pins available.
The base edition (2 GB RAM and 16 GB storage) comes loaded with Manjaro by default and costs $149. And, the convergence edition (3 GB RAM / 32 GB storage) costs $199.
[PinePhone][2]
#### 2\. Fairphone
![][3]
Compared to others on the list, Fairphone is a commercial success. It is not a Linux smartphone, but it features a customized version of Android, i.e., Fairphone OS, and the option to opt for [/e/ OS][4], one of the [open-source Android alternatives][5]. Some community ports are available if you want to use a Linux operating system, but it could be hit or miss.
The Fairphone offers some decent specs, considering there are two different variants. You will find a 48 MP camera sensor on the Fairphone 3+ and a full-HD display. Not to forget, you will also find decent Qualcomm processors powering the device.
They focus on making smartphones that are sustainable and built using a certain amount of recycled plastic. Fairphone is also meant to be easily repairable.
So, it is not just an option away from mainstream smartphones; you will also be helping to protect the environment if you opt for it.
[Fairphone][6]
#### 3\. Librem 5
![][7]
[Librem 5][8] is a smartphone that focuses heavily on user privacy while featuring an open-source operating system, i.e., PureOS, not based on Android.
The specifications offered are decent, with 3 Gigs of RAM and a quad-core Cortex-A53 chipset. But this is not something geared to compete with mainstream options. Hence, you may not find it to be a value-for-money offering.
It is aimed at enthusiasts who are interested in testing privacy-respecting smartphones.
Similar to others, Librem 5 also focuses on making the phone easily repairable by offering user-replaceable batteries.
For privacy, you will notice kill switches for Bluetooth, Cameras, and microphones. They also promise security updates for years to come.
[Librem 5][9]
#### 4\. Pro 1X
![][10]
An interesting smartphone that supports Ubuntu Touch, Lineage OS, and Android as well.
It is not just a Linux smartphone but a mobile phone with a separate QWERTY keypad, which is rare to find these days.
The Pro 1 X features a decent specification, including a Snapdragon 662 processor coupled with 6 GB of RAM. You also get a respectable AMOLED Full HD display with the Pro 1 X.
The camera does not pack in anything crazy, but should be good enough for the most part.
[Pro 1X][11]
#### 5\. Volla Phone
![][12]
An attractive offering that runs on Ubuntu Touch by UBports.
It comes with a pre-built VPN and focuses on making the user experience easy. The operating system has been customized so that everything essential should be accessible quickly without organizing anything yourself.
It packs in some impressive specifications that include an Octa-core MediaTek processor along with a 4700 mAh battery. You get a notch design resembling some of the latest smartphones available.
[Volla Phone][13]
### Wrapping Up
Linux smartphones are not readily available and certainly not yet suitable for the masses.
So, if you are an enthusiast or want to support the development of such phones, you can consider getting one of the devices.
Do you already own one of these smartphones? Please don't hesitate to share your experiences in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-phones/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/PinePhone-3.jpg?resize=800%2C800&ssl=1
[2]: https://www.pine64.org/pinephone/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/fairphone.png?resize=360%2C600&ssl=1
[4]: https://itsfoss.com/e-os-review/
[5]: https://itsfoss.com/open-source-alternatives-android/
[6]: https://shop.fairphone.com/en/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/librem-5.png?resize=800%2C450&ssl=1
[8]: https://itsfoss.com/librem-linux-phone/
[9]: https://puri.sm/products/librem-5/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/pro1x.jpg?resize=800%2C542&ssl=1
[11]: https://www.fxtec.com/pro1x
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/volla-smartphone.jpg?resize=695%2C391&ssl=1
[13]: https://www.indiegogo.com/projects/volla-phone-free-your-mind-protect-your-privacy#/


@ -0,0 +1,177 @@
[#]: collector: "lujun9972"
[#]: translator: "fisherue"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "5 ways to improve your Bash scripts"
[#]: via: "https://opensource.com/article/20/1/improve-bash-scripts"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
提升你的 Bash 脚本程序的 5 种方法
======
巧用 Bash 脚本程序能帮助你完成很多极具挑战的任务。
![工作者图片][1]
系统管理员经常写脚本程序,不论长短,这些脚本可以完成某种任务。
你是否曾经查看过某个软件发行方提供的安装用脚本(script)程序?为了能够适应不同用户的系统配置,顺利完成安装,这些脚本程序经常包含很多函数和逻辑分支。多年来,我积累了一些提升脚本程序的技巧,这里分享几个,希望能对朋友们也有用。下面列出一组简短的脚本示例,供大家参考。
### 初步尝试
我尝试写一个脚本程序时,原始程序往往就是一组命令行,通常就是调用标准命令完成诸如更新网页内容之类的工作,这样可以节省时间。其中一个类似的工作是解压文件到阿帕奇 (Apache) 网站服务器的主目录里,我的最初脚本程序大概是下面这样:
```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```
这帮我节省了时间,也减少了键入多条命令操作。时日久了,我掌握了另外的技巧,可以用 Bash 脚本程序完成更难的一些工作,比如说创建软件安装包、安装软件、备份文件系统等工作。
### 1\. 条件分支结构
和众多其他编程语言一样,脚本程序的条件分支结构同样是强大的常用技能。条件分支结构赋予了计算机程序逻辑能力,我的很多实例都是基于条件逻辑分支。
基本的条件分支结构就是 IF 条件分支结构。通过判定是否满足特定条件,可以控制程序选择执行相应的脚本命令段。比如说,想要判断系统是否安装了 Java ,可以通过判断系统有没有一个 Java 库目录;如果找到这个目录,就把这个目录路径添加到可运行程序路径,也就可以调用 Java 库应用了。
```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
```
### 2\. 限定运行权限
你或许想只允许特定的用户才能执行某个脚本程序。除了 Linux 的权限许可管理,比如对用户和用户组设定权限、通过 SELinux 设定此类的保护权限等,你还可以在脚本里设置逻辑判断来设置执行权限。类似的情况可能是,你需要确保只有网站程序的所有者才能执行相应的网站初始化操作脚本。甚至你可以限定只有根用户才能执行某个脚本。这个可以通过在脚本程序里设置逻辑判断实现, Linux 提供的几个环境变量可以帮忙。其中一个是保存用户名称的变量 **$USER**, 另一个是保存用户识别码的变量 **$UID** 。在脚本程序里,执行用户的 UID 值就保存在 **$UID** 变量里。
#### 用户名判别
第一个例子里,我在一个多用户环境里指定只有用户 jboss1 可以执行脚本程序。条件 if 语句提出疑问,“请求执行这个脚本程序的用户是不是 jboss1?”如果不是,就会输出提示“用户不是 jboss1”,然后直接退出这个脚本程序,返回码为 1(**exit 1**)。
```
if [ "$USER" != 'jboss1' ]; then
     echo "Sorry, this script must be run as JBOSS1!"
     exit 1
fi
echo "continue script"
```
#### 根用户判别
接下来的例子是要求只有根用户才能执行脚本程序。根用户的用户识别码 (UID) 是0,设置的条件判断采用大于操作符 (**-gt**) ,所有 UID 值大于0的用户都被禁止执行该脚本程序。
```
if [ "$UID" -gt 0 ]; then
     echo "Sorry, this script must be run as ROOT!"
     exit 1
fi
echo "continue script"
```
### 3\. 带参数执行程序
可执行程序可以附带参数作为执行选项,命令行脚本程序也是一样,下面给出几个例子。在这之前,我想告诉你,好的程序不仅要在我们希望它执行时正确执行,还要在我们不希望它执行时能够拒绝执行。如果运行程序时没有提供参数、造成程序缺少足够信息,我希望脚本程序不要做任何破坏性的操作。因而,程序的第一步就是确认命令行是否提供了参数,判定的条件就是参数个数 **$#** 的值是否为 0,如果是(意味着没有提供参数),就直接终止脚本程序并退出操作。
```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```
#### 多个运行参数
可以传递给脚本程序的参数不止一个。脚本使用内部变量指代这些参数,内部变量名用非负整数递增标识,也就是 **$1**、**$2**、**$3** 等等。我只是扩展前面的程序,输出显示用户提供的前三个参数。显然,要对每个参数都做出响应需要更多的逻辑判断,这里的例子只是简单展示参数的使用。
```
echo $1 $2 $3
```
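如果参数个数不固定,逐个引用 **$1**、**$2** 很快就不够用了。下面是一个小示例(演示函数 print_args 是为说明而虚构的),用带引号的 `"$@"` 遍历全部参数,含空格的参数也能被正确处理:

```shell
# 演示:用 "$@" 遍历传入的所有参数
print_args() {
  i=1
  for arg in "$@"; do
    echo "参数 $i: $arg"
    i=$((i + 1))
  done
}

print_args alpha "bravo charlie" delta
```

这里会输出三行,其中 "bravo charlie" 作为一个完整的参数被处理。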
我们在讨论这些参数变量名时,你或许有个疑问,“参数变量名怎么跳过了 **$0**(而直接从 **$1** 开始)?”
是的,是这样,这是有原因的。变量名 **$0** 确实存在,也非常有用,它储存的是被执行的脚本程序的名称。
```
echo $0
```
程序执行过程中有一个变量名指代程序名称,这很有用,一个重要的原因是可以在生成的日志文件名称里包含程序名称,最简单的方式是调用一个 **echo** 语句。
```
echo test >> $0.log
```
当然,你或许要增加一些代码,确保这个日志文件存放在你希望的路径,日志名称包含你认为有用的信息。
### 4\. 交互输入
脚本程序的另一个好用的特性是可以在执行过程中接受输入,最简单的情况是让用户可以输入一些信息。
```
echo "enter a word please:"
 read word
 echo $word
```
这样也可以让用户在程序执行中作出选择。
```
read -p "Install Software ?? [Y/n]: " answ
 if [ "$answ" == 'n' ]; then
   exit 1
 fi
   echo "Installation starting..."
```
### 5\. 出错退出执行
几年前,我写了一个脚本,用于在自己的电脑上安装最新版本的 Java 开发工具包(JDK)。这个脚本把 JDK 文件解压到指定目录,创建和更新一些符号链接,再做一些设置告诉系统使用这个最新的版本。如果解压过程出现错误,继续执行后面的操作就会破坏 Java 环境,使其无法使用。因而,这种情况下需要终止程序。如果解压过程没有成功,就不应该再继续进行之后的更新操作。下面的语句段可以完成这个功能。
```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
     echo "Installation failed - exiting."
     exit 1
fi
```
下面的一行语句组可以快速展示变量 **$?** 的用法。
```
ls T; ec=$?; echo $ec
```
先用 **touch T** 命令创建一个文件名为 **T** 的文件,然后执行这行示例命令,变量 **ec** 的值会是 0。然后,用 **rm T** 命令删除文件,再执行这行示例命令,变量 **ec** 的值会是 2,因为文件 T 不存在,命令 **ls** 找不到指定文件而报错,相应返回值是 2。
在逻辑条件里利用这个出错标识,参照前文我使用的条件判断,可以使脚本文件按需完成设定操作。
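顺带一提,除了显式检查 **$?**,Shell 还支持 `&&` 和 `||` 短路写法,可以更紧凑地表达“失败即退出”。下面是一个小示例(这里用临时目录演示,并非原文的 JDK 安装命令):

```shell
# 演示:命令失败时立即报错退出的紧凑写法
workdir=$(mktemp -d)

# 写法一:cmd || { 错误处理; exit 1; }
mkdir "$workdir/jdk" || { echo "mkdir failed - exiting."; exit 1; }

# 写法二:cmd && 后续命令(只有前一条成功才执行)
[ -d "$workdir/jdk" ] && echo "directory ready"
```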
### 结语
要完成复杂的功能,或许我们觉得应该使用诸如 Python、C 或 Java 这类的高级编程语言,然而并不尽然,脚本编程语言也很强大,可以完成类似任务。要充分发挥脚本的作用,有很多需要学习的,希望这里的几个例子能让你认识到脚本编程的强大功能。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/improve-bash-scripts
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/fisherue)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl "工作者图片"


@ -0,0 +1,421 @@
[#]: collector: (lujun9972)
[#]: translator: (YungeG)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding systemd at startup on Linux)
[#]: via: (https://opensource.com/article/20/5/systemd-startup)
[#]: author: (David Both https://opensource.com/users/dboth)
在 Linux 启动时理解 systemd
======
systemd 启动过程提供的重要线索可以在问题出现时助你一臂之力。
![People at the start line of a race][1]
在本系列的第一篇文章 [_学着爱上 systemd_][2] 中,我考察了 systemd 的功能和架构,以及围绕 systemd 作为古老的 SystemV 初始化程序和启动脚本的替代品的争论。在这第二篇文章中,我将开始探索管理 Linux 启动序列的文件和工具。我会解释 systemd 启动序列、如何更改默认的启动目标(SystemV 术语中的运行级别)、以及在不重启的情况下如何手动切换到不同的目标。
我还将考察两个重要的 systemd 工具。第一个 **systemctl** 命令是和 systemd 交互、向其发送命令的基本方式。第二个是 **journalctl**,用于访问 systemd 日志,后者包含了大量系统历史数据,比如内核和服务的消息(包括指示性信息和错误信息)。
务必使用一个非生产系统进行本文和后续文章中的测试和实验。你的测试系统需要安装一个 GUI 桌面(比如 Xfce、LXDE、GNOME、KDE 或其他)。
上一篇文章中我写道计划在这篇文章创建一个 systemd 单元并添加到启动序列。由于这篇文章比我预期中要长,这些内容将留到本系列的下一篇文章。
### 使用 systemd 探索 Linux 的启动
在观察启动序列之前,你需要做几件事情,使引导和启动序列清晰可见。正常情况下,大多数发行版使用开机动画或者启动画面隐藏 Linux 启动和关机过程中的显示细节,在基于 Red Hat 的发行版中它称作 Plymouth 引导画面。这些被隐藏的消息能够向系统管理员提供大量有关系统启动和关闭的信息,无论是为了排除程序故障,还是只为了学习启动序列。你可以通过 GRUB(大一统引导加载器,Grand Unified Boot Loader)配置改变这个设置。
主要的 GRUB 配置文件是 **/boot/grub2/grub.cfg**,但是这个文件在更新内核版本时会被覆盖,你不会想修改它的。相反,应该修改用于改变 **grub.cfg** 默认设置的 **/etc/default/grub** 文件。
**/etc/default/grub** 文件尚未修改的版本看起来如下:
```
[root@testvm1 ~]# cd /etc/default ; cat grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm.
lv=fedora_testvm1/root rd.lvm.lv=fedora_testvm1/swap rd.lvm.lv=fedora_
testvm1/usr rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
[root@testvm1 default]#
```
[GRUB 文档][3]的第 6 章列出了 **/etc/default/grub** 文件的所有可用项,我只关注下面的部分:
* 我将 GRUB 菜单倒计时的秒数 **GRUB_TIMEOUT**,从 5 改成 10以便在倒计时达到 0 之前有更多的时间响应 GRUB 菜单。
* **GRUB_CMDLINE_LINUX** 列出了启动阶段传递给内核的命令行参数,我删除了其中的最后两个参数。其中的一个参数 **rhgb** 代表 Red Hat Graphical Boot在内核初始化阶段显示一个小小的 Fedora 图标动画,而不是显示启动阶段的信息。另一个参数 **quiet**,屏蔽记录启动进度和发生错误的消息。系统管理员需要这些信息,因此我删除了 **rhgb****quiet**。如果启动阶段发生了错误,屏幕上显示的信息可以指向故障的原因。
更改之后,你的 GRUB 文件将会像下面一样:
```
[root@testvm1 default]# cat grub
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm.
lv=fedora_testvm1/root rd.lvm.lv=fedora_testvm1/swap rd.lvm.lv=fedora_
testvm1/usr"
GRUB_DISABLE_RECOVERY="false"
[root@testvm1 default]#
```
**grub2-mkconfig** 程序使用 **/etc/default/grub** 文件的内容生成 **grub.cfg** 配置文件,从而改变一些默认的 GRUB 设置。**grub2-mkconfig** 输出到 **STDOUT**,你可以使用程序的 **-o** 参数指明数据流输出的文件,不过使用重定向也同样简单。执行下面的命令更新 **/boot/grub2/grub.cfg** 配置文件:
```
[root@testvm1 grub2]# grub2-mkconfig > /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.18.9-200.fc28.x86_64
Found initrd image: /boot/initramfs-4.18.9-200.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.17.14-202.fc28.x86_64
Found initrd image: /boot/initramfs-4.17.14-202.fc28.x86_64.img
Found linux image: /boot/vmlinuz-4.16.3-301.fc28.x86_64
Found initrd image: /boot/initramfs-4.16.3-301.fc28.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-7f12524278bd40e9b10a085bc82dc504
Found initrd image: /boot/initramfs-0-rescue-7f12524278bd40e9b10a085bc82dc504.img
done
[root@testvm1 grub2]#
```
重新启动你的测试系统查看本来会隐藏在 Plymouth 开机动画之下的启动信息。但是如果你没有关闭开机动画,又需要查看启动信息的话又该如何操作?或者你关闭了开机动画,而消息流过的速度太快,无法阅读怎么办?(实际情况如此。)
有两个解决方案,都涉及到日志文件和 systemd 日志——两个都是你的好伙伴。你可以使用 **less** 命令查看 **/var/log/messages** 文件的内容。这个文件包含引导和启动信息,以及操作系统执行正常操作时生成的信息。你也可以使用不加任何参数的 **journalctl** 命令查看 systemd 日志,包含基本相同的信息:
```
[root@testvm1 grub2]# journalctl
-- Logs begin at Sat 2020-01-11 21:48:08 EST, end at Fri 2020-04-03 08:54:30 EDT. --
Jan 11 21:48:08 f31vm.both.org kernel: Linux version 5.3.7-301.fc31.x86_64 ([mockbuild@bkernel03.phx2.fedoraproject.org][4]) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #1 SMP Mon Oct >
Jan 11 21:48:08 f31vm.both.org kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.3.7-301.fc31.x86_64 root=/dev/mapper/VG01-root ro resume=/dev/mapper/VG01-swap rd.lvm.lv=VG01/root rd>
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-provided physical RAM map:
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000dffeffff] usable
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000dfff0000-0x00000000dfffffff] ACPI data
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000100000000-0x000000041fffffff] usable
Jan 11 21:48:08 f31vm.both.org kernel: NX (Execute Disable) protection: active
Jan 11 21:48:08 f31vm.both.org kernel: SMBIOS 2.5 present.
Jan 11 21:48:08 f31vm.both.org kernel: DMI: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
Jan 11 21:48:08 f31vm.both.org kernel: Hypervisor detected: KVM
Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: cpu 0, msr 30ae01001, primary cpu clock
Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: using sched offset of 8250734066 cycles
Jan 11 21:48:08 f31vm.both.org kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 11 21:48:08 f31vm.both.org kernel: tsc: Detected 2807.992 MHz processor
Jan 11 21:48:08 f31vm.both.org kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 11 21:48:08 f31vm.both.org kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
```
由于数据流可能长达几百甚至几百万行,我在这里截断了它。(我的主要工作站上列出的日志长度是 1,188,482 行。)一定要在你的测试系统尝试这个命令。如果系统已经运行了一段时间——即使重启过很多次——还是会显示大量的数据。进行问题诊断时查看这个日志数据,因为其中包含了很多可能十分有用的信息。了解这个数据文件在正常的引导和启动过程中的模样,可以帮助你在问题出现时定位问题。
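排错时通常不需要翻阅全部日志。下面是一个小示例,只查看本次启动(`-b`)中优先级为 err(3)及以上的消息;示例做了保护,在没有 systemd 日志的系统上只打印提示信息:

```shell
# 只显示本次启动中错误级别及以上的消息(这里截取前 10 行)
if command -v journalctl >/dev/null 2>&1; then
  journalctl -b -p err --no-pager 2>/dev/null | head -n 10
else
  echo "journalctl not available"
fi
```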
我将在本系列之后的文章讨论 systemd 日志、**journalctl** 命令、以及如何筛选输出的日志数据来寻找更详细的信息。
内核被 GRUB 加载到内存后,必须先将自己从压缩后的文件中解压出来,才能执行任何有意义的操作。解压自己后,内核开始运行,加载 systemd 并转交控制权。
引导阶段到此结束,此时 Linux 内核和 systemd 正在运行,但是无法为用户执行任何生产性任务,因为其他的程序都没有执行,没有命令行解释器提供命令行,没有后台进程管理网络和其他的通信链接,也没有任何东西能够控制计算机执行生产功能。
现在 systemd 可以加载所需的功能性单元以便将系统启动到选择的目标运行状态。
### 目标
一个 systemd 目标代表一个 Linux 系统当前的或期望的运行状态。与 SystemV 启动脚本十分类似,目标定义了系统运行必须存在的服务,以及处于目标状态下必须激活的服务。图片 1 展示了使用 systemd 的 Linux 系统可能的运行状态目标。就像在本系列的第一篇文章以及 systemd 启动的手册页(`man bootup`)所看到的一样,有一些开启不同必要服务的其他中间目标,包括 **swap.target**、**timers.target**、**local-fs.target** 等。一些目标(像 **basic.target**)作为检查点使用,在移动到下一个更高级的目标之前保证所有需要的服务已经启动并运行。
除非开机时在 GRUB 菜单进行更改systemd 总是启动 **default.target**。**default.target** 文件是指向真实的目标文件的符号链接。对于桌面工作站,**default.target** 通常是 **graphical.target**,等同于 SystemV 的运行等级 5。对于服务器默认目标多半是 **multi-user.target**,就像 SystemV 的运行等级 3。**emergency.target** 文件类似单用户模式。目标和服务都是 systemd 单元。
下面的表格,包含在本系列的上一篇文章中,比较了 systemd 目标和古老的 SystemV 启动运行等级。为了向后兼容systemd 提供了 systemd 目标别名,允许脚本——和系统管理员——使用像 **init 3** 一样的 SystemV 命令改变运行等级。当然SystemV 命令被转发给 systemd 进行解释和执行。
**systemd 目标** | **SystemV 运行等级** | **目标别名** | **描述**
---|---|---|---
default.target | | | 这个目标通常是一个符号链接,作为 **multi-user.target** 或 **graphical.target** 的别名。systemd 总是用 **default.target** 启动系统。**default.target** 不能是 **halt.target**、**poweroff.target** 和 **reboot.target** 的别名。
graphical.target | 5 | runlevel5.target | 带有 GUI 的 **Multi-user.target**
| 4 | runlevel4.target | 未使用。运行等级 4 和 SystemV 的运行等级 3 一致,可以创建这个目标并进行定制,用于启动本地服务,而不必更改默认的 **multi-user.target**
multi-user.target | 3 | runlevel3.target | 运行所有的服务但是只有命令行接口command-line interfaceCLI
| 2 | runlevel2.target | 多用户,没有 NFS但是运行其他所有的非 GUI 服务
rescue.target | 1 | runlevel1.target | 一个基本的系统,包括挂载文件系统,但是只运行最基础的服务,以及一个主控制台上的救援命令行解释器
emergency.target | S | | 单用户模式——没有服务运行;文件系统没有挂载。这是最基础级的操作模式,只有一个运行在主控制台的紧急情况命令行解释器,供用户和系统交互。
halt.target | | | 不断电的情况下停止系统
reboot.target | 6 | runlevel6.target | 重启
poweroff.target | 0 | runlevel0.target | 停止系统并关闭电源
每个目标在配置文件中都描述了一组依赖关系。systemd 启动需要的依赖,即 Linux 主机运行在特定功能级别所需的服务。加载目标配置文件中列出的所有依赖并运行后,系统就运行在那个目标等级。如果愿意,你可以在本系列的第一篇文章 [_学着爱上 systemd_][2] 中回顾 systemd 的启动序列和运行时目标。
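要直观查看某个目标的依赖树,可以使用 `systemctl list-dependencies`。下面的示例做了保护,在没有 systemd 的系统上只打印提示信息:

```shell
# 列出 graphical.target 的依赖树(输出较长,这里只截取前 10 行)
if command -v systemctl >/dev/null 2>&1; then
  systemctl list-dependencies graphical.target 2>/dev/null | head -n 10
else
  echo "systemctl not available"
fi
```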
### 探索当前的目标
许多 Linux 发行版默认安装一个 GUI 桌面接口,以便安装的系统可以像工作站一样使用。我总是从 Fedora Live USB 引导驱动器安装 Xfce 或 LXDE 桌面。即使是安装一个服务器或者其他基础类型的主机(比如用于路由器和防火墙的主机),我也使用 GUI 桌面的安装方式。
我可以安装一个没有桌面的服务器(数据中心的典型做法),但是这样不满足我的需求。原因不是我需要 GUI 桌面本身,而是 LXDE 安装包含了许多其他默认的服务器安装没有提供的工具,这意味着初始安装之后我需要做的工作更少。
但是,仅仅因为有一个 GUI 桌面并不意味着我要使用它。我有一个 16 端口的 KVM可以用于访问我的大部分 Linux 系统的 KVM 接口,但我和它们交互的大部分交互是通过从我的主要工作站建立的远程 SSH 连接。这种方式更安全,而且和 **graphical.target** 相比,运行 **multi-user.target** 使用更少的系统资源。
首先,检查默认目标,确认是 **graphical.target**
```
[root@testvm1 ~]# systemctl get-default
graphical.target
[root@testvm1 ~]#
```
然后确认当前正在运行的目标,应该和默认目标相同。你仍可以使用老方法,输出古老的 SystemV 运行等级。注意,前一个运行等级在左边,这里是 **N**(意思是 None表示主机启动后没有修改过运行等级。数字 5 是当前的目标,正如古老的 SystemV 术语中的定义:
```
[root@testvm1 ~]# runlevel
N 5
[root@testvm1 ~]#
```
注意runlevel 的手册页指出运行等级已经被淘汰,并提供了一个转换表。
你也可以使用 systemd 方式,命令的输出有很多行,但确实用 systemd 术语提供了答案:
```
[root@testvm1 ~]# systemctl list-units --type target
UNIT                   LOAD   ACTIVE SUB    DESCRIPTION                
basic.target           loaded active active Basic System              
cryptsetup.target      loaded active active Local Encrypted Volumes    
getty.target           loaded active active Login Prompts              
graphical.target       loaded active active Graphical Interface        
local-fs-pre.target    loaded active active Local File Systems (Pre)  
local-fs.target        loaded active active Local File Systems        
multi-user.target      loaded active active Multi-User System          
network-online.target  loaded active active Network is Online          
network.target         loaded active active Network                    
nfs-client.target      loaded active active NFS client services        
nss-user-lookup.target loaded active active User and Group Name Lookups
paths.target           loaded active active Paths                      
remote-fs-pre.target   loaded active active Remote File Systems (Pre)  
remote-fs.target       loaded active active Remote File Systems        
rpc_pipefs.target      loaded active active rpc_pipefs.target          
slices.target          loaded active active Slices                    
sockets.target         loaded active active Sockets                    
sshd-keygen.target     loaded active active sshd-keygen.target        
swap.target            loaded active active Swap                      
sysinit.target         loaded active active System Initialization      
timers.target          loaded active active Timers                    
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
21 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
```
上面列出了当前加载的和激活的目标,你也可以看到 **graphical.target****multi-user.target**。**multi-user.target** 需要在 **graphical.target** 之前加载。这个例子中,**graphical.target** 是激活的。
### 切换到不同的目标
切换到 **multi-user.target** 很简单:
```
[root@testvm1 ~]# systemctl isolate multi-user.target
```
显示器现在应该从 GUI 桌面或登录界面切换到了一个虚拟控制台。登录并列出当前激活的 systemd 单元,确认 **graphical.target** 不再运行:
```
[root@testvm1 ~]# systemctl list-units --type target
```
务必使用 **runlevel** 命令确认其输出了之前的和当前的“运行等级”:
```
[root@testvm1 ~]# runlevel
5 3
```
### 更改默认目标
现在,将默认目标改为 **multi-user.target**,以便系统总是启动进入 **multi-user.target**,从而使用控制台命令行接口而不是 GUI 桌面接口。使用你的测试主机的根用户,切换到保存 systemd 配置的目录,执行一次快速列出操作:
```
[root@testvm1 ~]# cd /etc/systemd/system/ ; ll
drwxr-xr-x. 2 root root 4096 Apr 25  2018  basic.target.wants
<snip>
lrwxrwxrwx. 1 root root   36 Aug 13 16:23  default.target -> /lib/systemd/system/graphical.target
lrwxrwxrwx. 1 root root   39 Apr 25  2018  display-manager.service -> /usr/lib/systemd/system/lightdm.service
drwxr-xr-x. 2 root root 4096 Apr 25  2018  getty.target.wants
drwxr-xr-x. 2 root root 4096 Aug 18 10:16  graphical.target.wants
drwxr-xr-x. 2 root root 4096 Apr 25  2018  local-fs.target.wants
drwxr-xr-x. 2 root root 4096 Oct 30 16:54  multi-user.target.wants
<snip>
[root@testvm1 system]#
```
为了强调一些有助于解释 systemd 如何管理启动过程的重要事项,我缩短了这个列表。你应该可以在虚拟机看到完整的目录和链接列表。
**default.target** 项是指向 **/lib/systemd/system/graphical.target** 文件的符号链接(软链接),列出那个目录查看其中的其他内容:
```
[root@testvm1 system]# ll /lib/systemd/system/ | less
```
你应该在这个列表中看到文件、目录、以及更多链接,但是专门寻找一下 **multi-user.target****graphical.target**。现在列出 **default.target**——一个指向 **/lib/systemd/system/graphical.target** 的链接——的内容:
```
[root@testvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes
[root@testvm1 system]#
```
这个指向 **graphical.target** 文件的链接描述了图形用户接口需要的所有必备条件。我会在本系列的下一篇文章中至少探讨其中的一些选项。
为了使主机启动到多用户模式,你需要删除已有的链接,创建一个新链接指向正确目标。如果你的 [PWD][5] 不是 **/etc/systemd/system**,切换过去:
```
[root@testvm1 system]# rm -f default.target
[root@testvm1 system]# ln -s /lib/systemd/system/multi-user.target default.target
```
列出 **default.target** 链接,确认其指向了正确的文件:
```
[root@testvm1 system]# ll default.target
lrwxrwxrwx 1 root root 37 Nov 28 16:08 default.target -> /lib/systemd/system/multi-user.target
[root@testvm1 system]#
```
如果你的链接看起来不一样,删除并重试。列出 **default.target** 链接的内容:
```
[root@testvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
[Unit]
Description=Multi-User System
Documentation=man:systemd.special(7)
Requires=basic.target
Conflicts=rescue.service rescue.target
After=basic.target rescue.service rescue.target
AllowIsolate=yes
[root@testvm1 system]#
```
**default.target**——这里其实是指向 **multi-user.target** 的链接——其中的 **[Unit]** 部分现在有不同的必需条件。这个目标不需要有图形显示管理器。
重启后,你的虚拟机应该进入虚拟控制台 1 上的控制台登录界面(虚拟控制台 1 在显示器上标识为 tty1。现在你已经知道了如何修改默认目标接下来用专门用于此目的的命令把默认目标改回 **graphical.target**。
首先检查当前的默认目标,然后设置新的默认目标:
```
[root@testvm1 ~]# systemctl get-default
multi-user.target
[root@testvm1 ~]# systemctl set-default graphical.target
Removed /etc/systemd/system/default.target.
Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/graphical.target.
[root@testvm1 ~]#
```
输入下面的命令,无需重启即可直接切换到 **graphical.target** 和显示管理器的登录界面:
```
[root@testvm1 system]# systemctl isolate default.target
```
我不清楚 systemd 的开发者为何给这个子命令选择了 “isolate” 这个术语。我的调研表明,它的含义可能是:运行指定的目标,同时“隔离”并终止该目标不需要的其他所有单元。不过,这条命令的实际效果就是从一个运行中的目标切换到另一个目标——在本例中,从多用户目标切换到图形目标。上面的命令等同于 SystemV 启动脚本和 init 程序时代古老的 `init 5` 命令。
登录 GUI 桌面,确认能正常工作。
### 总结
本文探索了 Linux systemd 启动序列,开始探讨了两个重要的 systemd 工具 **systemctl** 和 **journalctl**,还说明了如何从一个目标切换到另一个目标,以及如何修改默认目标。
本系列的下一篇文章中将会创建一个新的 systemd 单元,并配置为启动阶段运行。下一篇文章还会查看一些配置选项,可以帮助确定某个特定的单元在序列中启动的位置,比如在网络启动运行后。
### 资源
网络上关于 systemd 的信息很多,但大部分都过于简略、晦涩、甚至令人误解。除了本文提到的资源,下面的网页还提供了关于 systemd 启动的更详细、可靠的信息。
* Fedora 项目有一个优质实用的 [systemd 指南][6],几乎有你使用 systemd 配置、管理、维护一个 Fedora 计算机需要知道的一切。
* Fedora 项目还有一个好用的[速查表][7],交叉引用了古老的 SystemV 命令和对应的 systemd 命令。
* 要获取 systemd 的详细技术信息和创立的原因,查看 [Freedesktop.org][8] 的 [systemd 描述][9]。
* Linux.com 上的 “systemd 的更多乐趣” 提供了更高级的 systemd [信息和提示][11]。
还有一系列针对系统管理员的深层技术文章,由 systemd 的设计者和主要开发者 Lennart Poettering 所作。这些文章写于 2010 年 4 月到 2011 年 9 月之间,但在当下仍然像当时一样有价值。关于 systemd 及其生态的许多其他优秀的作品都是基于这些文章的。
* [Rethinking PID 1][12]
* [systemd for Administrators, Part I][13]
* [systemd for Administrators, Part II][14]
* [systemd for Administrators, Part III][15]
* [systemd for Administrators, Part IV][16]
* [systemd for Administrators, Part V][17]
* [systemd for Administrators, Part VI][18]
* [systemd for Administrators, Part VII][19]
* [systemd for Administrators, Part VIII][20]
* [systemd for Administrators, Part IX][21]
* [systemd for Administrators, Part X][22]
* [systemd for Administrators, Part XI][23]
Mentor Graphics 公司的一位 Linux 内核和系统工程师 Alison Chiaken对 systemd 的发展进行了展望...
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/systemd-startup
作者:[David Both][a]
选题:[lujun9972][b]
译者:[YungeG](https://github.com/YungeG)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/start_line.jpg?itok=9reaaW6m (People at the start line of a race)
[2]: https://opensource.com/article/20/4/systemd
[3]: http://www.gnu.org/software/grub/manual/grub
[4]: mailto:mockbuild@bkernel03.phx2.fedoraproject.org
[5]: https://en.wikipedia.org/wiki/Pwd
[6]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html
[7]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
[8]: http://Freedesktop.org
[9]: http://www.freedesktop.org/wiki/Software/systemd
[10]: http://Linux.com
[11]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/
[12]: http://0pointer.de/blog/projects/systemd.html
[13]: http://0pointer.de/blog/projects/systemd-for-admins-1.html
[14]: http://0pointer.de/blog/projects/systemd-for-admins-2.html
[15]: http://0pointer.de/blog/projects/systemd-for-admins-3.html
[16]: http://0pointer.de/blog/projects/systemd-for-admins-4.html
[17]: http://0pointer.de/blog/projects/three-levels-of-off.html
[18]: http://0pointer.de/blog/projects/changing-roots
[19]: http://0pointer.de/blog/projects/blame-game.html
[20]: http://0pointer.de/blog/projects/the-new-configuration-files.html
[21]: http://0pointer.de/blog/projects/on-etc-sysinit.html
[22]: http://0pointer.de/blog/projects/instances.html
[23]: http://0pointer.de/blog/projects/inetd.html

[#]: subject: "Access OpenVPN from a client computer"
[#]: via: "https://opensource.com/article/21/7/openvpn-client"
[#]: author: "D. Greg Scott https://opensource.com/users/greg-scott"
[#]: collector: "lujun9972"
[#]: translator: "perfiffer"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
从客户端计算机连接到 0penVPN
======
在 Linux 上安装好 VPN 之后,是时候使用它了。
![Woman programming][1]
0penVPN 在两点之间创建一个加密通道,阻止第三方访问你的网络流量数据。通过设置你的 “虚拟专用网络” 服务,你可以成为你自己的 “虚拟专用网络” 服务商。许多流行的 “虚拟专用网络” 服务都使用 0penVPN所以当你可以掌控自己的网络时为什么还要将你的网络连接绑定到特定的提供商呢
本系列的 [第一篇文章][3] 安装了一个 VPN 的服务器,[第二篇文章][4] 介绍了如何安装和配置一个 0penVPN 服务软件,[第三篇文章][5] 解释了如何配置防火墙并启动你的 0penVPN 服务。第四篇也是最后一篇文章将演示如何从客户端计算机使用你的 0penVPN 服务器。这就是你做了前三篇文章中所有工作的原因!
### 创建客户端证书
请记住0penVPN 的身份验证方法要求服务器和客户端都拥有某些东西(证书)并知道某些东西(密码)。是时候设置它了。
首先,为你的客户端计算机创建一个客户端证书和一个私钥。在你的 0penVPN 服务器上,生成证书请求。它会要求你输入密码;确保你记住它:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
gen-req greglaptop
```
本例中,`greglaptop` 是创建证书的客户端计算机主机名。
无需将请求导入证书颁发机构CA因为它已经存在。审查它以确保请求存在
```
$ cd /etc/openvpn/ca
$ /etc/openvpn/easy-rsa/easyrsa \
show-req greglaptop
```
接下来,以客户端client类型签署该请求
```
$ /etc/openvpn/easy-rsa/easyrsa \
sign-req client greglaptop
```
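如果你好奇 easy-rsa 的 `gen-req`/`sign-req` 在背后做了什么,可以用 openssl 在临时目录里演练一遍“建立 CA、签发客户端证书、验证证书链”的完整流程以下命令与 CN 名称仅为演示假设,并非你服务器上的真实配置):

```shell
#!/bin/sh
# 演示:临时建立一个 CA签发并验证一张客户端证书
dir=$(mktemp -d)
cd "$dir"

# 1. 生成自签名的 CA 证书和私钥
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -subj "/CN=Demo-CA" -days 1

# 2. 生成客户端私钥和证书请求(对应 easy-rsa 的 gen-req
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
    -subj "/CN=greglaptop"

# 3. 用 CA 签发客户端证书(对应 easy-rsa 的 sign-req
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out client.crt -days 1

# 4. 验证客户端证书确实由该 CA 签发
openssl verify -CAfile ca.crt client.crt
```

最后一步输出 `client.crt: OK` 即验证通过;在把证书复制到客户端之前,这是一个方便的自检手段。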
### 安装 0penVPN 客户端软件
在 Linux 系统上,网络管理器可能已经包含了一个 0penVPN 客户端。如果没有,你可以安装插件:
```
$ sudo dnf install NetworkManager-openvpn
```
在 Windows 系统上,你必须从 0penVPN 的下载页面下载并安装 0penVPN 客户端。启动安装程序并按照提示操作。
### 复制证书和私钥到客户端
现在你的客户端需要你为其生成的身份验证凭据。你在服务器上生成了这些,因此你必须将它们传输到你的客户端。我推荐使用 SSH 来完成传输。在 Linux 系统上,通过 `scp` 命令实现。在 Windows 系统上,你可以以管理员身份运行 [WinSCP][6] 来推送证书和密钥。
假设客户端名称为 `greglaptop`,那么证书和私钥的文件名以及它们在服务器上的位置如下:
```
/etc/openvpn/ca/pki/issued/greglaptop.crt
/etc/openvpn/ca/pki/private/greglaptop.key
/etc/openvpn/ca/pki/ca.crt
```
在 Linux 系统上,复制这些文件到 `/etc/pki/tls/certs` 目录。在 Windows 系统上,复制它们到 `C:\Program Files\OpenVPN\config` 目录。
### 复制和自定义客户端配置文件
在 Linux 系统上,你可以复制服务器上的 `/etc/openvpn/client/OVPNclient2020.ovpn` 文件到 `/etc/NetworkManager/system-connections/` 目录,或者你也可以导航到系统设置中的网络管理器添加一个 VPN 连接。
连接类型选择 **证书**。告知网络管理器你从服务器上复制的证书和密钥。
![VPN displayed in Network Manager][7]
在 Windows 系统上,以管理员身份运行 WinSCP将服务器上的客户端配置模板 `/etc/openvpn/client/OVPNclient2020.ovpn` 文件复制到客户端上的 `C:\Program Files\OpenVPN\config` 目录。然后:
* 重命名它以匹配上面的证书。
* 更改 CA 证书、客户端证书和密钥的名称以匹配上面从服务器复制的名称。
* 修改 IP 信息,以匹配你的网络。
你需要超级管理员权限来编辑客户端配置文件。最简单的方式就是以管理员身份启动一个 CMD 窗口,然后从管理员 CMD 窗口启动记事本来编辑此文件。
### 将你的客户端连接到服务器
在 Linux 系统上,网络管理器会显示你的 VPN 连接。选择它进行连接。
![Add a VPN connection in Network Manager][9]
在 Windows 系统上,启动 0penVPN 图形用户界面 (GUI)。它会在任务栏右侧的 Windows 系统托盘中生成一个图标,通常位于 Windows 桌面的右下角。右键单击图标以连接、断开连接或查看状态。
第一次连接时,先编辑客户端配置文件的 `remote` 行,使用 0penVPN 服务器的内部 IP 地址。通过右键单击 Windows 系统托盘中的 0penVPN GUI 并单击 **连接**,从办公室网络内部连接到服务器。由于此时客户端和服务器位于防火墙的同一侧,调试这个连接可以在排除防火墙干扰的情况下发现并解决问题。
接下来,编辑客户端配置文件的 `remote` 行,改用 0penVPN 服务器的公共 IP 地址,再把 Windows 客户端接入外部网络进行连接,并调试可能出现的问题。
### 安全连接
恭喜!你已经为其他客户端系统准备好了 0penVPN 网络。对其余客户端重复设置步骤。你甚至可以使用 Ansible 来分发证书和密钥并使其保持最新。
* * *
本文基于 D.Greg Scott 的 [博客][10],经许可后重新使用。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/openvpn-client
作者:[D. Greg Scott][a]
选题:[lujun9972][b]
译者:[perfiffer](https://github.com/perfiffer)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-scott
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
[2]: https://openvpn.net/
[3]: https://linux.cn/article-13680-1.html
[4]: https://linux.cn/article-13702-1.html
[5]: https://linux.cn/article-13707-1.html
[6]: https://winscp.net/eng/index.php
[7]: https://opensource.com/sites/default/files/uploads/network-manager-profile.jpg (VPN displayed in Network Manager)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/network-manager-connect.jpg (Add a VPN connection in Network Manager)
[10]: https://www.dgregscott.com/how-to-build-a-vpn-in-four-easy-steps-without-spending-one-penny/

[#]: subject: "Change your Linux Desktop Wallpaper Every Hour [Heres How]"
[#]: via: "https://www.debugpoint.com/2021/08/change-wallpaper-every-hour/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何每小时改变你的 Linux 桌面墙纸
======
这个 shell 脚本 styli.sh 可以帮助你每小时自动改变你的 Linux 桌面壁纸,并且有几个选项。
用一张漂亮的壁纸开始新的一天,会让你的桌面耳目一新。但寻找壁纸、保存下来、再设置为壁纸,这一连串步骤非常麻烦。而这一切都可以交给这个叫做 [styli.sh][1] 的脚本来完成。
### styli.sh - 每小时改变你的 Linux 桌面壁纸
这是一个 shell 脚本,你可以从 GitHub 上下载。当运行时,它从 Reddit 的热门 Subreddits 中获取壁纸并将其设置为你的壁纸。
该脚本适用于所有流行的桌面环境,如 GNOME、KDE Plasma、Xfce 和 Sway 窗口管理器。
它有很多功能,你可以通过 crontab 来运行这个脚本,并在特定的时间间隔内得到一张新的墙纸。
### 下载并安装、运行
打开一个终端,并克隆 GitHub 仓库。如果没有安装的话,你需要安装 [feh][2] 和 git。
```
git clone https://github.com/thevinter/styli.sh
cd styli.sh
```
要设置一张随机壁纸,根据你的桌面环境运行对应的命令:
![Change your Linux Desktop Wallpaper Every Hour using styli.sh][3]
```
./styli.sh -g    # GNOME
./styli.sh -x    # Xfce
./styli.sh -k    # KDE Plasma
./styli.sh -y    # Sway
```
### 每小时改变一次
要每小时改变背景,请运行以下命令:
```
crontab -e
```
并在打开的文件中加入以下内容。不要忘记改变脚本路径。
```
@hourly script/path/styli.sh
```
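`@hourly` 只是标准五字段写法的简写,二者等价(脚本路径同样要换成你自己的):

```
# 每小时的第 0 分钟运行一次,与 @hourly 等价
0 * * * * script/path/styli.sh
```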
### 改变 subreddits
在源码目录中,有一个名为 `subreddits` 的文件,其中预置了一些常用的 subreddit。如果你想要更多只需在文件末尾追加 subreddit 的名称。
### 更多配置选项
壁纸的类型,大小,也可以设置。以下是这个脚本的一些独特的配置选项。
> 设置一个随机的 1920×1080 背景
> ./styli.sh
>
> 指定一个所需的宽度或高度
> ./styli.sh -w 1080 -h 720
> ./styli.sh -w 2560
> ./styli.sh -h 1440
>
> 根据搜索词设置墙纸
> ./styli.sh -s island
> ./styli.sh -s "sea sunset"
> ./styli.sh -s sea -w 1080
>
> 从设定的一个 subreddits 中获得一个随机壁纸
> 注意:宽度/高度/搜索参数对 reddit 不起作用。
> ./styli.sh -l reddit
>
> 从一个自定义的 subreddit 获得随机墙纸
> ./styli.sh -r
> ./styli.sh -r wallpaperdump
>
> 使用内置的 feh -bg 选项
> ./styli.sh -b
> ./styli.sh -b bg-scale -r widescreen-wallpaper
>
> 添加自定义的 feh 标志
> ./styli.sh -c
> ./styli.sh -c no-xinerama -r widescreen-wallpaper
>
> 自动设置终端的颜色
> ./styli.sh -p
>
> 使用 nitrogen 而不是 feh
> ./styli.sh -n
>
> 使用 nitrogen 更新多于一个屏幕
> ./styli.sh -n -m
>
> 从一个目录中选择一个随机的背景
> ./styli.sh -d /path/to/dir
### 最后说明
这是一个独特且方便的脚本,内存占用小,可以按设定的时间间隔(比如每小时)直接获取图片,让你的桌面看起来[新鲜且高效][4]。如果你不喜欢当前的壁纸,只需从终端再次运行脚本,就能换下一张。
你喜欢这个脚本吗?或者你知道有什么像这样的壁纸切换器吗?请在下面的评论栏里告诉我。
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/change-wallpaper-every-hour/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://github.com/thevinter/styli.sh
[2]: https://feh.finalrewind.org/
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Change-your-Linux-Desktop-Wallpaper-Every-Hour-using-styli.sh_.jpg
[4]: https://www.debugpoint.com/category/themes

[#]: subject: "Monitor your Linux system in your terminal with procps-ng"
[#]: via: "https://opensource.com/article/21/8/linux-procps-ng"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
用 procps-ng 在终端监控你的 Linux 系统
======
本文演示如何找到一个程序的进程 IDPID。最常用的 Linux 工具由 procps-ng 包提供,包括 ps、pstree、pidof 和 pgrep 命令。
![System monitor][1]
在 [POSIX][2] 术语中,进程是一个正在进行的事件,由操作系统的内核管理。当你启动一个应用时就会产生一个进程,不过你的计算机后台还运行着许多其他进程,包括保持系统时间准确的程序、监测新文件系统的程序、为文件建立索引的程序,等等。
大多数操作系统都有某种类型的系统活动监视器因此你可以了解在任何特定时刻有哪些进程在运行。Linux 有一些供你选择,包括 GNOME 系统监视器和 KSysGuard。这两个软件在桌面上都很有用但 Linux 也提供了在终端监控系统的能力。不管你选择哪一种,对于那些积极管理自己电脑的人来说,检查一个特定的进程是一项常见的任务。
在这篇文章中,我演示了如何找到一个程序的进程 IDPID。最常见的工具是由 [procps-ng][3] 包提供的,包括 `ps`、`pstree`、`pidof` 和 `pgrep` 命令。
### 查找一个正在运行的程序的 PID
有时你想得到一个你知道正在运行的特定程序的进程 IDPID。`pidof` 和 `pgrep` 命令通过命令名称查找进程。
`pidof` 命令返回一个命令的 PID按名称搜索确切的命令
```
$ pidof bash
1776 5736
```
`pgrep` 命令允许使用正则表达式regex
```
$ pgrep .sh
1605
1679
1688
1776
2333
5736
$ pgrep bash
5736
```
### 通过文件查找 PID
你可以用 `fuser` 命令找到使用特定文件的进程的 PID。
```
$ fuser --user ~/example.txt
/home/tux/example.txt: 3234(tux)
```
### 通过 PID 获得进程名称
如果你有一个进程的 PID _编号_,但没有生成它的命令,你可以用 `ps` 做一个“反向查找”:
```
$ ps 3234
PID TTY STAT TIME COMMAND
3234 pts/1 Ss 0:00 emacs
```
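如果只想要命令名本身,可以用 `-o` 指定输出列(`comm=` 中的等号会去掉表头);`$$` 在 shell 里代表当前 shell 自己的 PID正好可以用来演示

```shell
#!/bin/sh
# 只输出指定 PID 的命令名,不带表头
ps -o comm= -p $$
```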
### 列出所有进程
`ps` 命令列出进程。你可以用 `-e` 选项列出你系统上的每一个进程:
```
$ ps -e | less
PID TTY TIME CMD
1 ? 00:00:03 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:00 rcu_gp
4 ? 00:00:00 rcu_par_gp
6 ? 00:00:00 kworker/0:0H-events_highpri
[...]
5648 ? 00:00:00 gnome-control-c
5656 ? 00:00:00 gnome-terminal-
5736 pts/1 00:00:00 bash
5791 pts/1 00:00:00 ps
5792 pts/1 00:00:00 less
(END)
```
### 只列出你的进程
`ps -e` 的输出可能会让人不知所措,所以使用 `-U` 来查看一个用户的进程:
```
$ ps -U tux | less
PID TTY TIME CMD
3545 ? 00:00:00 systemd
3548 ? 00:00:00 (sd-pam)
3566 ? 00:00:18 pulseaudio
3570 ? 00:00:00 gnome-keyring-d
3583 ? 00:00:00 dbus-daemon
3589 tty2 00:00:00 gdm-wayland-ses
3592 tty2 00:00:00 gnome-session-b
3613 ? 00:00:00 gvfsd
3618 ? 00:00:00 gvfsd-fuse
3665 tty2 00:01:03 gnome-shell
[...]
```
这样需要从中筛选的进程就少了大约 200 个(上下可能相差百来个,取决于你所运行的系统)。
你可以用 `pstree` 命令以不同的格式查看同样的输出:
```
$ pstree -U tux -u --show-pids
[...]
├─gvfsd-metadata(3921)─┬─{gvfsd-metadata}(3923)
│ └─{gvfsd-metadata}(3924)
├─ibus-portal(3836)─┬─{ibus-portal}(3840)
│ └─{ibus-portal}(3842)
├─obexd(5214)
├─pulseaudio(3566)─┬─{pulseaudio}(3640)
│ ├─{pulseaudio}(3649)
│ └─{pulseaudio}(5258)
├─tracker-store(4150)─┬─{tracker-store}(4153)
│ ├─{tracker-store}(4154)
│ ├─{tracker-store}(4157)
│ └─{tracker-store}(4178)
└─xdg-permission-(3847)─┬─{xdg-permission-}(3848)
└─{xdg-permission-}(3850)
```
### 列出进程的上下文
你可以用 `-u` 选项查看你拥有的所有进程的额外上下文。
```
$ ps -U tux -u
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
tux 3545 0.0 0.0 89656 9708 ? Ss 13:59 0:00 /usr/lib/systemd/systemd --user
tux 3548 0.0 0.0 171416 5288 ? S 13:59 0:00 (sd-pam)
tux 3566 0.9 0.1 1722212 17352 ? S&lt;sl 13:59 0:29 /usr/bin/pulseaudio [...]
tux 3570 0.0 0.0 664736 8036 ? SLl 13:59 0:00 /usr/bin/gnome-keyring-daemon [...]
[...]
tux 5736 0.0 0.0 235628 6036 pts/1 Ss 14:18 0:00 bash
tux 6227 0.0 0.4 2816872 74512 tty2 Sl+ 14:30 0:00 /opt/firefox/firefox-bin [...]
tux 6660 0.0 0.0 268524 3996 pts/1 R+ 14:50 0:00 ps -U tux -u
tux 6661 0.0 0.0 219468 2460 pts/1 S+ 14:50 0:00 less
```
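`-u` 输出里的 %CPU 和 %MEM 列可以直接作为排序键,快速找出最占资源的进程(下面按内存排序取前 5 个,用户名用 `id -un` 取当前用户,仅为示意用法):

```shell
#!/bin/sh
# 按内存占用从高到低列出当前用户的进程,连同表头共 6 行
ps -U "$(id -un)" -u --sort=-%mem | head -n 6
```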
### 用 PID 排除故障
如果你在某个特定的程序上遇到问题,或者只是好奇某个程序还在你的系统上使用了哪些资源,可以用 `pmap` 查看该运行中进程的内存映射:
```
$ pmap 1776
1776: bash
000055f9060ec000 1056K r-x-- bash
000055f9063f3000 16K r---- bash
000055f906400000 40K rw--- [ anon ]
00007faf0fa67000 9040K r--s- passwd
00007faf1033b000 40K r-x-- libnss_sss.so.2
00007faf10345000 2044K ----- libnss_sss.so.2
00007faf10545000 4K rw--- libnss_sss.so.2
00007faf10546000 212692K r---- locale-archive
00007faf1d4fb000 1776K r-x-- libc-2.28.so
00007faf1d6b7000 2044K ----- libc-2.28.so
00007faf1d8ba000 8K rw--- libc-2.28.so
[...]
```
### 处理进程 ID
**procps-ng** 软件包提供了你随时调查和监控系统使用情况所需的全部命令。无论你是好奇 Linux 系统中各个分散的部分如何协同工作,还是要调查一个错误,或者想优化计算机的性能,学习这些命令都会让你在了解自己的操作系统方面占得先机。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-procps-ng
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/system-monitor-splash.png?itok=0UqsjuBQ (System monitor)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://gitlab.com/procps-ng

[#]: subject: "4 alternatives to cron in Linux"
[#]: via: "https://opensource.com/article/21/7/alternatives-cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux 中 cron 命令的 4 种替代方案
======
在 Linux 系统中有一些其他开源项目可以结合或者替代 cron 命令使用。
![Alarm clocks with different time][1]
[Linux `cron` 系统][2] 是一项经过时间检验的成熟技术,然而在任何情况下它都是最合适的系统自动化工具吗?答案是否定的。有一些开源项目就可以用来与 `cron` 结合或者直接代替 `cron` 使用。
### at 命令
`cron` 适用于长期重复任务。如果你设置了一个工作任务,它会从现在开始定期运行,直到计算机报废为止。但有些情况下你可能只想设置一个一次性命令,以备不在计算机旁时该命令可以自动运行。这时你可以选择使用 `at` 命令。
`at` 的语法比 `cron` 语法简单和灵活得多,并且兼具交互式和非交互式调度方法。(只要你想,你甚至可以使用 `at` 作业创建一个 `at` 作业。)
```
$ echo "rsync -av /home/tux/ me@myserver:/home/tux/" | at 1:30 AM
```
该命令语法自然且易用,并且不需要用户清理旧作业,因为它们一旦运行后就完全被计算机遗忘了。
阅读有关 [at 命令][3] 的更多信息并开始使用吧。
### systemd 命令
除了管理计算机上的进程外,`systemd` 还可以帮你调度这些进程。与传统的 `cron` 作业一样,`systemd` 计时器可以在指定的时间点触发事件,例如运行 shell 脚本或命令。触发时机可以是每月特定的某一天运行一次(例如只在星期一触发),也可以是 09:00 到 17:00 工作时间内的每 15 分钟一次。
此外 `systemd` 里的计时器还可以做一些 `cron` 作业不能做的事情。
例如,计时器可以在一个事件 _之后_ 触发脚本或程序来运行特定时长,这个事件可以是开机,可以是前置任务的完成,甚至可以是计时器本身调用的服务单元的完成!
如果你的系统运行着 `systemd` 服务,那么你的机器就已经在技术层面上使用 `systemd` 计时器了。默认计时器会执行一些琐碎的任务,例如滚动日志文件、更新 mlocate 数据库、管理 DNF 数据库等。创建自己的计时器很容易,具体可以参阅 David Both 的文章 [使用 systemd 计时器来代替 cron][4]。
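作为直观感受,下面是一个最小的用户级计时器单元示意(单元名 `backup` 和触发策略均为假设,实际执行的命令写在同名的 `backup.service` 里):

```
# ~/.config/systemd/user/backup.timer
[Unit]
Description=Run backup periodically

[Timer]
# 每小时触发一次;也可以写日历表达式,例如 Mon..Fri 09..17:00/15:00
OnCalendar=hourly
# 关机期间错过的触发点会在下次启动时补跑
Persistent=true

[Install]
WantedBy=timers.target
```

通常用 `systemctl --user enable --now backup.timer` 启用,用 `systemctl --user list-timers` 查看下一次触发时间。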
### anacron 命令
`cron` 专门用于在特定时间运行命令,这适用于从不休眠或断电的服务器。然而对笔记本电脑和台式工作站而言,时常有意或无意地关机是很常见的。当计算机处于关机状态时,`cron` 不会运行,因此设定在这段时间内的一些重要工作(例如备份数据)也就会跳过执行。
`anacron` 系统旨在确保作业定期运行,而不是按计划时间点运行。这就意味着你可以将计算机关机几天,再次启动时仍然靠 `anacron` 来运行基本任务。`anacron` 与 `cron` 协同工作,因此严格来说前者不是后者的替代品,而是一种调度任务的有效可选方案。许多系统管理员配置了一个 `cron` 作业来在深夜备份远程工作者计算机上的数据,结果却发现该作业在过去六个月中只运行过一次。`anacron` 确保重要的工作在 _可执行的时候_ 发生,而不是必须在安排好的 _特定时间点_ 发生。
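`anacron` 的任务写在 `/etc/anacrontab` 里,每行四个字段:周期(天)、开机后的延迟(分钟)、任务标识和要执行的命令。下面是一个示意(任务名与脚本路径均为假设):

```
# 周期(天)  延迟(分钟)  任务标识         命令
1            10           daily-backup     /usr/local/bin/backup.sh
7            15           weekly-cleanup   /usr/local/bin/cleanup.sh
```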
点击参阅关于 [使用 anacron 获得更好的 crontab 效果][5] 的更多内容。
### 自动化
计算机和技术旨在让人们的生活更美好工作更轻松。Linux 为用户提供了许多有用的功能以确保完成重要的操作系统任务。查看这些可用的功能然后试着将这些功能用于你自己的工作任务吧。LCTT译注作者本段有些语焉不详读者可参阅譬如 [Ansible 自动化工具安装、配置和快速入门指南](https://linux.cn/article-13142-1.html) 等关于 Linux 自动化的文章)
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/alternatives-cron-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/article/21/7/cron-linux
[3]: https://opensource.com/article/21/7/intro-command
[4]: https://opensource.com/article/20/7/systemd-timers
[5]: https://opensource.com/article/21/2/linux-automation

[#]: subject: "Automatically Synchronize Subtitle With Video Using SubSync"
[#]: via: "https://itsfoss.com/subsync/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 SubSync 自动将字幕与视频同步化
======
让我分享一个场景。你正试图观看一部电影或视频,你需要字幕。你下载了字幕,却发现字幕没有正确同步。没有其他好的字幕可用。现在该怎么做?
你可以[在 VLC 中按 G 或 H 键来同步字幕][1],给字幕增加一段延迟。如果字幕在整个视频中的偏移量固定不变,这种方法还行得通;但如果不是这样,SubSync 就能帮上大忙。
### SubSync: 字幕语音同步器
[SubSync][2] 是一个灵巧的开源工具,可用于 Linux、macOS 和 Windows。
它通过监听音轨来同步字幕,这正是它的神奇之处。即使音轨和字幕使用不同的语言,它也能发挥作用。如果有必要,字幕还可以顺带翻译,但我没有测试这个功能。
我做了一个简单的测试,使用一个与我正在播放的视频不同步的字幕。令我惊讶的是,它工作得很顺利,我得到了完美的同步字幕。
使用 SubSync 很简单。你启动应用,它要求你添加字幕文件和视频文件。
![User interface for SubSync][3]
你必须在界面上指定字幕和视频的语言。它可能会根据使用的语言下载额外的资源。
![SubSync may download additional packages for language support][4]
请记住,同步字幕需要一些时间,这取决于视频和字幕的长度。在等待过程完成时,你可以拿起你的茶/咖啡或啤酒。
你可以看到正在进行的同步状态,甚至可以在完成之前保存它。
![SubSync synchronization in progress][5]
同步完成后,你就可以点击保存按钮,把修改的内容保存到原文件中,或者把它保存为新的字幕文件。
![Synchronization completed][6]
我不能说它在所有情况下都能工作,但在我运行的样本测试中它是有效的。
### 安装 SubSync
SubSync 是一个跨平台的应用,你可以从它的[下载页面][7]获得 Windows 和 MacOS 的安装文件。
对于 Linux 用户SubSync 是作为一个 Snap 包提供的。如果你的发行版已经启用了 Snap 支持,使用下面的命令来安装 SubSync
```
sudo snap install subsync
```
请记住,下载 SubSync snap 包将需要一些时间。所以要有一个良好的网络连接或足够的耐心。
### 最后
就我个人而言,我对字幕很上瘾。即使我在 Netflix 上看英文电影,我也会把字幕打开。它有助于清楚地理解每段对话,特别是在有强烈口音的情况下。如果没有字幕,我永远无法理解[电影 Snatch 中 Mickey O'Neil由 Brad Pitt 扮演)的一句话][8]。
使用 SubSync 比[使用 Subtitle Editor][9] 同步字幕要容易得多。在[企鹅字幕播放器][10]之后,对于像我这样在整个互联网上搜索不同国家的稀有或推荐(神秘)电影的人来说,这是另一个很棒的工具。
如果你是一个“字幕用户”,我感觉你会喜欢这个工具。如果你使用过它,请在评论区分享你的使用经验。
--------------------------------------------------------------------------------
via: https://itsfoss.com/subsync/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
[2]: https://subsync.online/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-interface.png?resize=593%2C280&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize.png?resize=522%2C189&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-1.png?resize=424%2C278&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-2.png?resize=424%2C207&ssl=1
[7]: https://subsync.online/en/download.html
[8]: https://www.youtube.com/watch?v=tGDO-9hfaiI
[9]: https://itsfoss.com/subtitld/
[10]: https://itsfoss.com/penguin-subtitle-player/