Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-01-18 21:27:05 +08:00
commit 3c2087f838
9 changed files with 1397 additions and 39 deletions


@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11795-1.html)
[#]: subject: (Run a server with Git)
[#]: via: (https://opensource.com/article/19/4/server-administration-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/seth)
@@ -10,37 +10,37 @@
使用 Git 来管理 Git 服务器
======
> 借助 Gitolite你可以使用 Git 来管理 Git 服务器。在我们的系列中了解这些鲜为人知的 Git 用途。
> 借助 Gitolite,你可以使用 Git 来管理 Git 服务器。在我们的系列文章中了解这些鲜为人知的 Git 用途。
![computer servers processing data][1]
![](https://img.linux.net.cn/data/attachment/album/202001/18/132045yrr1pb9n497tfbiy.png)
正如我在系列文章中演示的那样,[Git][2] 除了跟踪源代码外还可以做很多事情。信不信由你Git 甚至可以管理你的 Git 服务器,因此你可以或多或少地使用 Git 本身运行 Git 服务器。
正如我在系列文章中演示的那样,[Git][2] 除了跟踪源代码外还可以做很多事情。信不信由你,Git 甚至可以管理你的 Git 服务器,因此你可以或多或少地使用 Git 本身运行 Git 服务器。
当然,这涉及除日常使用 Git 之外的许多组件,其中最重要的是 [Gitolite][3],该后端应用程序可以管理你使用 Git 的每个细的配置。Gitolite 的优点在于,由于它使用 Git 作为其前端接口,因此很容易将 Git 服务器管理集成到其他基于 Git 的工作流中。Gitolite 可以精确控制谁可以访问你服务器上的特定存储库以及他们具有哪些权限。你可以使用常规的 Linux 系统工具自行管理此类事务,但是如果在六个用户中只有一个或两个以上的仓库,则需要大量的工作。
当然,这涉及除日常使用 Git 之外的许多组件,其中最重要的是 [Gitolite][3],该后端应用程序可以管理你使用 Git 的每个细节配置。Gitolite 的优点在于,由于它使用 Git 作为其前端接口,因此很容易将 Git 服务器管理集成到其他基于 Git 的工作流中。Gitolite 可以精确控制谁可以访问你服务器上的特定存储库以及他们具有哪些权限。你可以使用常规的 Linux 系统工具自行管理此类事务,但是如果有好几个用户和不止一两个仓库,则需要大量的工作。
Gitolite 的开发人员做了艰苦的工作,使你可以轻松地为许多用户提供对你的 Git 服务器的访问权,而又不让他们访问你的整个环境 —— 而这一切,你可以使用 Git 来完成全部工作。
Gitolite 并`不是` 图形化的管理员和用户面板。优秀的 [Gitea][4] 项目可提供这种经验,但是本文重点介绍 Gitolite 的简单优雅和令人舒适的熟悉感。
Gitolite 并**不是**图形化的管理员和用户面板。优秀的 [Gitea][4] 项目可提供这种体验,但是本文重点介绍 Gitolite 的简单优雅和令人舒适的熟悉感。
### 安装 Gitolite
假设你的 Git 服务器运行 Linux则可以使用包管理器安装 Gitolite在 CentOS 和 RHEL 上为 `yum`,在 Debian 和 Ubuntu 上为 `apt`,在 OpenSUSE 上为 `zypper` 等)。例如,在 RHEL 上:
假设你的 Git 服务器运行 Linux,则可以使用包管理器安装 Gitolite在 CentOS 和 RHEL 上为 `yum`,在 Debian 和 Ubuntu 上为 `apt`,在 OpenSUSE 上为 `zypper` 等)。例如,在 RHEL 上:
```
$ sudo yum install gitolite3
```
许多发行版的存储库仍提供的是旧版本的 Gitolite但当前版本为版本 3。
许多发行版的存储库提供的仍是旧版本的 Gitolite但最新版本为版本 3。
你必须具有对服务器的无密码 SSH 访问权限。如果愿意,你可以使用密码登录服务器,但是 Gitolite 依赖于 SSH 密钥,因此必须配置使用密钥登录的选项。如果你不知道如何配置服务器以进行无密码 SSH 访问请首先学习如何进行操作Steve Ovens 的 Ansible 文章的[设置 SSH 密钥身份验证][5]部分对此进行了很好的说明)。这是加强服务器管理的安全以及运行 Gitolite 的重要组成部分。
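如果你还不确定如何生成这样的密钥,下面是一个最小的示意(为演示起见写入临时目录,`you@example.com` 等名称仅为示例):

```shell
# 示意:生成 Gitolite 所依赖的 ed25519 SSH 密钥对。
# 这里写入临时目录以便演示;实际使用时通常放在 ~/.ssh/ 下。
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -f "$keydir/id_ed25519" -N "" -C "you@example.com"
# 之后交给服务器(以及 Gitolite)的是“公钥”这一半:
cat "$keydir/id_ed25519.pub"
```

随后可以用 `ssh-copy-id` 或手动将公钥追加到服务器的 `~/.ssh/authorized_keys` 中,来启用免密登录。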
### 配置 Git 用户
如果没有 Gitolite则如果某人请求访问你在服务器上托管的 Git 存储库则必须向该人提供用户帐户。Git 提供了一个特殊的外壳,即 `git-shell`,这是一个仅执行 Git 任务的特别特定 shell。这可以让你有个只能通过非常受限的 Shell 环境的过滤器来访问服务器的用户。
如果没有 Gitolite,那么当某人请求访问你在服务器上托管的 Git 存储库时,你必须向此人提供用户帐户。Git 提供了一个特殊的 shell,即 `git-shell`,这是一个仅能执行 Git 任务的受限 shell。这可以让用户只能通过一个非常受限的 shell 环境来访问你的服务器。
该解决方案可行,但通常意味着用户可以访问服务器上的所有存储库,除非你具有用于组权限的良好模式,并在创建新存储库时严格保持这些权限。这种方式还需要在系统级别进行大量手动配置,这通常是为特定级别的系统管理员保留的区域,而不一定是通常负责 Git 存储库的人员。
这个解决方案是一个办法,但通常意味着用户可以访问服务器上的所有存储库,除非你具有用于组权限的良好模式,并在创建新存储库时严格遵循这些权限。这种方式还需要在系统级别进行大量手动配置,这通常是只有特定级别的系统管理员才能做的工作,而不一定是通常负责 Git 存储库的人员。
Gitolite 通过为需要访问任何存储库的每个人指定一个用户名来完全回避此问题。 默认情况下,用户名是 `git`,并且由于 Gitolite 的文档假定使用的是它,因此在学习该工具时保留它是一个很好的默认设置。对于曾经使用过 GitLab 或 GitHub 或任何其他 Git 托管服务的人来说,这也是一个众所周知的约定。
Gitolite 通过为需要访问任何存储库的每个人指定一个用户名来完全回避此问题。默认情况下,用户名是 `git`,并且由于 Gitolite 的文档假定使用的是它,因此在学习该工具时保留它是一个很好的默认设置。对于曾经使用过 GitLab 或 GitHub 或任何其他 Git 托管服务的人来说,这也是一个众所周知的约定。
Gitolite 将此用户称为**托管用户**。在服务器上创建一个帐户以充当托管用户(我习惯使用 `git`,因为这是惯例):
@@ -48,7 +48,7 @@ Gitolite 将此用户称为**托管用户**。在服务器上创建一个帐户
$ sudo adduser --create-home git
```
为了控制该 `git` 用户帐户,该帐户必须具有属于你的有效 SSH 公钥。你应该已经进行了设置,因此复制你的公钥(**不是你的私钥**)添加到 `git` 用户的家目录中:
为了控制该 `git` 用户帐户,该帐户必须具有属于你的有效 SSH 公钥。你应该已经设置好了,因此将你的公钥(**不是你的私钥**)复制到 `git` 用户的家目录中:
```
$ sudo cp ~/.ssh/id_ed25519.pub /home/git/
@@ -62,11 +62,11 @@ $ sudo su - git
$ gitolite setup --pubkey id_ed25519.pub
```
安装脚本运行后,`git` 的家用户目录将有一个 `repository` 目录,该目录(目前)包含文件 `git-admin.git``testing.git`。这就是该服务器所需的全部设置,现在请登出 `git` 用户。
安装脚本运行后,`git` 用户的家目录中将有一个 `repositories` 目录,该目录(目前)包含存储库 `gitolite-admin.git` 和 `testing.git`。这就是该服务器所需的全部设置,现在请登出 `git` 用户。
### 使用 Gitolite
管理 Gitolite 就是编辑 Git 存储库中的文本文件,尤其是 `gitolite-admin.git`。你不会通过 SSH 进入服务器来进行 Git 管理,并且 Gitolite 也建议你不要这样尝试。你和你的用户存储在 Gitolite 服务器上的存储库是个**裸**存储库,因此最好不要使用它们。
管理 Gitolite 就是编辑 Git 存储库中的文本文件,尤其是 `gitolite-admin.git` 中的。你不会通过 SSH 进入服务器来进行 Git 管理,并且 Gitolite 也建议你不要这样尝试。你和你的用户存储在 Gitolite 服务器上的存储库都是**裸**存储库,因此最好不要直接使用它们。
```
$ git clone git@example.com:gitolite-admin.git gitolite-admin.git
@@ -76,7 +76,7 @@ conf
keydir
```
该存储库中的 `conf` 目录包含一个名为 `gitolite.conf` 的文件。在文本编辑器中打开它,或使用`cat`查看其内容:
该存储库中的 `conf` 目录包含一个名为 `gitolite.conf` 的文件。在文本编辑器中打开它,或使用 `cat` 查看其内容:
```
repo gitolite-admin
@@ -86,15 +86,15 @@ repo testing
RW+ = @all
```
你可能对该配置文件的功能有所了解:`gitolite-admin` 代表此存储库,并且 `id_ed25519` 密钥的所有者具有读取、写入和 Git 管理权限。换句话说,不是将用户映射到普通的本地 Unix 用户(因为所有用户都使用 `git` 用户托管用户身份),而是将用户映射到 `keydir` 目录中列出的 SSH 密钥。
你可能对该配置文件的功能有所了解:`gitolite-admin` 代表此存储库,并且 `id_ed25519` 密钥的所有者具有读取、写入和管理 Git 的权限。换句话说,不是将用户映射到普通的本地 Unix 用户(因为所有用户都使用 `git` 用户托管用户身份),而是将用户映射到 `keydir` 目录中列出的 SSH 密钥。
`testing.git` 存储库使用特殊组符号为访问服务器的每个人提供了全部权限。
#### 添加用户
如果要向 Git 服务器添加一个名为 `alice` 的用户Alice 必须向你发送她的 SSH 公钥。Gitolite 使用 `.pub` 扩展名左边的任何内容作为该 Git 用户的标识符。不要使用默认的密钥名称值而是给密钥指定一个指示密钥所有者的名称。如果用户有多个密钥例如一个用于笔记本电脑一个用于台式机则可以使用子目录来避免文件名冲突。例如Alice 在笔记本电脑上使用的密钥可能是默认的 `id_rsa.pub`,因此将其重命名为`alice.pub` 或类似名称(或让用户根据其计算机上的本地用户帐户来命名密钥),然后将其放入 `gitolite-admin.git/keydir/work/laptop/` 目录中。如果她从她的桌面发送了另一个密钥,命名为 `alice.pub`(与上一个相同),然后将其添加到 `keydir/home/desktop/` 中。另一个密钥可能放到 `keydir/home/desktop/`依此类推。Gitolite 递归地在 `keydir` 中搜索与存储库“用户”匹配的 `.pub` 文件,并将所有匹配项视为相同的身份。
如果要向 Git 服务器添加一个名为 `alice` 的用户,Alice 必须向你发送她的 SSH 公钥。Gitolite 使用文件名的 `.pub` 扩展名左边的任何内容作为该 Git 用户的标识符。不要使用默认的密钥名称值,而是给密钥指定一个指示密钥所有者的名称。如果用户有多个密钥(例如,一个用于笔记本电脑,一个用于台式机),则可以使用子目录来避免文件名冲突。例如,Alice 在笔记本电脑上使用的密钥可能是默认的 `id_rsa.pub`,因此将其重命名为 `alice.pub` 或类似名称(或让用户根据其计算机上的本地用户帐户来命名密钥),然后将其放入 `gitolite-admin.git/keydir/work/laptop/` 目录中。如果她从她的桌面计算机发送了另一个密钥,也命名为 `alice.pub`(与上一个相同),则将其添加到 `keydir/home/desktop/` 中,依此类推。Gitolite 递归地在 `keydir` 中搜索与存储库“用户”匹配的 `.pub` 文件,并将所有匹配项视为相同的身份。
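按照上面的例子,`keydir` 中的目录布局大致如下(目录名仅为示意):

```
gitolite-admin.git/keydir/
├── work/
│   └── laptop/
│       └── alice.pub
└── home/
    └── desktop/
        └── alice.pub
```

两个 `alice.pub` 文件会被 Gitolite 视为同一个用户 `alice` 的两把密钥。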
当你将密钥添加到 `keydir` 目录时,必须将它们提交回服务器。这是一件很容易忘记的事情,这里有一个使用自动化的 Git 应用程序(例如 [Sparkleshare] [7])的真正的理由,因此任何更改都将立即提交给你的 Gitolite 管理员。第一次忘记提交和推送,在浪费了三个小时的时间以及用户的故障排除时间之后,你会发现 Gitolite 是使用 Sparkleshare 的完美理由。
当你将密钥添加到 `keydir` 目录时,必须将它们提交回服务器。这是一件很容易忘记的事情,也是使用自动化 Git 应用程序(例如 [Sparkleshare][7])的一个真正的理由:这样任何更改都会立即提交并推送到你的 Gitolite 管理仓库。在第一次忘记提交和推送、浪费了你和用户三个小时的故障排除时间之后,你会发现 Gitolite 是使用 Sparkleshare 的完美理由。
```
$ git add keydir
@@ -106,10 +106,10 @@ $ git push origin HEAD
#### 设置权限
与用户一样,目录权限和组也是从你可能习惯的的常规 Unix 工具中抽象出来的(或可从在线信息查找)。在`gitolite-admin.git/conf` 目录中的 `gitolite.conf` 文件中授予对项目的权限。权限分为四个级别:
与用户一样,目录权限和组也是从你可能习惯的常规 Unix 工具中抽象出来的(或是你能在网上查到的信息)。在 `gitolite-admin.git/conf` 目录中的 `gitolite.conf` 文件中授予对项目的权限。权限分为四个级别:
* `R` 允许只读。在存储库上具有 `R` 权限的用户可以克隆它,仅此而已。
* `RW` 允许用户执行分支的快进推送、创建新分支和创建新标签。对于大多数用户来说,这个或多或少感觉就像一个“普通”的 Git 存储库。
* `RW` 允许用户执行分支的快进推送、创建新分支和创建新标签。对于大多数用户来说,这个基本上就像是一个“普通”的 Git 存储库。
* `RW+` 允许可能具有破坏性的 Git 动作。用户可以执行常规的快进推送、回滚推送、变基以及删除分支和标签。你可能想要或不希望将其授予项目中的所有贡献者。
* `-` 明确拒绝访问存储库。这与未在存储库的配置中列出的用户相同。
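下面是一个同时用到这四个级别的 `gitolite.conf` 示意片段(仓库名和用户名均为虚构;注意实际效果还取决于 Gitolite 的规则顺序等细节):

```
repo example-project
    RW+     =   alice
    RW      =   bob
    R       =   carol
    -       =   dave
```

按此配置,alice 可以执行包括变基、删除分支在内的任何操作,bob 可以正常推送,carol 只能克隆和拉取,而 dave 被明确拒绝访问。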
@@ -126,7 +126,7 @@ repo widgets
RW+ = alice
```
现在Alice也仅 Alice 一个人)可以克隆该存储库:
现在Alice也仅 Alice 一个人)可以克隆该存储库:
```
[alice]$ git clone git@example.com:widgets.git
@@ -188,7 +188,7 @@ repo foo/CREATOR/[a-z]..*
R = READERS
```
第一行定义了一组用户:该组称为 `@managers`,其中包含用户 `alice``bob`。下一行设置了通配符允许创建尚不存在的存储库,放在名为 `foo` 的目录下的创建存储库的用户名的子目录中。例如:
第一行定义了一组用户:该组称为 `@managers`,其中包含用户 `alice` 和 `bob`。下一行设置了通配符,允许创建尚不存在的存储库,这些存储库位于名为 `foo` 的目录下、以创建者用户名命名的子目录中。例如:
```
[alice]$ git clone git@example.com:foo/alice/cool-app.git
@@ -197,11 +197,11 @@ Initialized empty Git repository in /home/git/repositories/foo/alice/cool-app.gi
warning: You appear to have cloned an empty repository.
```
野生仓库的创建者可以使用一些机制来定义谁可以读取和写入其存储库,但是他们是被限定范围的。在大多数情况下Gitolite 假定由一组特定的用户来管理项目权限。一种解决方案是使用 Git 挂钩授予所有用户对 `gitolite-admin` 的访问权限,以要求管理者批准将更改合并到 master 分支中。
野生仓库的创建者可以使用一些机制来定义谁可以读取和写入其存储库,但是他们是有范围限定的。在大多数情况下Gitolite 假定由一组特定的用户来管理项目权限。一种解决方案是使用 Git 挂钩授予所有用户对 `gitolite-admin` 的访问权限,以要求管理者批准将更改合并到 master 分支中。
### 了解更多
Gitolite 具有比此介绍性文章涵盖的更多功能,因此请尝试一下。其[文档][8]非常出色,一旦你通读了它,就可以自定义 Gitolite 服务器以向用户提供你喜欢的任何级别的控制。Gitolite 是一种维护成本低、简单的系统,你可以安装、设置它,然后基本上就可以将其忘却。
Gitolite 具有比此介绍性文章涵盖的更多功能,因此请尝试一下。其[文档][8]非常出色,一旦你通读了它,就可以自定义 Gitolite 服务器以向用户提供你喜欢的任何级别的控制。Gitolite 是一种维护成本低、简单的系统,你可以安装、设置它,然后基本上就可以将其忘却。
--------------------------------------------------------------------------------
@@ -210,7 +210,7 @@ via: https://opensource.com/article/19/4/server-administration-git
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -1,38 +1,39 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11796-1.html)
[#]: subject: (Use Stow for configuration management of multiple machines)
[#]: via: (https://opensource.com/article/20/1/configuration-management-stow)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用 Stow 管理多台机器配置
======
2020 年,在我们的 20 个使用开源提升生产力的系列文章中,让我们了解如何使用 Stow 跨机器管理配置。
![A person programming][1]
> 2020 年,在我们的 20 个使用开源提升生产力的系列文章中,让我们了解如何使用 Stow 跨机器管理配置。
![](https://img.linux.net.cn/data/attachment/album/202001/18/141330jdcjalqzjal84a03.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 Stow 管理符号链接
昨天,我解释了如何使用 [Syncthing][2] 在多台计算机上保持文件同步。但是,这只是我用来保持配置一致性的工具之一。还有另一个简单的工具 [Stow][3]。
昨天,我解释了如何使用 [Syncthing][2] 在多台计算机上保持文件同步。但是,这只是我用来保持配置一致性的工具之一。还有另一个表面上看起来更简单的工具:[Stow][3]。
![Stow help screen][4]
Stow 管理符号链接。默认情况下,它会链接目录到上一级目录。还有设置源和目标目录的选项,但我通常不使用它们。
正如我在 Syncthing 的[文章][5] 中提到的,我使用 Syncthing 来保持 **myconfigs** 目录在我所有的计算机上一致。**myconfigs** 目录下面有多个子目录。每个子目录包含我经常使用的应用之一的配置文件。
正如我在 Syncthing 的[文章][5] 中提到的,我使用 Syncthing 来保持 `myconfigs` 目录在我所有的计算机上一致。`myconfigs` 目录下面有多个子目录。每个子目录包含我经常使用的应用之一的配置文件。
![myconfigs directory][6]
在每台计算机上,我进入 **myconfigs** 目录,并运行 **stow -S <目录名称>** 以将目录中的文件符号链接到我的家目录。例如,在**vim** 目录下,我有 **.vimrc** 和 **.vim** 目录。在每台机器上,我运行 **stow -S vim** 来创建符号链接 **~/.vimrc** 和 **~/.vim**。当我在一台计算机上更改 Vim 配置时,它会应用到我的所有机器上。
在每台计算机上,我进入 `myconfigs` 目录,并运行 `stow -S <目录名称>` 以将目录中的文件符号链接到我的家目录。例如,在 `vim` 目录下,我有 `.vimrc` 和 `.vim` 目录。在每台机器上,我运行 `stow -S vim` 来创建符号链接 `~/.vimrc` 和 `~/.vim`。当我在一台计算机上更改 Vim 配置时,它会应用到我的所有机器上。
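如果你想直观地理解 `stow -S vim` 做了什么,可以把它近似地看作在上一级目录中创建符号链接。下面用 `ln` 在临时目录中模拟这一效果(目录名仿照上文的 `myconfigs`,仅为示意;真实的 Stow 还会处理整目录链接与冲突检测):

```shell
# 在临时目录中模拟,避免改动真实的家目录。
demo=$(mktemp -d)
mkdir -p "$demo/myconfigs/vim"
touch "$demo/myconfigs/vim/.vimrc"
# 在 myconfigs 中运行 “stow -S vim”,大致相当于在其上一级目录中:
ln -s "$demo/myconfigs/vim/.vimrc" "$demo/.vimrc"
readlink "$demo/.vimrc"
```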
然而,有时候,我需要一些特定于机器的配置,这就是为什么我有如 **msmtp-personal****msmtp-elastic**(我的雇主)这样的目录。由于我的 **msmtp** SMTP 客户端需要知道要中继的电子邮件服务器,并且每个服务器都有不同的设置和凭据,我会使用 **-D** 标志来取消链接,接着链接另外一个。
然而,有时候,我需要一些特定于机器的配置,这就是为什么我有如 `msmtp-personal` 和 `msmtp-elastic`(我的雇主)这样的目录。由于我的 `msmtp` SMTP 客户端需要知道要中继的电子邮件服务器,并且每个服务器都有不同的设置和凭据,我会使用 `-D` 标志来取消链接,接着链接另外一个。
![Unstow one, stow the other][7]
有时我要给配置添加文件。为此,有一个 **-R** 选项来”重新链接“。例如,我喜欢在图形化 Vim 中使用一种与控制台不同的特定字体。除了标准 **.vimrc** 文件,**.gvimrc** 文件能让我设置特定于图形化版本的选项。当我第一次设置它时,我移动 **~/.gvimrc** 到 **~/myconfigs/vim** 中,然后运行 **stow -R vim**,它取消链接并重新链接该目录中的所有内容。
有时我要给配置添加文件。为此,有一个 `-R` 选项来“重新链接”。例如,我喜欢在图形化 Vim 中使用一种与控制台不同的特定字体。除了标准 `.vimrc` 文件,`.gvimrc` 文件能让我设置特定于图形化版本的选项。当我第一次设置它时,我移动 `~/.gvimrc``~/myconfigs/vim` 中,然后运行 `stow -R vim`,它取消链接并重新链接该目录中的所有内容。
Stow 让我使用一个简单的命令行在多种配置之间切换,并且,结合 Syncthing我可以确保无论我身在何处或在哪里进行更改我都有我喜欢的工具的设置。
@@ -43,7 +44,7 @@ via: https://opensource.com/article/20/1/configuration-management-stow
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -53,6 +54,6 @@ via: https://opensource.com/article/20/1/configuration-management-stow
[2]: https://syncthing.net/
[3]: https://www.gnu.org/software/stow/
[4]: https://opensource.com/sites/default/files/uploads/productivity_2-1.png (Stow help screen)
[5]: https://opensource.com/article/20/1/20-productivity-tools-syncthing
[5]: https://linux.cn/article-11793-1.html
[6]: https://opensource.com/sites/default/files/uploads/productivity_2-2.png (myconfigs directory)
[7]: https://opensource.com/sites/default/files/uploads/productivity_2-3.png (Unstow one, stow the other)


@@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora CoreOS out of preview)
[#]: via: (https://fedoramagazine.org/fedora-coreos-out-of-preview/)
[#]: author: (bgilbert https://fedoramagazine.org/author/bgilbert/)
Fedora CoreOS out of preview
======
![The Fedora CoreOS logo on a gray background.][1]
The Fedora CoreOS team is pleased to announce that Fedora CoreOS is now [available for general use][2].
Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It's the successor to both [Fedora Atomic Host][3] and [CoreOS Container Linux][4] and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.  For more on the Fedora CoreOS philosophy, goals, and design, see the [announcement of the preview release][5].
Some highlights of the current Fedora CoreOS release:
* [Automatic updates][6], with staged deployments and phased rollouts
* Built from Fedora 31, featuring:
* Linux 5.4
* systemd 243
* Ignition 2.1
* OCI and Docker Container support via Podman 1.7 and Moby 18.09
* cgroups v1 enabled by default for broader compatibility; cgroups v2 available via configuration
Fedora CoreOS is available on a variety of platforms:
* Bare metal, QEMU, OpenStack, and VMware
* Images available in all public AWS regions
* Downloadable cloud images for Alibaba, AWS, Azure, and GCP
* Can run live from RAM via ISO and PXE (netboot) images
Fedora CoreOS is under active development.  Planned future enhancements include:
* Addition of the _next_ release stream for extended testing of upcoming Fedora releases.
* Support for additional cloud and virtualization platforms, and processor architectures other than _x86_64_.
* Closer integration with Kubernetes distributions, including [OKD][7].
* [Aggregate statistics collection][8].
* Additional [documentation][9].
### Where do I get it?
To try out the new release, head over to the [download page][10] to get OS images or cloud image IDs.  Then use the [quick start guide][11] to get a machine running quickly.
### How do I get involved?
It's easy!  You can report bugs and missing features to the [issue tracker][12]. You can also discuss Fedora CoreOS in [Fedora Discourse][13], the [development mailing list][14], in _#fedora-coreos_ on Freenode, or at our [weekly IRC meetings][15].
### Are there stability guarantees?
In general, the Fedora Project does not make any guarantees around stability.  While Fedora CoreOS strives for a high level of stability, this can be challenging to achieve in the rapidly evolving Linux and container ecosystems.  We've found that the incremental, exploratory, forward-looking development required for Fedora CoreOS — which is also a cornerstone of the Fedora Project as a whole — is difficult to reconcile with the iron-clad stability guarantee that ideally exists when automatically updating systems.
We'll continue to do our best not to break existing systems over time, and to give users the tools to manage the impact of any regressions.  Nevertheless, automatic updates may produce regressions or breaking changes for some use cases. You should make your own decisions about where and how to run Fedora CoreOS based on your risk tolerance, operational needs, and experience with the OS.  We will continue to announce any major planned or unplanned breakage to the [coreos-status mailing list][16], along with recommended mitigations.
### How do I migrate from CoreOS Container Linux?
Container Linux machines cannot be migrated in place to Fedora CoreOS.  We recommend [writing a new Fedora CoreOS Config][11] to provision Fedora CoreOS machines.  Fedora CoreOS Configs are similar to Container Linux Configs, and must be passed through the Fedora CoreOS Config Transpiler to produce an Ignition config for provisioning a Fedora CoreOS machine.
Whether you're currently provisioning your Container Linux machines using a Container Linux Config, handwritten Ignition config, or cloud-config, you'll need to adjust your configs for differences between Container Linux and Fedora CoreOS.  For example, on Fedora CoreOS network configuration is performed with [NetworkManager key files][17] instead of _systemd-networkd_, and time synchronization is performed by _chrony_ rather than _systemd-timesyncd_.  Initial migration documentation will be [available soon][9] and a skeleton list of differences between the two OSes is available in [this issue][18].
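As a rough illustration (the file name, connection name, and interface below are placeholders of ours, not taken from the Fedora CoreOS docs), a minimal NetworkManager key file looks something like:

```
# /etc/NetworkManager/system-connections/eth0.nmconnection (example path)
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
method=auto
```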
CoreOS Container Linux will be maintained for a few more months, and then will be declared end-of-life.  We'll announce the exact end-of-life date later this month.
### How do I migrate from Fedora Atomic Host?
Fedora Atomic Host has already reached end-of-life, and you should migrate to Fedora CoreOS as soon as possible.  We do not recommend in-place migration of Atomic Host machines to Fedora CoreOS. Instead, we recommend [writing a Fedora CoreOS Config][11] and using it to provision new Fedora CoreOS machines.  As with CoreOS Container Linux, you'll need to adjust your existing cloud-configs for differences between Fedora Atomic Host and Fedora CoreOS.
Welcome to Fedora CoreOS.  Deploy it, launch your apps, and let us know what you think!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-coreos-out-of-preview/
作者:[bgilbert][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bgilbert/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/introducing-fedora-coreos-816x345.png
[2]: https://getfedora.org/coreos/
[3]: https://www.projectatomic.io/
[4]: https://coreos.com/os/docs/latest/
[5]: https://fedoramagazine.org/introducing-fedora-coreos/
[6]: https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/
[7]: https://www.okd.io/
[8]: https://github.com/coreos/fedora-coreos-pinger/
[9]: https://docs.fedoraproject.org/en-US/fedora-coreos/
[10]: https://getfedora.org/coreos/download/
[11]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/
[12]: https://github.com/coreos/fedora-coreos-tracker/issues
[13]: https://discussion.fedoraproject.org/c/server/coreos
[14]: https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/
[15]: https://github.com/coreos/fedora-coreos-tracker#meetings
[16]: https://lists.fedoraproject.org/archives/list/coreos-status@lists.fedoraproject.org/
[17]: https://developer.gnome.org/NetworkManager/stable/nm-settings-keyfile.html
[18]: https://github.com/coreos/fedora-coreos-tracker/issues/159


@@ -0,0 +1,509 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (C vs. Rust: Which to choose for programming hardware abstractions)
[#]: via: (https://opensource.com/article/20/1/c-vs-rust-abstractions)
[#]: author: (Dan Pittman https://opensource.com/users/dan-pittman)
C vs. Rust: Which to choose for programming hardware abstractions
======
Using type-level programming in Rust can make hardware abstractions safer.
![Tools illustration][1]
Rust is an increasingly popular programming language positioned to be the best choice for hardware interfaces. It's often compared to C for its level of abstraction. This article explains how Rust can handle bitwise operations in a number of ways and offers a solution that provides both safety and ease of use.
| Language | Origin | Official description | Overview |
|---|---|---|---|
| C | 1972 | C is a general-purpose programming language which features economy of expression, modern control flow and data structures, and a rich set of operators. (Source: [CS Fundamentals][2]) | C is [an] imperative language and designed to compile in a relatively straightforward manner which provides low-level access to the memory. (Source: [W3schools.in][3]) |
| Rust | 2010 | A language empowering everyone to build reliable and efficient software (Source: [Rust website][4]) | Rust is a multi-paradigm system programming language focused on safety, especially safe concurrency. (Source: [Wikipedia][5]) |
### Bitwise operation over register values in C
In the world of systems programming, where you may find yourself writing hardware drivers or interacting directly with memory-mapped devices, interaction is almost always done through memory-mapped registers provided by the hardware. You typically interact with these things through bitwise operations on some fixed-width numeric type.
For instance, imagine an 8-bit register with three fields:
```
+----------+------+-----------+---------+
| (unused) | Kind | Interrupt | Enabled |
+----------+------+-----------+---------+
   5-7       2-4        1          0
```
The number below the field name prescribes the bits used by that field in the register. To enable this register, you would write the value **1**, represented in binary as **0000_0001**, to set the enabled field's bit. Often, though, you also have an existing configuration in the register that you don't want to disturb. Say you want to enable interrupts on the device but also want to be sure the device remains enabled. To do that, you must combine the Interrupt field's value with the Enabled field's value. You would do that with bitwise operations:
```
1 | (1 << 1)
```
This gives you the binary value **0000_0011** by **or**-ing 1 with 2, which you get by shifting 1 left by 1. You can write this to your register, leaving it enabled but also enabling interrupts.
This is a lot to keep in your head, especially when you're dealing with potentially hundreds of registers for a complete system. In practice, you do this with mnemonics which track a field's position in a register and how wide the field is—i.e., _what's its upper bound?_
Here's an example of one of these mnemonics. They are C macros that replace their occurrences with the code on the right-hand side. This is the shorthand for the register laid out above. The left-hand side of the **&** puts you in position for that field, and the right-hand side limits you to only that field's bits:
```
#define REG_ENABLED_FIELD(x) (x << 0) & 1
#define REG_INTERRUPT_FIELD(x) (x << 1) & 2
#define REG_KIND_FIELD(x) (x << 2) & (7 << 2)
```
You'd then use these to abstract over the derivation of a register's value with something like:
```
void set_reg_val(u8 *reg, u8 val);
void enable_reg_with_interrupt(u8 *reg) {
    set_reg_val(reg, REG_ENABLED_FIELD(1) | REG_INTERRUPT_FIELD(1));
}
```
This is the state of the art. In fact, this is how the bulk of drivers appear in the Linux kernel.
Is there a better way? Consider the boon to safety and expressibility if the type system was borne out of research on modern programming languages. That is, what could you do with a richer, more expressive type system to make this process safer and more tenable?
### Bitwise operation over register values in Rust
Continuing with the register above as an example:
```
+----------+------+-----------+---------+
| (unused) | Kind | Interrupt | Enabled |
+----------+------+-----------+---------+
   5-7       2-4        1          0
```
How might you want to express such a thing in Rust types?
You'll start in a similar way, by defining constants for each field's _offset_—that is, how far it is from the least significant bit—and its mask. A _mask_ is a value whose binary representation can be used to update or read the field from inside the register:
```
const ENABLED_MASK: u8 = 1;
const ENABLED_OFFSET: u8 = 0;
const INTERRUPT_MASK: u8 = 2;
const INTERRUPT_OFFSET: u8 = 1;
const KIND_MASK: u8 = 7 << 2;
const KIND_OFFSET: u8 = 2;
```
Next, you'll declare a field type and do your operations to convert a given value into its position-relevant value for use inside the register:
```
struct Field {
    value: u8,
}
impl Field {
    fn new(mask: u8, offset: u8, val: u8) -> Self {
        Field {
            value: (val << offset) & mask,
        }
    }
}
```
Finally, you'll use a **Register** type, which wraps around a numeric type that matches the width of your register. **Register** has an **update** function that updates the register with the given field:
```
struct Register(u8);
impl Register {
    fn update(&mut self, field: Field) {
        self.0 = self.0 | field.value;
    }
}
fn enable_register(reg: &mut Register) {
    reg.update(Field::new(ENABLED_MASK, ENABLED_OFFSET, 1));
}
```
With Rust, you can use data structures to represent fields, attach them to specific registers, and provide concise and sensible ergonomics while interacting with the hardware. This example uses the most basic facilities provided by Rust; regardless, the added structure alleviates some of the density from the C example above. Now a field is a named thing, not a number derived from shadowy bitwise operators, and registers are types with state—one extra layer of abstraction over the hardware.
### A Rust implementation for ease of use
The first rewrite in Rust is nice, but it's not ideal. You have to remember to bring the mask and offset, and you're calculating them ad hoc, by hand, which is error-prone. Humans aren't great at precise and repetitive tasks—we tend to get tired or lose focus, and this leads to mistakes. Transcribing the masks and offsets by hand, one register at a time, will almost certainly end badly. This is the kind of task best left to a machine.
Second, thinking more structurally: What if there were a way to have the field's type carry the mask and offset information? What if you could catch mistakes in your implementation for how you access and interact with hardware registers at compile time instead of discovering them at runtime? Perhaps you can lean on one of the strategies commonly used to suss out issues at compile time, like types.
You can modify the earlier example by using [**typenum**][6], a library that provides numbers and arithmetic at the type level. Here, you'll parameterize the **Field** type with its mask and offset, making it available for any instance of **Field** without having to include it at the call site:
```
#[macro_use]
extern crate typenum;
use core::marker::PhantomData;
use typenum::*;
// Now we'll add Mask and Offset to Field's type
struct Field<Mask: Unsigned, Offset: Unsigned> {
    value: u8,
    _mask: PhantomData<Mask>,
    _offset: PhantomData<Offset>,
}
// We can use type aliases to give meaningful names to
// our fields (and not have to remember their offsets and masks).
type RegEnabled = Field<U1, U0>;
type RegInterrupt = Field<U2, U1>;
type RegKind = Field<op!(U7 << U2), U2>;
```
Now, when revisiting **Field**'s constructor, you can elide the mask and offset parameters because the type contains that information:
```
impl<Mask: Unsigned, Offset: Unsigned> Field<Mask, Offset> {
    fn new(val: u8) -> Self {
        Field {
            value: (val << Offset::U8) & Mask::U8,
            _mask: PhantomData,
            _offset: PhantomData,
        }
    }
}
// And to enable our register...
fn enable_register(reg: &mut Register) {
    reg.update(RegEnabled::new(1));
}
```
It looks pretty good, but… what happens when you make a mistake regarding whether a given value will _fit_ into a field? Consider a simple typo where you put **10** instead of **1**:
```
fn enable_register(reg: &mut Register) {
    reg.update(RegEnabled::new(10));
}
```
In the code above, what is the expected outcome? Well, the code will set that enabled bit to 0 because **10 & 1 = 0**. That's unfortunate; it would be nice to know whether a value you're trying to write into a field will fit into the field before attempting a write. As a matter of fact, I'd consider lopping off the high bits of an errant field value _undefined behavior_ (gasps).
### Using Rust with safety in mind
How can you check that a field's value fits in its prescribed position in a general way? More type-level numbers!
You can add a **Width** parameter to **Field** and use it to verify that a given value can fit into the field:
```
struct Field<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> {
    value: u8,
    _mask: PhantomData<Mask>,
    _offset: PhantomData<Offset>,
    _width: PhantomData<Width>,
}
type RegEnabled = Field<U1, U1, U0>;
type RegInterrupt = Field<U1, U2, U1>;
type RegKind = Field<U3, op!(U7 << U2), U2>;
impl<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> Field<Width, Mask, Offset> {
    fn new(val: u8) -> Option<Self> {
        if val <= (1 << Width::U8) - 1 {
            Some(Field {
                value: (val << Offset::U8) & Mask::U8,
                _mask: PhantomData,
                _offset: PhantomData,
                _width: PhantomData,
            })
        } else {
            None
        }
    }
}
```
Now you can construct a **Field** only if the given value fits! Otherwise, you have **None**, which signals that an error has occurred, rather than lopping off the high bits of the value and silently writing an unexpected value.
Note, though, this will raise an error at runtime. However, we knew the value we wanted to write beforehand, remember? Given that, we can teach the compiler to reject entirely a program which has an invalid field value — we don't have to wait until we run it!
This time, you'll add a _trait bound_ (the **where** clause) to a new realization of new, called **new_checked**, that asks the incoming value to be less than or equal to the maximum possible value a field with the given **Width** can hold:
```
struct Field<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> {
    value: u8,
    _mask: PhantomData<Mask>,
    _offset: PhantomData<Offset>,
    _width: PhantomData<Width>,
}
type RegEnabled = Field<U1, U1, U0>;
type RegInterrupt = Field<U1, U2, U1>;
type RegKind = Field<U3, op!(U7 << U2), U2>;
impl<Width: Unsigned, Mask: Unsigned, Offset: Unsigned> Field<Width, Mask, Offset> {
    const fn new_checked<V: Unsigned>() -> Self
    where
        V: IsLessOrEqual<op!((U1 << Width) - U1), Output = True>,
    {
        Field {
            value: (V::U8 << Offset::U8) & Mask::U8,
            _mask: PhantomData,
            _offset: PhantomData,
            _width: PhantomData,
        }
    }
}
```
Only numbers for which this property holds have an implementation of this trait, so if you use a number that does not fit, it will fail to compile. Take a look!
```
fn enable_register(reg: &mut Register) {
    reg.update(RegEnabled::new_checked::&lt;U10&gt;());
}
12 |     reg.update(RegEnabled::new_checked::<U10>());
   |                           ^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
   |
   = note: expected type `typenum::B0`
           found type `typenum::B1`
```
**new_checked** will fail to produce a program that has an errant too-high value for a field. Your typo won't blow up at runtime because you could never have gotten an artifact to run.
You're nearing Peak Rust in terms of how safe you can make memory-mapped hardware interactions. However, what you wrote back in the first example in C was far more succinct than the type parameter salad you ended up with. Is doing such a thing even tractable when you're talking about potentially hundreds or even thousands of registers?
### Just right with Rust: both safe and accessible
Earlier, I called out calculating masks by hand as being problematic, but I just did that same problematic thing—albeit at the type level. While using such an approach is nice, getting to the point when you can write any code requires quite a bit of boilerplate and manual transcription (I'm talking about the type synonyms here).
Our team wanted something like the [TockOS mmio registers][7], but one that would generate typesafe implementations with the least amount of manual transcription possible. The result we came up with is a macro that generates the necessary boilerplate to get a Tock-like API plus type-based bounds checking. To use it, write down some information about a register, its fields, their width and offsets, and optional [enum][8]-like values (should you want to give "meaning" to the possible values a field can have):
```
register! {
    // The register's name
    Status,
    // The type which represents the whole register.
    u8,
    // The register's mode, ReadOnly, ReadWrite, or WriteOnly.
    RW,
    // And the fields in this register.
    Fields [
        On    WIDTH(U1) OFFSET(U0),
        Dead  WIDTH(U1) OFFSET(U1),
        Color WIDTH(U3) OFFSET(U2) [
            Red    = U1,
            Blue   = U2,
            Green  = U3,
            Yellow = U4
        ]
    ]
}
```
From this, you can generate register and field types like the previous example where the indices—the **Width**, **Mask**, and **Offset**—are derived from the values input in the **WIDTH** and **OFFSET** sections of a field's definition. Also, notice that all of these numbers are **typenums**; they're going to go directly into your **Field** definitions!
The generated code provides namespaces for registers and their associated fields through the name given for the register and the fields. That's a mouthful; here's what it looks like:
```
mod Status {
    struct Register(u8);
    mod On {
        struct Field; // There is of course more to this definition
    }
    mod Dead {
        struct Field;
    }
    mod Color {
        struct Field;
        pub const Red: Field = Field::&lt;U1&gt;::new();
        // &amp;c.
    }
}
```
The generated API contains the nominally expected read and write primitives to get at the raw register value, but it also has ways to get a single field's value, do collective actions, and find out if any (or all) of a collection of bits is set. You can read the documentation on the [complete generated API][9].
### Kicking the tires
What does it look like to use these definitions for a real device? Will the code be littered with type parameters, obscuring any real logic from view?
No! By using type synonyms and type inference, you effectively never have to think about the type-level part of the program at all. You get to interact with the hardware in a straightforward way and get those bounds-related assurances automatically.
Here's an example of a [UART][10] register block. I'll skip the declaration of the registers themselves, as that would be too much to include here. Instead, it starts with a register "block" then helps the compiler know how to look up the registers from a pointer to the head of the block. We do that by implementing **Deref** and **DerefMut**:
```
#[repr(C)]
pub struct UartBlock {
    rx: UartRX::Register,
    _padding1: [u32; 15],
    tx: UartTX::Register,
    _padding2: [u32; 15],
    control1: UartControl1::Register,
}
pub struct Regs {
    addr: usize,
}
impl Deref for Regs {
    type Target = UartBlock;
    fn deref(&amp;self) -&gt; &amp;UartBlock {
        unsafe { &amp;*(self.addr as *const UartBlock) }
    }
}
impl DerefMut for Regs {
    fn deref_mut(&amp;mut self) -&gt; &amp;mut UartBlock {
        unsafe { &amp;mut *(self.addr as *mut UartBlock) }
    }
}
```
Once this is in place, using these registers is as simple as **read()** and **modify()**:
```
fn main() {
    // A pretend register block.
    let mut x = [0_u32; 33];
    let mut regs = Regs {
        // Some shenanigans to get at `x` as though it were a
        // pointer. Normally you'd be given some address like
        // `0xDEADBEEF` over which you'd instantiate a `Regs`.
        addr: &amp;mut x as *mut [u32; 33] as usize,
    };
    assert_eq!(regs.rx.read(), 0);
    regs.control1
        .modify(UartControl1::Enable::Set + UartControl1::RecvReadyInterrupt::Set);
    // The first bit and the 10th bit should be set.
    assert_eq!(regs.control1.read(), 0b_10_0000_0001);
}
```
When we're working with runtime values, we use **Option**, as we saw earlier. Here I'm using **unwrap**, but in a real program with unknown inputs, you'd probably want to check that you got a **Some** back from that **new** call:[1][11],[2][12]
```
fn main() {
    // A pretend register block.
    let mut x = [0_u32; 33];
    let mut regs = Regs {
        // Some shenanigans to get at `x` as though it were a
        // pointer. Normally you'd be given some address like
        // `0xDEADBEEF` over which you'd instantiate a `Regs`.
        addr: &amp;mut x as *mut [u32; 33] as usize,
    };
    let input = regs.rx.get_field(UartRX::Data::Field::Read).unwrap();
    regs.tx.modify(UartTX::Data::Field::new(input).unwrap());
}
```
### Decoding failure conditions
Depending on your personal pain threshold, you may have noticed that the errors are nearly unintelligible. Take a look at a not-so-subtle reminder of what I'm talking about:
```
error[E0271]: type mismatch resolving `&lt;typenum::UInt&lt;typenum::UInt&lt;typenum::UInt&lt;typenum::UInt&lt;typenum::UInt&lt;typenum::UTerm, typenum::B1&gt;, typenum::B0&gt;, typenum::B1&gt;, typenum::B0&gt;, typenum::B0&gt; as typenum::IsLessOrEqual&lt;typenum::UInt&lt;typenum::UInt&lt;typenum::UInt&lt;typenum::UInt&lt;typenum::UTerm, typenum::B1&gt;, typenum::B0&gt;, typenum::B1&gt;, typenum::B0&gt;&gt;&gt;::Output == typenum::B1`
  --&gt; src/main.rs:12:5
   |
12 |     less_than_ten::&lt;U20&gt;();
   |     ^^^^^^^^^^^^^^^^^^^^ expected struct `typenum::B0`, found struct `typenum::B1`
   |
   = note: expected type `typenum::B0`
       found type `typenum::B1`
```
The **expected typenum::B0 found typenum::B1** part kind of makes sense, but what on earth is the **typenum::UInt&lt;typenum::UInt, typenum::UInt…** nonsense? Well, **typenum** represents numbers as binary [cons][13] cells! Errors like this make it hard, especially when you have several of these type-level numbers confined to tight quarters, to know which number it's talking about. Unless, of course, it's second nature for you to translate baroque binary representations to decimal ones.
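The translation is mechanical, though: **typenum** lists the **B** digits most-significant-bit first, so collecting them in the order they appear in the type name gives you the binary representation directly. As a quick, throwaway illustration of the idea (not a general parser), a few lines of POSIX shell can decode a single type name; the sample string below is the type-level 20 from the error above:

```shell
# Decode a typenum binary cons-cell type name into a decimal number.
# The B digits appear most-significant-bit first, so folding them
# left to right builds up the value.
s='typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UInt<typenum::UTerm, typenum::B1>, typenum::B0>, typenum::B1>, typenum::B0>, typenum::B0>'

n=0
for b in $(printf '%s' "$s" | grep -o 'B[01]' | tr -d 'B'); do
    n=$((n * 2 + b))
done
echo "U$n"   # prints U20
```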
After the **U100**th time attempting to decipher any meaning from this mess, a teammate got Mad As Hell And Wasn't Going To Take It Anymore and made a little utility, **tnfilt**, to parse the meaning out from the misery that is namespaced binary cons cells. **tnfilt** takes the cons cell-style notation and replaces it with sensible decimal numbers. We imagine that others will face similar difficulties, so we shared [**tnfilt**][14]. You can use it like this:
```
$ cargo build 2>&1 | tnfilt
```
It transforms the output above into something like this:
```
error[E0271]: type mismatch resolving `<U20 as typenum::IsLessOrEqual<U10>>::Output == typenum::B1`
```
Now _that_ makes sense!
### In conclusion
Memory-mapped registers are used ubiquitously when interacting with hardware from software, and there are myriad ways to portray those interactions, each of which has a different place on the spectra of ease-of-use and safety. We found that the use of type-level programming to get compile-time checking on memory-mapped register interactions gave us the necessary information to make safer software. That code is available in the **[bounded-registers][15] crate** (Rust package).
Our team started out right at the edge of the more-safe side of that safety spectrum and then tried to figure out how to move the ease-of-use slider closer to the easy end. From those ambitions, **bounded-registers** was born, and we use it anytime we encounter memory-mapped devices in our adventures at Auxon.
* * *
1. Technically, a read from a register field, by definition, will only give a value within the prescribed bounds, but none of us lives in a pure world, and you never know what's going to happen when external systems come into play. You're at the behest of the Hardware Gods here, so instead of forcing you into a "might panic" situation, it gives you the **Option** to handle a "This Should Never Happen" case.
2. **get_field** looks a little weird. I'm looking at the **Field::Read** part, specifically. **Field** is a type, and you need an instance of that type to pass to **get_field**. A cleaner API might be something like:
```
regs.rx.get_field::<UartRx::Data::Field>();
```
But remember that **Field** is a type synonym that has fixed indices for width, offset, etc. To be able to parameterize **get_field** like this, you'd need higher-kinded types.
* * *
_This originally appeared on the [Auxon Engineering blog][16] and is edited and republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/c-vs-rust-abstractions
作者:[Dan Pittman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dan-pittman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
[2]: https://cs-fundamentals.com/c-programming/history-of-c-programming-language.php
[3]: https://www.w3schools.in/c-tutorial/history-of-c/
[4]: https://www.rust-lang.org/
[5]: https://en.wikipedia.org/wiki/Rust_(programming_language)
[6]: https://docs.rs/crate/typenum
[7]: https://docs.rs/tock-registers/0.3.0/tock_registers/
[8]: https://en.wikipedia.org/wiki/Enumerated_type
[9]: https://github.com/auxoncorp/bounded-registers#the-register-api
[10]: https://en.wikipedia.org/wiki/Universal_asynchronous_receiver-transmitter
[11]: tmp.shpxgDsodx#1
[12]: tmp.shpxgDsodx#2
[13]: https://en.wikipedia.org/wiki/Cons
[14]: https://github.com/auxoncorp/tnfilt
[15]: https://crates.io/crates/bounded-registers
[16]: https://blog.auxon.io/2019/10/25/type-level-registers/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with this open source to-do list manager)
[#]: via: (https://opensource.com/article/20/1/open-source-to-do-list)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
Get started with this open source to-do list manager
======
Todo is a powerful way to keep track of your task list. Learn how to use it in the seventh article in our series on 20 ways to be more productive with open source in 2020.
![Team checklist][1]
Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.
### Track your tasks with todo
Tasks and to-do lists are very near and dear to my heart. I'm a big fan of productivity (so much so that I do a [podcast][2] about it) and try all sorts of different applications. I've even [given presentations][3] and [written articles][4] about them. So it only makes sense that, when I talk about being productive, task and to-do list tools are certain to come up.
![Getting fancy with Todo.txt][5]
In all honesty, for being simple, cross-platform, and easily synchronized, you cannot go wrong with [todo.txt][6]. It is one of the two to-do list and task management apps that I keep coming back to over and over again (the other is [Org mode][7]). And what keeps me coming back is that it is simple, portable, understandable, and has many great add-ons that don't break it if one machine has them and the others don't. And since it is a Bash shell script, I have never found a system that cannot support it.
#### Set up todo.txt
First things first, you need to install the base shell script and copy the default configuration file to the **~/.todo** directory:
```
git clone https://github.com/todotxt/todo.txt-cli.git
cd todo.txt-cli
make
sudo make install
mkdir ~/.todo
cp todo.cfg ~/.todo/config
```
Next, set up the configuration file. I like to uncomment the color settings at this point, but the only thing that must be set up right away is the **TODO_DIR** variable:
```
export TODO_DIR="$HOME/.todo"
```
#### Add to-do's
To add your first to-do item, simply type **todo.sh add &lt;NewTodo&gt;**, and it will be added. This will also create three files in **$HOME/.todo/**: todo.txt, done.txt, and reports.txt.
After adding a few items, run **todo.sh ls** to see your to-do list.
![Basic todo.txt list][8]
#### Manage your tasks
You can improve it a little by prioritizing the items. To add a priority to an item, run **todo.sh pri # A**. The number is the number of the task on the list, and the letter "A" is the priority. You can set the priority as anything from A to Z since that's how it will get sorted.
To complete a task, run **todo.sh do #** to mark the item done and move the item to done.txt. Running **todo.sh report** will write a count of done and not done items to reports.txt.
The file format used for all three files is well documented, so you can make changes with your text editor of choice. The basic format of todo.txt is:
```
(Priority) YYYY-MM-DD Task
```
The date records when the task was added (in the todo.txt format, this is the task's creation date), if one is set. When editing the file manually, just put an "x" in front of the task to mark it as done. Running **todo.sh archive** will move these items to done.txt, and you can work in that text file and archive the done items when you have time.
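Because everything is plain text, that manual edit is just as easy to script. Here's a small sketch (the file path and task text are made up for illustration) that does the same "x"-prefix edit with **sed**:

```shell
# Create a sample todo.txt (made-up tasks, illustrative dates).
printf '(A) 2020-01-18 Write article\n(B) 2020-01-18 Water plants\n' > /tmp/todo.txt

# Mark the "Water plants" task done by prefixing the line with "x ",
# exactly the edit you would make by hand in a text editor.
sed -i 's/^.*Water plants.*$/x &/' /tmp/todo.txt

cat /tmp/todo.txt
# (A) 2020-01-18 Write article
# x (B) 2020-01-18 Water plants
```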
#### Set up recurring tasks
I have a lot of recurring tasks that I need to schedule every day/week/month.
![Recurring tasks with the ice_recur add-on][9]
This is where todo.txt's flexibility comes in. By using [add-ons][10] in **~/.todo.actions.d/**, you can add commands and extend the functionality of the base todo.sh. The add-ons are basically scripts that implement specific commands. For recurring tasks, the plugin [ice_recur][11] should fit the bill. By following the instructions on the page, you can set up tasks to recur in a very flexible manner.
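To get a feel for how small an add-on can be, here's a sketch of a hypothetical **count** action. The name, paths, and task lines are invented for this demo; the interface — todo.sh invoking the executable with the action name as **$1** and exporting variables such as **TODO_FILE** — follows the conventions described in the add-on directory linked above:

```shell
# Write the hypothetical add-on to a temp location so this demo is
# self-contained; in real use it would live, executable, at
# ~/.todo.actions.d/count and be run as `todo.sh count`.
mkdir -p /tmp/todo-demo
cat > /tmp/todo-demo/count <<'EOF'
#!/bin/sh
# todo.sh passes the action name as $1 and exports TODO_FILE.
if [ "$1" = "usage" ]; then
    echo "  count: print the number of open (not done) tasks"
else
    grep -c -v '^x ' "$TODO_FILE"
fi
EOF
chmod +x /tmp/todo-demo/count

# Exercise it against a sample task file: two open tasks, one done.
printf '(A) task one\nx 2020-01-18 finished task\n(B) task two\n' > /tmp/todo-demo/todo.txt
TODO_FILE=/tmp/todo-demo/todo.txt /tmp/todo-demo/count count
# prints: 2
```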
![Todour on MacOS][12]
There are a lot of add-ons in the directory, including syncing to some cloud services. There are also links to desktop and mobile apps, so you can keep your to-do list with you on the go.
I've only scratched the surface of todo's functionality, so take some time to dig in and see how powerful this tool is! It really helps me keep on task every day.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-to-do-list
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://productivityalchemy.com/
[3]: https://www.slideshare.net/AllThingsOpen/getting-to-done-on-the-command-line
[4]: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line
[5]: https://opensource.com/sites/default/files/uploads/productivity_7-1.png
[6]: http://todotxt.org/
[7]: https://orgmode.org/
[8]: https://opensource.com/sites/default/files/uploads/productivity_7-2.png (Basic todo.txt list)
[9]: https://opensource.com/sites/default/files/uploads/productivity_7-3.png (Recurring tasks with the ice_recur add-on)
[10]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory
[11]: https://github.com/rlpowell/todo-text-stuff
[12]: https://opensource.com/sites/default/files/uploads/productivity_7-4.png (Todour on MacOS)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Locking and unlocking accounts on Linux systems)
[#]: via: (https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Locking and unlocking accounts on Linux systems
======
There are times when locking a Linux user account is necessary and times when you need to reverse that action. Here are commands for managing account access and what's behind them.
If you are administering a [Linux][1] system, there will likely be times when you need to lock an account. Maybe someone is changing positions and their continued need for the account is in question; maybe there's reason to believe that access to the account has been compromised. In any event, knowing how to lock an account, and how to unlock it should it be needed again, is something you need to be able to do.
One important thing to keep in mind is that there are multiple ways to lock an account, and they don't all have the same effect. If the account user is accessing an account using public/private keys instead of a password, some commands you might use to block access to an account will not be effective.
### Locking an account using the passwd command
One of the simplest ways to lock an account is with the **passwd -l** command. For example:
```
$ sudo passwd -l tadpole
```
The effect of this command is to insert an exclamation point as the first character in the encrypted password field in the /etc/shadow file. This is enough to keep the password from working. What previously looked like this (note the first character):
```
$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
```
will look like this:
```
!$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPCnXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7:::
```
On his next login attempt (should there be one), tadpole would probably try his password numerous times and not gain access. You, on the other hand, would be able to check the status of his account with a command like this (-S = status):
```
$ sudo passwd -S tadpole
tadpole L 10/15/2019 0 99999 7 -1
```
The "L" in the second field tells you that the account is locked. Before the account was locked, it would have been a "P". An "NP" would mean that no password was set.
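If you'd rather script this check than read **passwd -S** output, the lock marker is easy to detect directly: the second colon-separated field of a shadow entry is the encrypted password, and a leading exclamation point means it can never match. Here's a sketch of that check, run against a sample shadow-style line (with a fabricated hash) rather than the real file:

```shell
# A sample /etc/shadow line (fabricated hash) after `passwd -l`:
line='tadpole:!$6$IC6icrWlNhndMFj6$Jj14RegvXYZ:18184:0:99999:7:::'

# Field 2 is the encrypted password; a leading "!" means locked.
pw=$(printf '%s' "$line" | cut -d: -f2)
case $pw in
    '!'*) echo "tadpole is locked" ;;
    *)    echo "tadpole is not locked" ;;
esac
# prints: tadpole is locked
```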
The **usermod -L** command would have the same effect (inserting the exclamation character to disable use of the password).
One of the benefits of locking an account in this way is that it's very easy to unlock the account when and if needed. Just reverse the change by removing the added exclamation point with a text editor or, better yet, by using the **passwd -u** command.
```
$ sudo passwd -u tadpole
passwd: password expiry information changed.
```
The problem with this approach is that, if the user is accessing his or her account with public/private keys, this change will not block their use.
### Locking accounts with the chage command
Another way to lock a user account is to use the **chage** command, which helps manage account expiration dates.
```
$ sudo chage -E0 tadpole
$ sudo passwd -S tadpole
tadpole P 10/15/2019 0 99999 7 -1
```
The **chage** command is going to make a subtle change to the /etc/shadow file. The eighth field in that colon-separated file (shown below) will be set to zero (previously empty), which means the account is essentially expired. The **chage** command tracks the number of days between password changes, but also provides account expiration information when this option is used. A zero in the eighth field means that the account expired on January 1, 1970, which simply locks it when a command like the one shown above is used.
```
$ sudo grep tadpole /etc/shadow | fold
tadpole:$6$IC6icrWlNhndMFj6$Jj14Regv3b2EdK.8iLjSeO893fFig75f32rpWpbKPNz7g/eqeaPC
nXl3iQ7RFIN0BGC0E91sghFdX2eWTe2ET0:18184:0:99999:7::0:
^
|
+--- days until expiration
```
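The same field is just as easy to inspect from a script. Here's a sketch run against a sample shadow-style line (fabricated hash), comparing the eighth field to today's date expressed in days since January 1, 1970:

```shell
# A sample /etc/shadow line (fabricated hash) after `chage -E0`:
line='tadpole:$6$IC6icrWlNhndMFj6$Jj14RegvXYZ:18184:0:99999:7::0:'

# Field 8 holds the account expiration date, in days since Jan 1, 1970.
expire=$(printf '%s' "$line" | cut -d: -f8)
today=$(( $(date +%s) / 86400 ))

if [ -n "$expire" ] && [ "$expire" -le "$today" ]; then
    echo "account expired"
else
    echo "account active"
fi
# prints: account expired
```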
To reverse this change, you can simply remove the 0 that was placed in the /etc/shadow entry for the user with a command like this:
```
$ sudo chage -E-1 tadpole
```
Once an account is expired in this way, even passwordless [SSH][4] will not provide access.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3513982/locking-and-unlocking-accounts-on-linux-systems.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3215226/what-is-linux-uses-featres-products-operating-systems.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.networkworld.com/article/3441777/how-the-linux-screen-tool-can-save-your-tasks-and-your-sanity-if-ssh-is-interrupted.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use this Python script to find bugs in your Overcloud)
[#]: via: (https://opensource.com/article/20/1/logtool-root-cause-identification)
[#]: author: (Arkady Shtempler https://opensource.com/users/ashtempl)
Use this Python script to find bugs in your Overcloud
======
LogTool is a set of Python scripts that helps you investigate root
causes for problems in Overcloud nodes.
![Searching for code][1]
OpenStack stores and manages a bunch of log files on its Overcloud nodes and Undercloud host. Therefore, it's not easy to use OSP log files to investigate a problem you're having, especially when you don't even know what could have caused the problem.
If that's your situation, [LogTool][2] makes your life much easier! It saves you the time and work it would otherwise take to investigate the root cause manually. Based on a fuzzy string matching algorithm, LogTool provides all the unique error and warning messages that have occurred in the past. You can export these messages for a particular time period, such as 10 minutes ago, an hour ago, a day ago, and so on, based on the timestamps in the logs.
LogTool is a set of Python scripts, and its main module, **PyTool.py**, is executed on the Undercloud host. Some operation modes use additional scripts that are executed directly on Overcloud nodes, such as exporting errors and warnings from Overcloud logs.
LogTool supports Python 2 and 3, and you can change the working directory according to your needs: [LogTool_Python2][3] or [LogTool_Python3][4].
### Operation modes
#### 1\. Export errors and warnings from Overcloud logs
This mode is used to extract all unique **ERROR** and **WARNING** messages from Overcloud nodes that took place in the past. As the user, you're prompted to provide the "since time" and debug level to be used for extraction of errors or warnings. For example, if something went wrong in the last 10 minutes, you'll be able to extract error and warning messages for just that time period.
This operation mode generates a directory containing a result file for each Overcloud node. A result file is a simple text file that is compressed (**.gz**) to reduce the time needed to download it from the Overcloud node. To convert a compressed file to a regular text file, you can use [zcat][5] or a similar tool. Also, some versions of Vi and any recent version of Emacs support reading compressed data. The result file is divided into sections and contains a table of contents at the bottom.
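The decompression step is a one-liner. For example, assuming a result file named controller-0.txt.gz (the file name and contents here are illustrative):

```shell
# Make a sample compressed result file (contents are illustrative).
printf 'ERROR: something went wrong\n' > /tmp/controller-0.txt
gzip -f /tmp/controller-0.txt

# Read it without uncompressing on disk...
zcat /tmp/controller-0.txt.gz

# ...or convert it back to a regular text file.
gunzip /tmp/controller-0.txt.gz
```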
There are two kinds of log files LogTool detects on the fly: _Standard_ and _Not Standard_. In _Standard_, each log line has a known and defined structure: timestamp, debug level, msg, and so on. In _Not Standard_, the log's structure is unknown; it could be a third party's logs, for example. In the table of contents, you find a "Section name --&gt; Line number" per section, for example:
* **Raw Data - extracted Errors/Warnings from standard OSP logs since:** This section contains all extracted Error/Warning messages as-is without any modifications or changes. These messages are the raw data LogTool uses for fuzzy matching analysis.
* **Statistics - Number of Errors/Warnings per standard OSP log since:** In this section, you find the amount of Errors and Warnings per Standard log file. This may help you understand potential components used to search for the root cause of your issue.
* **Statistics - Unique messages, per STANDARD OSP log file since:** This section addresses unique Error and Warning messages since a timestamp you provide. For more details about each unique Error or Warning, search for the same message in the Raw Data section.
* **Statistics - Unique messages per NON STANDARD log file, since any time:** This section contains the unique messages in nonstandard log files. Unfortunately, LogTool cannot handle these log files in the same manner as Standard Log files; therefore, the "since time" you provide on extraction will be ignored, and you'll see all of the unique Errors/Warnings messages ever created. So first, scroll down to the table of contents at the bottom of the result file and review its sections—use the line indexes in the table of contents to jump to the relevant sections, where numbers 3, 4, and 5 are most important.
#### 2\. Download all logs from Overcloud nodes
Logs from all Overcloud nodes are compressed and downloaded to a local directory on your Undercloud host.
#### 3\. Grep for a string in all Overcloud logs
This mode "greps" (searches) a string provided by the user on all Overcloud logs. For example, you might want to see all logged messages for a specific request ID, such as the request ID for a "Create VM" that has failed.
#### 4\. Check current CPU, RAM, and disk on the Overcloud
This mode displays the current CPU, RAM, and disk info on each Overcloud node.
#### 5\. Execute user's script
This enables users to run their own scripts on Overcloud nodes. For instance, say an Overcloud deployment failed, so you need to execute the same procedure on each Controller node to fix it. You can implement a workaround script and run it on the Controllers using this mode.
#### 6\. Download relevant logs only, by given timestamp
This mode downloads only the Overcloud logs with _"Last Modified" &gt; "given by user timestamp."_ For example, if you got an error 10 minutes ago, old log files won't be relevant, so downloading them is unnecessary. In addition, you can't (or shouldn't) attach large files in some bug reporting tools, so this mode might help with making bug reports.
#### 7\. Export errors and warnings from Undercloud logs
This is the same as mode #1 above, but for Undercloud logs.
#### 8\. Check Unhealthy dockers on the Overcloud
This mode is used to search for unhealthy Dockers on nodes.
#### 9\. Download OSP logs and run LogTool locally
This mode allows you to download OSP logs from Jenkins or Log Storage (for example, **cougar11.scl.lab.tlv.redhat.com**) and to analyze the downloaded logs locally.
#### 10\. Analyze deployment log on the Undercloud
This mode may help you understand what went wrong during Overcloud or Undercloud deployment. Deployment logs are generated when the **\--log** option is used, for example, inside the **overcloud_deploy.sh** script; the problem is that such logs are not "friendly," and it's hard to understand what went wrong, especially when verbosity is set to **vv** or more, as this makes the log unreadable with a bunch of data inside it. This mode provides some details about all failed tasks.
#### 11\. Analyze Gerrit(Zuul) failed gate logs
This mode is used to analyze Gerrit(Zuul) log files. It automatically downloads all files from a remote Gerrit gate (HTTP download) and analyzes all files locally.
### Installation
LogTool is available on GitHub. Clone it to your Undercloud host with:
```
git clone https://github.com/zahlabut/LogTool.git
```
Some external Python modules are also used by the tool:
#### Paramiko
This SSH module is usually installed on Undercloud by default. Use the following command to verify whether it's installed:
```
ls -a /usr/lib/python2.7/site-packages | grep paramiko
```
If you need to install the module, on your Undercloud, execute the following commands:
```
sudo easy_install pip
sudo pip install paramiko==2.1.1
```
#### BeautifulSoup
This HTML parser module is used only in modes where log files are downloaded using HTTP. It's used to parse the Artifacts HTML page to get all of the links in it. To install BeautifulSoup, enter this command:
```
pip install beautifulsoup4
```
You can also use the [requirements.txt][6] file to install all the required modules by executing:
```
pip install -r requirements.txt
```
### Configuration
All required parameters are set directly inside the **PyTool.py** script. The defaults are:
```
overcloud_logs_dir = '/var/log/containers'
overcloud_ssh_user = 'heat-admin'
overcloud_ssh_key = '/home/stack/.ssh/id_rsa'
undercloud_logs_dir = '/var/log/containers'
source_rc_file_path = '/home/stack/'
```
### Usage
This tool is interactive, so to start it, just enter:
```
cd LogTool
python PyTool.py
```
### Troubleshooting LogTool
Two log files are created at runtime: Error.log and Runtime.log. Please add the contents of both to the description of the issue you'd like to open.
### Limitations
LogTool is hardcoded to handle files up to 500 MB.
### LogTool_Python3 script
Get it at [github.com/zahlabut/LogTool][2]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/logtool-root-cause-identification
作者:[Arkady Shtempler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ashtempl
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://github.com/zahlabut/LogTool
[3]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python2
[4]: https://github.com/zahlabut/LogTool/tree/master/LogTool_Python3
[5]: https://opensource.com/article/19/2/getting-started-cat-command
[6]: https://github.com/zahlabut/LogTool/blob/master/LogTool_Python3/requirements.txt

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 Methods to Install the Latest PHP 7 Package on CentOS/RHEL 7 and CentOS/RHEL 6)
[#]: via: (https://www.2daygeek.com/install-php-7-on-centos-6-centos-7-rhel-7-redhat-7/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
3 Methods to Install the Latest PHP 7 Package on CentOS/RHEL 7 and CentOS/RHEL 6
======
PHP is the most popular open-source general-purpose scripting language and is widely used for web development.
It's part of the LAMP application stack and is used to create dynamic websites.
Popular CMS applications such as WordPress, Joomla, and Drupal are written in PHP.
These applications require PHP 7 for their installation and configuration.
PHP 7 loads your web applications faster and consumes fewer server resources.
By default, the CentOS/RHEL 6 operating system provides PHP 5.3 in its official repository, and CentOS/RHEL 7 provides PHP 5.4.
In this article we will show you how to install the latest version of PHP on CentOS/RHEL 7 and CentOS/RHEL 6 systems.
This can be done by adding the necessary **[additional third-party RPM repository][1]** to the system.
### Method-1 : How to Install PHP 7 on CentOS 6/7 Using the Software Collections Repository (SCL)
The SCL repository is now maintained by a CentOS SIG, which rebuilds the Red Hat Software Collections and also provides some additional packages of its own.
It contains newer versions of various programs that can be installed alongside the existing older packages and invoked using the `scl` command.
Run the following **[yum command][2]** to install the Software Collections Repository (SCL) on CentOS:
```
# yum install centos-release-scl
```
Run the following command to verify the PHP 7 versions available in the SCL repository:
```
# yum --disablerepo="*" --enablerepo="centos-sclo-rh" list *php
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
centos-sclo-rh: centos.mirrors.estointernet.in
Available Packages
php54-php.x86_64 5.4.40-4.el7 centos-sclo-rh
php55-php.x86_64 5.5.21-5.el7 centos-sclo-rh
rh-php70-php.x86_64 7.0.27-2.el7 centos-sclo-rh
rh-php71-php.x86_64 7.1.30-2.el7 centos-sclo-rh
rh-php72-php.x86_64 7.2.24-1.el7 centos-sclo-rh
```
Run the command below to install PHP 7.2 on your system from SCL:
```
# yum --disablerepo="*" --enablerepo="centos-sclo-rh" install rh-php72-php
```
If you need to install additional modules for PHP 7.2, you can do so using the command format below. For instance, you can install the **“gd”** and **“pdo”** packages by executing the following command:
```
# yum --disablerepo="*" --enablerepo="centos-sclo-rh" install rh-php72-php-gd rh-php72-php-pdo
```
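Note that SCL packages install under `/opt/rh` alongside the system PHP rather than replacing it, so the new interpreter is not on your `PATH` by default. A minimal sketch of invoking it, assuming the `rh-php72` collection installed above:

```shell
# Run a single command with the rh-php72 collection enabled.
if command -v scl >/dev/null 2>&1; then
    scl enable rh-php72 -- php -v
else
    echo "scl is not installed on this host" >&2
fi
```

To make PHP 7.2 available for a whole shell session instead, you can run `scl enable rh-php72 -- bash`.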
### Method-1a : How to Install PHP 7 on RHEL 7 Using the Software Collections Repository (SCL)
For Red Hat 7, enable the following repositories to install the latest PHP 7 package:
```
# sudo subscription-manager repos --enable rhel-7-server-extras-rpms
# sudo subscription-manager repos --enable rhel-7-server-optional-rpms
# sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms
```
Run the command below to search for the available PHP 7 versions in the RHSCL repository:
```
# yum search rh-php*
```
You can easily install PHP 7.3 on a RHEL 7 machine by running the command below from the RHSCL repository:
```
# yum install rh-php73
```
### Method-2 : How to Install PHP 7 on CentOS 6/7 Using the Remi Repository
The **[Remi repository][3]** stores and maintains the latest version of PHP packages with a large collection of libraries, extensions and tools. Some of them are back-ported from Fedora and EPEL.
This is a CentOS community-recognized repository that doesn't modify or affect any underlying packages.
As a prerequisite, this installs the **[EPEL repository][4]** if it is not already installed on your system.
You can easily find the available versions of the PHP 7 package in the Remi repository because it adds a separate repo file for each version. You can view them using the **[ls command][5]**:
```
# ls -lh /etc/yum.repos.d/remi-php*
-rw-r--r--. 1 root root 456 Sep 6 01:31 /etc/yum.repos.d/remi-php54.repo
-rw-r--r--. 1 root root 1.3K Sep 6 01:31 /etc/yum.repos.d/remi-php70.repo
-rw-r--r--. 1 root root 1.3K Sep 6 01:31 /etc/yum.repos.d/remi-php71.repo
-rw-r--r--. 1 root root 1.3K Sep 6 01:31 /etc/yum.repos.d/remi-php72.repo
-rw-r--r--. 1 root root 1.3K Sep 6 01:31 /etc/yum.repos.d/remi-php73.repo
-rw-r--r--. 1 root root 1.3K Sep 6 01:31 /etc/yum.repos.d/remi-php74.repo
```
You can easily install PHP 7.4 on CentOS 6/7 systems by running the command below from the Remi repository:
```
# yum --disablerepo="*" --enablerepo="remi-php74" install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo
```
### Method-2a : How to Install PHP 7 on RHEL 7 Using the Remi Repository
For Red Hat 7, install the following repositories to get the latest PHP 7 package.
To install the EPEL repository on RHEL 7:
```
# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```
To install the Remi repository on RHEL 7:
```
# yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
```
To enable the optional RPMs repository:
```
# subscription-manager repos --enable=rhel-7-server-optional-rpms
```
You can easily install PHP 7.4 on RHEL 7 systems by running the command below from the Remi repository:
```
# yum --disablerepo="*" --enablerepo="remi-php74" install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo
```
To verify the PHP 7 installation, run the following command:
```
# php -v
PHP 7.4.1 (cli) (built: Dec 17 2019 16:35:58) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
```
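Beyond the version string, you may also want to confirm that the extension packages installed earlier (such as `gd`, `curl`, and `ldap`) are actually loaded. A small sketch of such a check, assuming `php` is on your `PATH`:

```shell
# List loaded PHP modules and filter for extensions installed earlier.
if command -v php >/dev/null 2>&1; then
    php -m | grep -iE '^(gd|curl|ldap|zip)$' || echo "some expected modules are not loaded" >&2
else
    echo "php is not on PATH on this host" >&2
fi
```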
### Method-3 : How to Install PHP 7 on CentOS 6/7 Using the IUS Community Repository
IUS Community is a CentOS community-approved third-party RPM repository that contains the latest upstream versions of PHP, Python, MySQL, and other packages for Enterprise Linux (RHEL & CentOS) 5, 6, and 7.
The **[IUS Community Repository][6]** depends on the EPEL repository, so we have to install the EPEL repository before installing the IUS repository. Follow the steps below to install and enable the EPEL and IUS Community repositories on RPM-based systems and install the packages.
The EPEL package is included in the CentOS Extras repository and enabled by default, so we can install it by running the command below:
```
# yum install epel-release
```
Download the IUS Community Repository setup shell script:
```
# curl 'https://setup.ius.io/' -o setup-ius.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1914 100 1914 0 0 6563 0 --:--:-- --:--:-- --:--:-- 133k
```
Install/Enable IUS Community Repository.
```
# sh setup-ius.sh
```
Run the following command to check the PHP 7 versions available in the IUS repository:
```
# yum --disablerepo="*" --enablerepo="ius" list *php7*
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Available Packages
mod_php71u.x86_64 7.1.33-1.el7.ius ius
mod_php72u.x86_64 7.2.26-1.el7.ius ius
mod_php73.x86_64 7.3.13-1.el7.ius ius
php71u-bcmath.x86_64 7.1.33-1.el7.ius ius
php71u-cli.x86_64 7.1.33-1.el7.ius ius
php71u-common.x86_64 7.1.33-1.el7.ius ius
php71u-dba.x86_64 7.1.33-1.el7.ius ius
php71u-dbg.x86_64 7.1.33-1.el7.ius ius
php71u-devel.x86_64 7.1.33-1.el7.ius ius
php71u-embedded.x86_64 7.1.33-1.el7.ius ius
```
You can easily install PHP 7.3 on CentOS 6/7 systems by running the command below from the IUS Community repository:
```
# yum --disablerepo="*" --enablerepo="ius" install php73-common php73-cli php73-gd php73-mysqlnd php73-ldap php73-soap php73-mbstring
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/install-php-7-on-centos-6-centos-7-rhel-7-redhat-7/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/8-additional-thirdparty-yum-repositories-centos-rhel-fedora-linux/
[2]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[3]: https://www.2daygeek.com/install-enable-remi-repository-on-centos-rhel-fedora-linux/
[4]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-oracle-linux/
[5]: https://www.2daygeek.com/linux-unix-ls-command-display-directory-contents/
[6]: https://www.2daygeek.com/install-enable-ius-community-repository-on-rhel-centos/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Zorin Grid Lets You Remotely Manage Multiple Zorin OS Computers)
[#]: via: (https://itsfoss.com/zorin-grid/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Zorin Grid Lets You Remotely Manage Multiple Zorin OS Computers
======
One of the major hurdles institutes face is in managing and updating multiple Linux systems from a central point.
Well, Zorin OS has come up with a new cloud-based tool that will help you manage multiple computers running Zorin OS from a single interface. You can update systems, install apps, and configure all systems remotely using this tool, called [Zorin Grid][1].
### Zorin Grid: Manage a fleet of Zorin OS computers remotely
![][2]
**Zorin Grid is a tool that makes it simple to set up, manage, and secure a fleet of Zorin OS-powered computers in businesses, schools, and organizations.**
When it comes to managing Linux distributions (here, Zorin OS) on a multitude of systems in an organization, it is quite time-consuming.
If Linux systems become easier to manage, more organizations will be interested in switching to Linux, just as the [Italian city of Vicenza replaced Windows with Zorin OS][3].
For that very reason, the Zorin team decided to create **Zorin Grid**, with which every school, enterprise, organization, and business will be able to easily manage their Zorin OS-powered machines.
### Zorin Grid features
![Zorin Grid Features][4]
You might have guessed what it is capable of, but let me highlight the key features of Zorin Grid as listed on its official webpage:
* Install and Remove Apps
* Set software update and security patch policies
* Monitor computer status
* Enforce security policies
* Keep track of software and hardware inventory
* Set desktop settings
* Organize computers into groups (for teams and departments)
* Role-based access control and audit logging
In addition to these, the Zorin Grid service will let you do a few more things, but it looks like the essential tasks are all covered.
### How does Zorin Grid work?
![][5]
Zorin Grid is cloud-based software as a service. Zorin will charge a monthly subscription fee for each computer managed by Zorin Grid in an organization.
You'll have to install the Zorin Grid client on all the systems you want to manage. Since it is cloud-based, you can manage all the Zorin systems on your grid from a web browser by logging into your Zorin Grid account.
You configure the computers once, and Zorin Grid applies the same configuration to all or specific computers in your organization.
The price has not been finalized. [Artyom Zorin][6], **CEO of Zorin Group, told It's FOSS** that schools and non-profit organizations will get Zorin Grid at a reduced price.
While the client-side software for Zorin Grid will be open source, the Zorin Grid server won't be open source initially. Releasing it under an open source license is _tentatively_ on their roadmap.
Artyom also said that they **plan to support other Linux distributions, starting with Ubuntu and Ubuntu-based distros**, after launching Zorin Grid for Zorin OS systems this summer.
In case you decide to migrate from Windows to Zorin OS for your organization or business, you will find a [useful migration guide][7] by the Zorin OS team to help you switch to Linux.
[Zorin Grid][1]
**Wrapping Up**
Let me summarize all the important points about Zorin Grid:
  * Zorin Grid is an upcoming cloud-based service that lets you manage multiple Zorin OS systems.
  * It's a premium service that charges per computer. The pricing has not been determined yet.
  * Educational institutes and non-profit organizations can get Zorin Grid at a reduced price.
  * Initially, it can only handle Zorin OS. Other Ubuntu-based distributions are on the roadmap, but there is no definite timeline for that.
  * The service should be available in the summer of 2020.
  * The Zorin Grid server won't be open source initially.
Zorin Grid looks to be an impressive premium tool for organizations or businesses that want to use Linux while also being able to maintain their systems easily.
Personally, I wouldn't mind paying for the service if it makes deploying and using Linux easier in general.
Of course, it doesn't support every Linux distro yet, but it is indeed a promising service to keep an eye on.
What do you think about it? Do you know of a better alternative to Zorin Grid? Do share your views in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/zorin-grid/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://zorinos.com/grid/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/zorin-grid-dashboard.png?ssl=1
[3]: https://itsfoss.com/vicenza-windows-zorin/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/zorin_grid_features.jpg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/zorin-os-computers.jpg?ssl=1
[6]: https://itsfoss.com/zorin-os-interview/
[7]: https://zorinos.com/help/switch-your-organization-to-zorin-os/