Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-09-17 23:18:15 +08:00
commit b10ed7f9d5
14 changed files with 1582 additions and 332 deletions


@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12621-1.html)
[#]: subject: (6 best practices for teams using Git)
[#]: via: (https://opensource.com/article/20/7/git-best-practices)
[#]: author: (Ravi Chandran https://opensource.com/users/ravichandran)
6 个在团队中使用 Git 的最佳实践
======
> 采用这些 Git 协作策略,让团队工作更高效。
![](https://img.linux.net.cn/data/attachment/album/202009/16/234908ge77j9j799i4eaj7.jpg)
Git 非常有助于小团队管理他们的软件开发进度,但有些方法能让你变得更高效。我发现了许多有助于我的团队的最佳实践,尤其是当不同 Git 水平的新人加入时。
### 在你的团队中正式确立 Git 约定
每个人都应当遵循对于分支命名、标记和编码的规范。每个组织都有自己的规范或者最佳实践,并且很多建议都可以从网上免费获取,而重要的是尽早选择合适的规范并在团队中遵循。
同时,不同的团队成员的 Git 水平参差不齐。你需要创建并维护一组符合团队规范的基础指令,用于执行通用的 Git 操作。
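这类基础指令可以写成可以直接照着执行的脚本化示例。下面是一个最小的演示(分支命名规范 `feature/<issue-id>-<简述>` 和提交信息前缀都是假设的团队约定,仅作示意):

```shell
# 演示:按照假设的团队规范创建一个功能分支
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Demo"
git commit --allow-empty -q -m "chore: 初始提交"   # 提交信息前缀也属于团队约定
git checkout -q -b feature/123-login-page          # 分支名遵循假设的命名规范
git branch --show-current
```

把这类示例放进团队的 README 或 wiki,新人照着执行即可,不必先吃透 Git 的全部细节。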
### 正确地合并变更
每个团队成员都需要在一个单独的功能分支上开发。但即使是使用了单独的分支,每个人也会修改一些共同的文件。当把更改合并回 `master` 分支时,合并通常无法自动进行。可能需要手动解决不同的人对同一文件不同变更的冲突。这就是你必须学会如何处理 Git 合并的原因。
现代编辑器具有协助解决 [Git 合并冲突][2]的功能。它们对同一文件的每个部分提供了合并的各种选择,例如,是否保留你的更改,或者是保留另一分支的更改,亦或者是全部保留。如果你的编辑器不支持这些功能,那么可能是时候换一个代码编辑器了。
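如果你想先在一个安全的环境里观察冲突长什么样,可以用一个临时仓库自己制造一次冲突(以下仓库和分支名都是为演示而虚构的):

```shell
# 构造一次合并冲突,观察 Git 写入文件的冲突标记
set -e
d=$(mktemp -d); cd "$d"
git init -q
git symbolic-ref HEAD refs/heads/master            # 显式使用 master 作为主分支
git config user.email "dev@example.com"; git config user.name "Demo"
echo "hello" > app.txt
git add app.txt && git commit -q -m "初始版本"
git checkout -q -b feature-xyz
echo "hello from feature" > app.txt
git commit -qam "功能分支的修改"
git checkout -q master
echo "hello from master" > app.txt
git commit -qam "master 上的修改"
git merge feature-xyz || true                      # 合并失败,文件中留下冲突标记
grep -c "<<<<<<<" app.txt                          # 输出 1:有一处冲突需要手动解决
```

打开 `app.txt` 就能看到 `<<<<<<<`、`=======`、`>>>>>>>` 三组标记,编辑器提供的“保留我的/保留对方/全部保留”选项,本质上就是在帮你编辑这几段内容。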
### 经常变基你的功能分支
当你持续地开发你的功能分支时,请经常对它执行<ruby>变基<rt>rebase</rt></ruby>操作(`git rebase master`)。这意味着要经常执行以下步骤:
```
git checkout master
git pull
git checkout feature-xyz  # 假设的功能分支名称
git rebase master  # 可能需要解决 feature-xyz 上的合并冲突
```
这些步骤会在你的功能分支上[重写历史][3](这并不是件坏事)。首先,它会使你的功能分支获得 `master` 分支上当前的所有更新。其次,你在功能分支上的所有提交都会在该分支历史的顶部重写,因此它们会顺序地出现在日志中。你可能需要一路解决遇到的合并冲突,这也许是个挑战。但是,这是解决冲突最好的时机,因为它只影响你的功能分支。
在解决完所有冲突并进行回归测试后,如果你准备好将功能分支合并回 `master`,那么可以在再次执行一遍上述变基步骤后进行合并:
```
git checkout master
git pull
git merge feature-xyz
```
在此期间,如果其他人也将和你有冲突的更改推送到 `master`,那么 Git 合并将再次发生冲突。你需要解决它们并重新进行回归测试。
还有一些其他的合并哲学(例如,只使用合并而不使用变基以防止重写历史),其中一些甚至可能更简单易用。但是,我发现上述方法是一个干净可靠的策略。提交历史日志将以有意义的功能序列进行排列。
如果使用“纯合并”策略(上面所说的,不进行定期的变基),那么 `master` 分支的历史将穿插着所有同时开发的功能的提交。这样混乱的历史很难回顾。确切的提交时间通常并不是那么重要。最好是有一个易于查看的历史日志。
### 在合并前压扁提交
当你在功能分支上开发时,即使再小的修改也可以作为一个提交。但是,如果每个功能分支都要产生五十个提交,那么随着不断地增添新功能,`master` 分支的提交数终将无谓地膨胀。通常来说,每个功能分支只应该往 `master` 中增加一个或几个提交。为此,你需要将多个提交<ruby>压扁<rt>squash</rt></ruby>成一个或者几个带有更详细信息的提交。通常使用以下命令来完成:
```
git rebase -i HEAD~20  # 查看可进行压扁的二十个提交
```
当这条命令执行后,将弹出一个提交列表的编辑器,你可以通过包括<ruby>遴选<rt>pick</rt></ruby>和<ruby>压扁<rt>squash</rt></ruby>在内的数种方式编辑它。“遴选”一个提交即保留这个提交。“压扁”一个提交则是将这个提交合并到前一个之中。使用这些方法,你就可以将多个提交合并到一个提交之中,对其进行编辑和清理。这也是一个清理不重要的提交信息的机会(例如,带错字的提交)。
总之,保留所有与提交相关的操作,但在合并到 `master` 分支前,合并并编辑相关信息以明确意图。注意,不要在变基的过程中不小心删掉提交。
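压扁的效果可以在一个临时仓库里非交互地演示。这里借助 `GIT_SEQUENCE_EDITOR` 自动把待办列表中除第一行外的 `pick` 改成 `squash`,等价于你在编辑器里手动修改(仓库内容纯属演示):

```shell
# 把三个零碎提交压扁为一个(非交互演示)
set -e
d=$(mktemp -d); cd "$d"
git init -q
git config user.email "dev@example.com"; git config user.name "Demo"
for i in 1 2 3; do
  echo "step $i" >> file.txt
  git add file.txt
  git commit -q -m "wip: step $i"
done
git rev-list --count HEAD    # 压扁前:3 个提交
# 自动把待办列表第 2 行起的 pick 改为 squash;GIT_EDITOR=true 接受默认的合并信息
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/squash/'" GIT_EDITOR=true \
  git rebase -i --root
git rev-list --count HEAD    # 压扁后:1 个提交,文件内容不变
```

实际工作中你仍然应该进入编辑器手动检查待办列表,这个脚本只是说明压扁前后提交历史的变化。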
在执行完诸如变基之类的操作后,我会再次看看 `git log` 并做最终的修改:
```
git commit --amend
```
最后,由于重写了分支的 Git 提交历史,必须强制更新远程分支:
```
git push -f
```
### 使用标签
当你完成测试并准备从 `master` 分支部署软件到线上时,又或者当你出于某种原因想要保留当前状态作为一个里程碑时,那么可以创建一个 Git 标签。对于一个积累了一些变更和相应提交的分支而言,标签就是该分支在那一时刻的快照。一个标签可以看作是没有历史记录的分支,也可以看作是直接指向标签创建前那个提交的命名指针。
所谓的“配置控制”就是在不同的里程碑上保存代码的状态。大多数项目都有这样的需求:能够重现任一里程碑上的软件源码,以便在需要时重新构建。Git 标签为每个代码的里程碑提供了一个唯一标识。打标签非常简单:
```
git tag milestone-id -m "short message saying what this milestone is about"
git push --tags   # 不要忘记将标签显式推送到远程
```
考虑这样一种情况:Git 标签对应的软件版本已经分发给客户,而客户报告了一个问题。尽管代码库中的代码可能已经在继续开发,但通常情况下为了准确地重现客户问题以便做出修复,必须回退到 Git 标签对应的代码状态。有时候新代码可能已经修复了那个问题,但并非一直如此。通常你需要切换到特定的标签并从那个标签创建一个分支:
```
git checkout milestone-id        # 切换到分发给客户的标签
git checkout -b new-branch-name  # 创建新的分支用于重现 bug
```
此外,如果<ruby>带附注的标签<rt>annotated tag</rt></ruby>和<ruby>带签名的标签<rt>signed tag</rt></ruby>有助于你的项目,可以考虑使用它们。
### 让软件运行时打印标签
在大多数嵌入式项目中,从代码版本构建出的二进制文件有固定的名称,这样无法从它的名称推断出对应的 Git 标签。在构建时“嵌入标签”有助于将未来发现的问题精准地关联到特定的构建版本。在构建过程中可以自动地嵌入标签。通常,`git describe` 生成的标签字符串会在代码编译前插入到代码中,以便生成的可执行文件能够在启动时输出标签字符串。当客户报告问题时,可以指导他们给你发送启动时输出的内容。
### 总结
Git 是一个需要花时间去掌握的复杂工具。使用这些实践可以帮助团队成功地使用 Git 进行协作,无论他们的 Git 水平如何。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/git-best-practices
作者:[Ravi Chandran][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ravichandran
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/christina-wocintechchat-com-rg1y72ekw6o-unsplash_1.jpg?itok=MoIv8HlK (Women in tech boardroom)
[2]: https://opensource.com/article/20/4/git-merge-conflict
[3]: https://opensource.com/article/20/4/git-rebase-i


@ -0,0 +1,110 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12620-1.html)
[#]: subject: (Design a book cover with an open source alternative to InDesign)
[#]: via: (https://opensource.com/article/20/9/open-source-publishing-scribus)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
用 InDesign 的开源替代方案 Scribus 设计书籍封面
======
> 使用开源的出版软件 Scribus 来制作你的下一本自出版书籍的封面。
![](https://img.linux.net.cn/data/attachment/album/202009/16/213714ppvfzm6idv9nnynp.jpg)
我最近写完了一本关于 [C 语言编程][2]的书,我通过 [Lulu.com][3] 自行出版。我已经用 Lulu 做了好几个图书项目,它是一个很棒的平台。今年早些时候,Lulu 做了一些改变,让作者在创作图书封面时有了更大的控制权。以前,你只需上传一对大尺寸图片作为书的封面和封底。现在,Lulu 允许作者上传完全按照你的书的尺寸定制的 PDF。
你可以使用 [Scribus][4] 这个开源页面布局程序来创建封面。下面是我的做法。
### 下载一个模板
当你在 Lulu 上输入图书的信息时,最终会进入<ruby>设计<rt>Design</rt></ruby>栏。在该页面的<ruby>设计封面<rt>Design Your Cover</rt></ruby>部分,你会发现一个方便的<ruby>下载模板<rt>Download Template</rt></ruby>按钮,它为你的图书封面提供了一个 PDF 模板。
![Lulu Design your Cover page][5]
下载此模板,它为你提供了在 Scribus 中创建自己的书籍封面所需的信息。
![Lulu's cover template][7]
最重要的细节是:
* <ruby>文件总尺寸(含出血)<rt>Total Document Size (with bleed)</rt></ruby>
* <ruby>出血区(从裁切边缘)<rt>Bleed area (from trim edge)</rt></ruby>
* <ruby>书脊区<rt>Spine area</rt></ruby>
<ruby>出血<rt>Bleed</rt></ruby>是一个印刷术语,在准备“印刷就绪”文件时,这个术语很重要。它与普通文件中的页边距不同。打印文件时,你会为顶部、底部和侧面设置一个页边距。在大多数文档中,页边距通常为一英寸左右。
但在印刷就绪的文件中,文档的尺寸需要比成品书大一些,因为书籍的封面通常包括颜色或图片,一直到封面的边缘。为了创建这种设计,你要使颜色或图片超出你的边距,印刷厂就会把多余的部分裁切掉,使封面缩小到准确的尺寸。因此,“裁切”就是印刷厂将封面精确地裁剪成相应尺寸。而“出血区”就是印刷厂裁掉的多余部分。
如果你没有出血区,印刷厂就很难完全按照尺寸印刷封面。如果打印机只偏离了一点点,你的封面最终会在边缘留下一个微小的、白色的、没有印刷的边缘。使用出血和修剪意味着你的封面每次都能看起来正确。
### 在 Scribus 中设置书籍的封面文档
要在 Scribus 中创建新文档,请从定义文档尺寸的<ruby>新文档<rt>New Document</rt></ruby>对话框开始。单击<ruby>出血<rt>Bleeds</rt></ruby>选项卡,并输入 PDF 模板所说的出血尺寸。Lulu 图书通常在所有边缘使用 0.125 英寸的出血量。
对于 Scribus 中的文档总尺寸,你不能只使用 PDF 模板上的文档总尺寸。如果这样做,你的 Scribus 文档的尺寸会出现错误。相反,你需要做一些数学计算来获取正确的尺寸。
看下 PDF 模板中的<ruby>文件总尺寸(含出血)<rt>Total Document Size (with bleed)</rt></ruby>。这是将要发送给打印机的 PDF 的总尺寸,它包括封底、书脊和封面(包含出血)。要在 Scribus 中输入正确的尺寸,你必须从所有边缘中减去出血。例如,我最新的书的尺寸是<ruby>四开本<rt>Crown Quarto</rt></ruby>,装订后尺寸为 7.44" x 9.68",书脊宽度为 0.411"。加上 0.125" 的出血量,文件总尺寸(含出血)是 15.541" × 9.93"。因此,我在 Scribus 中的文档尺寸是:
  * 宽:15.541" - (2 × 0.125") = 15.291"
  * 高:9.93" - (2 × 0.125") = 9.68"
![Scribus document setup][8]
这将设置一个新的适合我的书的封面尺寸的 Scribus 文档。新的 Scribus 文档尺寸应与 PDF 模板上列出的“文件总尺寸(含出血)”完全匹配。
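这个换算可以用一条命令快速验证(数值取自上面的例子):

```shell
# 从“文件总尺寸(含出血)”反推 Scribus 文档尺寸:每条边各减去一份出血
awk 'BEGIN {
  bleed = 0.125
  total_w = 15.541; total_h = 9.93        # 模板上的总尺寸(英寸)
  printf "width=%.3f height=%.2f\n", total_w - 2 * bleed, total_h - 2 * bleed
}'
```

换用你自己的模板数值时,只需替换 `bleed`、`total_w` 和 `total_h` 三个变量。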
### 从书脊开始
在 Scribus 中创建新的书籍封面时,我喜欢从书脊区域开始。这可以帮助我验证我是否在 Scribus 中正确定义了文档。
使用<ruby>矩形<rt>Rectangle</rt></ruby>工具在文档上绘制一个彩色方框,书脊需要出现在那里。你不必完全按照正确的尺寸和位置来绘制,只要大小差不多并使用<ruby>属性<rt>Properties</rt></ruby>来设置正确的值即可。在形状的属性中,选择左上角基点,然后输入书脊需要放在的 x、y 位置和尺寸。同样,你需要做一些数学计算,并使用 PDF 模板上的尺寸作为参考。
![Empty Scribus document][9]
例如,我的书的修边尺寸是 7.44"×9.68",这是印刷厂修边后的封面和封底的尺寸。我的书的书脊大小是 0.411",出血量是 0.125"。也就是说,书脊的左上角 X、Y 的正确位置是:
  * X 位置(出血量 + 裁剪宽度):0.125" + 7.44" = 7.565"
* Y 位置(减去出血量):-0.125"
矩形的尺寸是我的书封面的全高(包括出血)和 PDF 模板中标明的书脊宽度。
* 宽度0.411"
* 高度9.93"
将矩形的<ruby>填充<rt>Fill</rt></ruby>设置为你喜欢的颜色,将<ruby>笔触<rt>Stroke</rt></ruby>设置为<ruby><rt>None</rt></ruby>以隐藏边界。如果你正确地定义了 Scribus 文档,你应该最终得到一个矩形,它可以延伸到位于文档中心的图书封面的顶部和底部边缘。
![Book spine in Scribus][10]
如果矩形与文档不完全匹配,可能是你在创建 Scribus 文档时设置了错误的尺寸。由于你还没有在书的封面上花太多精力,所以可能最容易的做法是重新开始,而不是尝试修复你的错误。
### 剩下的就看你自己了
接下来,你可以创建你的书的封面的其余部分。始终使用 PDF 模板作为指导:封底在左边,封面在右边。
我可以做一个简单的书籍封面,但我缺乏艺术能力,无法创造出真正醒目的设计。在自己设计了几个书的封面后,我对那些能设计出好封面的人产生了敬意。但如果你只是需要制作一个简单的封面,你可以通过开源软件自己动手。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/open-source-publishing-scribus
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_read_list_stack_study.png?itok=GZxb9OAv (Stack of books for reading)
[2]: https://opensource.com/article/20/8/c-programming-cheat-sheet
[3]: https://www.lulu.com/
[4]: https://www.scribus.net/
[5]: https://opensource.com/sites/default/files/uploads/lulu-download-template.jpg (Lulu Design your Cover page)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/lulu-pdf-template.jpg (Lulu's cover template)
[8]: https://opensource.com/sites/default/files/uploads/scribus-new-document.jpg (Scribus document setup)
[9]: https://opensource.com/sites/default/files/uploads/scribus-empty-document.jpg (Empty Scribus document)
[10]: https://opensource.com/sites/default/files/uploads/scribus-spine-rectangle.jpg (Book spine in Scribus)


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (MATRIXKOO)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Huawei ban could complicate 5G deployment)
[#]: via: (https://www.networkworld.com/article/3575408/huawei-ban-could-complicate-5g-deployment.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Huawei ban could complicate 5G deployment
======
Bans on Huawei, ZTE mean fewer choices for wireless carriers building 5G services
As carriers race to build out their 5G networks, options for buying the gear they need are fewer in the U.S. than in other countries thanks to federal pressure, which could be slowing deployments.
China-based Huawei and ZTE were both banned from providing equipment to the government itself in the Defense Authorization Act of 2018, and a general import ban followed shortly thereafter. That has changed the competitive landscape considerably, and raises questions about how the shape of 5G in America could change as a consequence.
Michael Porowski, a Gartner analyst, said it’s possible, though not completely clear, that the restriction on where carriers can buy their 5G equipment is slowing deployment.
“There’s still an ample number of suppliers: Ericsson, Nokia, Samsung,” he said. Both ZTE and Huawei are more economical options, he said, and “if they were available, you might see a bit faster adoption.”
There’s a sense in the industry that Huawei equipment is both sophisticated and priced to move, according to Christian Renaud, research director at 451 Research, but there’s also no clear alternative that carriers will gravitate to in the absence of Huawei.
“Here, you’ll have carriers that have standardized on Nokia or Ericsson,” he said. “[And] it’s too soon to tell who’s most sophisticated because deployments are so limited.”
That contention is borne out by coverage maps from the carriers themselves. While they have been quick to trumpet the presence of 5G service in many U.S. markets, the actual geographic coverage is mostly restricted to public spaces in the urban cores of major cities. The lion’s share of 5G deployment, in short, is yet to come.
There are good reasons for slow deployment. 5G access points have to be deployed far more densely than earlier generation wireless technology, making the process more involved and time-consuming. There’s also the issue that the number of currently available 5G user devices is vanishingly small.
“It’s like saying ‘I’ve got this eight-lane superhighway’ before someone has invented cars,” said Renaud.
Part of the current goal for equipment vendors is demonstrating the potentials of 5G through private deployments that use the technology for backhaul, supporting [IoT][8] and other link use cases specific to a single enterprise.
“[The equipment vendors] are all pushing hard on the private piece, and then they can use that to say, ‘Look, I’m working the Brooklyn dockyards or something in a private 5G network, so … if I can do that I can run people’s YouTube connections,’” Renaud said.
An unfortunate result of the China ban might be a splintering of the specifications that vendors follow to meet 5G requirements. If non-China vendors have to make one version for markets where Huawei and ZTE are allowed and a different version for places they are not, it could create a new headache for them, according to Renaud.
“That’ll shift the burden of costs to the device makers to try to support the different carrier implementations,” he said. “We’ll have created nontechnical barriers.” And those, in turn, could cause customer experience to suffer.
But 5G has embraced a move toward greater interoperability with [open radio access network technology][9] that standardizes the software interfaces between layers of the 5G stack. The push is embraced by carriers and equipment vendors alike, making interoperability more likely, which could draw in even more players in the future.
Of course, even with pervasive interoperability, equipment makers will still try to build customer dependency. “There’s always going to be a tug of war between vendors trying to lock in customers and customers trying to stay vendor-neutral,” he said. “That’s not going to change a lot. [But] we’ve obviously seen a move toward trying to be more open.”
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3575408/huawei-ban-could-complicate-5g-deployment.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3203489/what-is-5g-fast-wireless-technology-for-enterprises-and-phones.html
[2]: https://www.networkworld.com/article/3568253/how-5g-frequency-affects-range-and-speed.html
[3]: https://www.networkworld.com/article/3568614/private-5g-can-solve-some-enterprise-problems-that-wi-fi-can-t.html
[4]: https://www.networkworld.com/article/3488799/private-5g-keeps-whirlpool-driverless-vehicles-rolling.html
[5]: https://www.networkworld.com/article/3570724/5g-can-make-for-cost-effective-private-backhaul.html
[6]: https://www.networkworld.com/article/3529291/cbrs-wireless-can-bring-private-5g-to-enterprises.html
[7]: https://www.networkworld.com/newsletters/signup.html
[8]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[9]: https://www.networkworld.com/article/3574977/carriers-vendors-work-to-promote-5g-network-flexibility-with-open-standards.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world


@ -1,79 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The New YubiKey 5C NFC Security Key Lets You Use NFC to Easily Authenticate Your Secure Devices)
[#]: via: (https://itsfoss.com/yubikey-5c-nfc/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
The New YubiKey 5C NFC Security Key Lets You Use NFC to Easily Authenticate Your Secure Devices
======
If you are extra cautious about securing your online accounts with the best possible authentication method, you probably know about [Yubico][1]. They make hardware authentication security keys to replace [two-factor authentication][2] and get rid of the password authentication system for your online accounts.
Basically, you just plug the security key into your computer or use the NFC on your smartphone to unlock access to accounts. In this way, your authentication method stays completely offline.
![][3]
Of course, you can always use a [good password manager for Linux][4] available out there. But if you own or work for a business, or are just extra cautious about your privacy and security and want to add an extra layer of protection, these hardware security keys could be worth a try. These devices have gained some popularity lately.
Yubico’s latest product, [YubiKey 5C NFC][5], is impressive because it can be used both as a USB Type-C key and over NFC (just touch your device with the key).
Here, lets take a look at an overview of this security key.
_Please note that Its FOSS is an affiliate partner of Yubico. Please read our [affiliate policy][6]._
### Yubico 5C NFC: Overview
![][7]
YubiKey 5C NFC is the latest offering that uses both USB-C and NFC. So, you can easily plug it in on Windows, macOS, and Linux computers. In addition to the computers, you can also use it with your Android or iOS smartphones or tablets.
Not just limited to USB-C and NFC support (which is a great thing), it also happens to be the world’s first multi-protocol security key with smart card support as well.
Hardware security keys aren’t that common because of their cost for an average consumer. But, amidst the pandemic, with the rise of remote work, a safer authentication system will definitely come in handy.
Heres what Yubico mentioned in their press release:
> “The way that people work and go online is vastly different today than it was a few years ago, and especially within the last several months. Users are no longer tied to just one device or service, nor do they want to be. That’s why the YubiKey 5C NFC is one of our most sought-after security keys — it’s compatible with a majority of modern-day computers and mobile phones and works well across a range of legacy and modern applications. At the end of the day, our customers crave security that just works no matter what.”  said Guido Appenzeller, Chief Product Officer, Yubico.
The protocols that YubiKey 5C NFC supports are FIDO2, WebAuthn, FIDO U2F, PIV (smart card), OATH-HOTP and OATH-TOTP (hash-based and time-based one-time passwords), [OpenPGP][8], YubiOTP, and challenge-response.
Considering all those protocols, you can easily secure any online account that supports hardware authentication while also having the ability to access identity access management (IAM) solutions. So, it’s a great option for both individual users and enterprises.
### Pricing & Availability
The YubiKey 5C NFC costs $55. You can order it directly from their [online store][5] or get it from any authorized resellers in your country. The cost might also vary depending on the shipping charges, but $55 seems to be a sweet spot for serious users who want the best level of security for their online accounts.
Its also worth noting that you get volume discounts if you order more than two YubiKeys.
[Order YubiKey 5C NFC][5]
### Wrapping Up
No matter whether you want to secure your cloud storage account or any other online account, Yubico’s latest offering is something that’s worth taking a look at if you don’t mind spending some money to secure your data.
Have you ever used YubiKey or some other secure key like LibremKey etc? How is your experience with it? Do you think these devices are worth spending the extra money?
--------------------------------------------------------------------------------
via: https://itsfoss.com/yubikey-5c-nfc/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/yubikey/
[2]: https://ssd.eff.org/en/glossary/two-factor-authentication
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/yubikey-5c-nfc-desktop.jpg?resize=800%2C671&ssl=1
[4]: https://itsfoss.com/password-managers-linux/
[5]: https://itsfoss.com/recommends/yubico-5c-nfc/
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/yubico-5c-nfc.jpg?resize=800%2C671&ssl=1
[8]: https://www.openpgp.org/


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,422 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Analyze Linux startup performance)
[#]: via: (https://opensource.com/article/20/9/systemd-startup-configuration)
[#]: author: (David Both https://opensource.com/users/dboth)
Analyze Linux startup performance
======
Use systemd-analyze to get insights and solve problems with Linux
startup performance.
![Magnifying glass on code][1]
Part of the system administrator's job is to analyze the performance of systems and to find and resolve problems that cause poor performance and long startup times. Sysadmins also need to check other aspects of systemd configuration and usage.
The systemd init system provides the `systemd-analyze` tool that can help uncover performance problems and other important systemd information. In a previous article, [_Analyzing systemd calendar and timespans_][2], I used `systemd-analyze` to analyze timestamps and timespans in systemd timers, but this tool has many other uses, some of which I will explore in this article.
### Startup overview
The Linux startup sequence is a good place to begin exploring because many `systemd-analyze` tool functions are targeted at startup. But first, it is important to understand the difference between boot and startup. The boot sequence starts with the BIOS power-on self test (POST) and ends when the kernel is finished loading and takes control of the host system, which is the beginning of startup and the point when the systemd journal begins.
In the second article in this series, [_Understanding systemd at startup on Linux_][3], I discuss startup in a bit more detail with respect to what happens and in what sequence. In this article, I want to examine the startup sequence to look at the amount of time it takes to go through startup and which tasks take the most time.
The results I'll show below are from my primary workstation, which is much more interesting than a virtual machine's results. This workstation consists of an ASUS TUF X299 Mark 2 motherboard, an Intel i9-7960X CPU with 16 cores and 32 CPUs (threads), and 64GB of RAM. Some of the commands below can be run by a non-root user, but I will use root in this article to prevent having to switch between users.
There are several options for examining the startup sequence. The simplest form of the `systemd-analyze` command displays an overview of the amount of time spent in each of the main sections of startup, the kernel startup, loading and running `initrd` (i.e., initial ramdisk, a temporary system image that is used to initialize some hardware and mount the `/` [root] filesystem), and userspace (where all the programs and daemons required to bring the host up to a usable state are loaded). If no subcommand is passed to the command, `systemd-analyze time` is implied:
```
[root@david ~]# systemd-analyze
Startup finished in 53.921s (firmware) + 2.643s (loader) + 2.236s (kernel) + 4.348s (initrd) + 10.082s (userspace) = 1min 13.233s
graphical.target reached after 10.071s in userspace
[root@david ~]#
```
The most notable data in this output is the amount of time spent in firmware (BIOS): almost 54 seconds. This is an extraordinary amount of time, and none of my other physical systems take anywhere near as long to get through BIOS.
My System76 Oryx Pro laptop spends only 8.506 seconds in BIOS, and all of my home-built systems take a bit less than 10 seconds. After some online searches, I found that this motherboard is known for its inordinately long BIOS boot time. My motherboard never "just boots." It always hangs, and I need to do a power off/on cycle, and then BIOS starts with an error, and I need to press F1 to enter BIOS configuration, from where I can select the boot drive and finish the boot. This is where the extra time comes from.
Not all hosts show firmware data. My unscientific experiments lead me to believe that this data is shown only for Intel generation 9 processors or above. But that could be incorrect.
This overview of the boot startup process is interesting and provides good (though limited) information, but there is much more information available about startup, as I'll describe below.
### Assigning blame
You can use `systemd-analyze blame` to discover which systemd units take the most time to initialize. The results are displayed in order by the amount of time they take to initialize, from most to least:
```
[root@david ~]# systemd-analyze blame
       5.417s NetworkManager-wait-online.service                                                      
       3.423s dracut-initqueue.service                                                                
       2.715s systemd-udev-settle.service                                                              
       2.519s fstrim.service                                                                          
       1.275s udisks2.service                                                                          
       1.271s smartd.service                                                                          
        996ms upower.service                                                                          
        637ms lvm2-monitor.service                                                                    
        533ms lvm2-pvscan@8:17.service                                                                
        520ms dmraid-activation.service                                                                
        460ms vboxdrv.service                                                                          
        396ms initrd-switch-root.service
<SNIP removed lots of entries with increasingly small times>
```
Because many of these services start in parallel, the numbers may add up to significantly more than the total given by `systemd-analyze time` for everything after the BIOS. All of these are small numbers, so I cannot find any significant savings here.
The data from this command can provide indications about which services you might consider to improve boot times. Services that are not used can be disabled. There does not appear to be any single service that is taking an excessively long time during this startup sequence. You may see different results for each boot and startup.
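Because the units start in parallel, one way to gauge the overlap is to sum the `blame` entries and compare the total with the `userspace` figure from `systemd-analyze time`. Here is a rough sketch (the `awk` parsing assumes only the `s` and `ms` suffixes shown above; on a live system, pipe `systemd-analyze blame` into the same script instead of using the here-document):

```shell
# Sum per-unit initialization times; the sample data comes from the listing above
awk '{
  t = $1
  if (t ~ /ms$/)     { sub(/ms$/, "", t); total += t / 1000 }  # milliseconds
  else if (t ~ /s$/) { sub(/s$/,  "", t); total += t }         # seconds
} END { printf "summed: %.3fs\n", total }' <<'EOF'
5.417s NetworkManager-wait-online.service
3.423s dracut-initqueue.service
2.715s systemd-udev-settle.service
996ms upower.service
EOF
```

A summed figure well above the `userspace` total confirms that most of these units initialized concurrently rather than back to back.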
### Critical chains
Like the critical path in project management, a _critical chain_ shows the time-critical chain of events that take place during startup. These are the systemd units you want to look at if startup is slow, as they are the ones that would cause delays. This tool does not display all the units that start, only those in this critical chain of events:
```
[root@david ~]# systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
graphical.target @10.071s
└─lxdm.service @10.071s
  └─plymouth-quit.service @10.047s +22ms
    └─systemd-user-sessions.service @10.031s +7ms
      └─remote-fs.target @10.026s
        └─remote-fs-pre.target @10.025s
          └─nfs-client.target @4.636s
            └─gssproxy.service @4.607s +28ms
              └─network.target @4.604s
                └─NetworkManager.service @4.383s +219ms
                  └─dbus-broker.service @4.434s +136ms
                    └─dbus.socket @4.369s
                      └─sysinit.target @4.354s
                        └─systemd-update-utmp.service @4.345s +9ms
                          └─auditd.service @4.301s +42ms
                            └─systemd-tmpfiles-setup.service @4.254s +42ms
                              └─import-state.service @4.233s +19ms
                                └─local-fs.target @4.229s
                                  └─Virtual.mount @4.019s +209ms
                                    └─systemd-fsck@dev-mapper-vg_david2\x2dVirtual.service @3.742s +274ms
                                      └─local-fs-pre.target @3.726s
                                        └─lvm2-monitor.service @356ms +637ms
                                          └─dm-event.socket @319ms
                                            └─-.mount
                                              └─system.slice
                                                └─-.slice
[root@david ~]#
```
The numbers preceded with `@` show the absolute number of seconds since startup began when the unit becomes active. The numbers preceded by `+` show the amount of time it takes for the unit to start.
### System state
Sometimes you need to determine the system's current state. The `systemd-analyze dump` command dumps a _massive_ amount of data about the current system state. It starts with a list of the primary boot timestamps, a list of each systemd unit, and a complete description of the state of each:
```
[root@david ~]# systemd-analyze dump
Timestamp firmware: 1min 7.983523s
Timestamp loader: 3.872325s
Timestamp kernel: Wed 2020-08-26 12:33:35 EDT
Timestamp initrd: Wed 2020-08-26 12:33:38 EDT
Timestamp userspace: Wed 2020-08-26 12:33:42 EDT
Timestamp finish: Wed 2020-08-26 16:33:56 EDT
Timestamp security-start: Wed 2020-08-26 12:33:42 EDT
Timestamp security-finish: Wed 2020-08-26 12:33:42 EDT
Timestamp generators-start: Wed 2020-08-26 16:33:42 EDT
Timestamp generators-finish: Wed 2020-08-26 16:33:43 EDT
Timestamp units-load-start: Wed 2020-08-26 16:33:43 EDT
Timestamp units-load-finish: Wed 2020-08-26 16:33:43 EDT
Timestamp initrd-security-start: Wed 2020-08-26 12:33:38 EDT
Timestamp initrd-security-finish: Wed 2020-08-26 12:33:38 EDT
Timestamp initrd-generators-start: Wed 2020-08-26 12:33:38 EDT
Timestamp initrd-generators-finish: Wed 2020-08-26 12:33:38 EDT
Timestamp initrd-units-load-start: Wed 2020-08-26 12:33:38 EDT
Timestamp initrd-units-load-finish: Wed 2020-08-26 12:33:38 EDT
-> Unit system.slice:
        Description: System Slice
        Instance: n/a
        Unit Load State: loaded
        Unit Active State: active
        State Change Timestamp: Wed 2020-08-26 12:33:38 EDT
        Inactive Exit Timestamp: Wed 2020-08-26 12:33:38 EDT
        Active Enter Timestamp: Wed 2020-08-26 12:33:38 EDT
        Active Exit Timestamp: n/a
        Inactive Enter Timestamp: n/a
        May GC: no
<SNIP Deleted a bazillion lines of output>
```
On my main workstation, this command generated a stream of 49,680 lines and about 1.66MB. This command is very fast, so you don't need to wait for the results.
I do like the wealth of detail provided for the various connected devices, such as storage. Each systemd unit has a section with details such as modes for various runtimes, cache, and log directories, the command line used to start the unit, the process ID (PID), the start timestamp, as well as memory and file limits.
The man page for `systemd-analyze` shows the `systemd-analyze --user dump` option, which is intended to display information about the internal state of the user manager. This fails for me, and internet searches indicate that there may be a problem with it. In systemd, `--user` instances are used to manage and control the resources for the hierarchy of processes belonging to each user. The processes for each user are part of a control group, which I'll cover in a future article.
### Analytic graphs
Most pointy-haired-bosses (PHBs) and many good managers find pretty graphs easier to read and understand than the text-based system performance data I usually prefer. Sometimes, though, even I like a good graph, and `systemd-analyze` provides the capability to display boot/startup data in an [SVG][4] vector graphics chart.
The following command generates a vector graphics file that displays the events that take place during boot and startup. It only takes a few seconds to generate this file:
```
[root@david ~]# systemd-analyze plot > /tmp/bootup.svg
```
This command creates an SVG, which is a text file that defines a series of graphic vectors that applications, including Image Viewer, Ristretto, Okular, Eye of Mate, LibreOffice Draw, and others, use to generate a graph. These applications process SVG files to create an image.
I used LibreOffice Draw to render a graph. The graph is huge, and you need to zoom in considerably to make out any detail. Here is a small portion of it:
![The bootup.svg file displayed in LibreOffice Draw.][5]
(David Both, [CC BY-SA 4.0][6])
The bootup sequence is to the left of the zero (0) on the timeline in the graph, and the startup sequence is to the right of zero. This small portion shows the kernel, `initrd`, and the processes `initrd` started.
This graph shows at a glance what started when, how long it took to start up, and the major dependencies. The critical path is highlighted in red.
Another command that generates graphical output is `systemd-analyze dot`. It generates textual dependency-graph descriptions in [DOT][7] format. The resulting data stream is then piped through the `dot` utility, which is part of a family of programs that can be used to generate vector graphics files from various types of data. These SVG files can also be processed by the tools listed above.
First, generate the file. This took almost nine minutes on my primary workstation:
```
[root@david ~]# time systemd-analyze dot | dot -Tsvg > /tmp/test.svg
   Color legend: black     = Requires
                 dark blue = Requisite
                 dark grey = Wants
                 red       = Conflicts
                 green     = After
real    8m37.544s
user    8m35.375s
sys     0m0.070s
[root@david ~]#
```
I won't reproduce the output here because the resulting graph is pretty much spaghetti. But you should try it and view the result to see what I mean.
### Conditionals
One of the more interesting, yet somewhat generic, capabilities I discovered while reading the `systemd-analyze(1)` man page is the `condition` subcommand. (Yes—I do read the man pages, and it is amazing what I have learned this way!) This `condition` subcommand can be used to test the conditions and asserts that can be used in systemd unit files.
It can also be used in scripts to evaluate one or more conditions—it returns a zero (0) if all are met or a one (1) if any condition is not met. In either case, it also spews text about its findings.
The example below, from the man page, is a bit complex. It tests for a kernel version between 4.0 and 5.1, that the host is running on AC power, that the system architecture is anything but ARM, and that the file `/etc/os-release` exists. I added the `echo $?` statement to print the return code.
```
[root@david ~]# systemd-analyze condition 'ConditionKernelVersion = ! <4.0' \
                    'ConditionKernelVersion = >=5.1' \
                    'ConditionACPower=|false' \
                    'ConditionArchitecture=|!arm' \
                    'AssertPathExists=/etc/os-release' ; \
echo $?
test.service: AssertPathExists=/etc/os-release succeeded.
Asserts succeeded.
test.service: ConditionArchitecture=|!arm succeeded.
test.service: ConditionACPower=|false failed.
test.service: ConditionKernelVersion=>=5.1 succeeded.
test.service: ConditionKernelVersion=!<4.0 succeeded.
Conditions succeeded.
0
[root@david ~]#
```
The list of conditions and asserts starts around line 600 on the `systemd.unit(5)` man page.
### Listing configuration files
The `systemd-analyze` tool provides a way to send the contents of various configuration files to `STDOUT`, as shown here. The base directory is `/etc/`:
```
[root@david ~]# systemd-analyze cat-config systemd/system/display-manager.service
# /etc/systemd/system/display-manager.service
[Unit]
Description=LXDM (Lightweight X11 Display Manager)
#Documentation=man:lxdm(8)
Conflicts=[getty@tty1.service][8]
After=systemd-user-sessions.service [getty@tty1.service][8] plymouth-quit.service livesys-late.service
#Conflicts=plymouth-quit.service
[Service]
ExecStart=/usr/sbin/lxdm
Restart=always
IgnoreSIGPIPE=no
#BusName=org.freedesktop.lxdm
[Install]
Alias=display-manager.service
[root@david ~]#
```
This is a lot of typing to do nothing more than a standard `cat` command does. I find the next command a tiny bit helpful. It can search out files with the specified pattern within the standard systemd locations:
```
[root@david ~]# systemctl cat backup*
# /etc/systemd/system/backup.timer
# This timer unit runs the local backup program
# (C) David Both
# Licensed under GPL V2
#
[Unit]
Description=Perform system backups
Requires=backup.service
[Timer]
Unit=backup.service
OnCalendar=*-*-* 00:15:30
[Install]
WantedBy=timers.target
# /etc/systemd/system/backup.service
# This service unit runs the rsbu backup program
# By David Both
# Licensed under GPL V2
#
[Unit]
Description=Backup services using rsbu
Wants=backup.timer
[Service]
Type=oneshot
Environment="HOME=/root"
ExecStart=/usr/local/bin/rsbu -bvd1
ExecStart=/usr/local/bin/rsbu -buvd2
[Install]
WantedBy=multi-user.target
[root@david ~]#
```
Both of these commands preface the contents of each file with a comment line containing the file's full path and name.
### Unit file verification
After creating a new unit file, it can be helpful to verify that its syntax is correct. This is what the `verify` subcommand does. It can list directives that are spelled incorrectly and call out missing service units:
```
[root@david ~]# systemd-analyze verify /etc/systemd/system/backup.service
```
Adhering to the Unix/Linux philosophy that "silence is golden," a lack of output messages means that there are no errors in the scanned file.
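As an illustration (a sketch, not from the article), feeding `verify` a unit file with a misspelled directive, like the one below, breaks that silence with a warning about the unknown key:

```
[Unit]
Description=Deliberately broken example unit

[Service]
# Typo: this directive should be ExecStart
ExecStrat=/usr/bin/true
```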
### Security
The `security` subcommand checks the security level of specified services. It only works on service units and not on other types of unit files:
```
[root@david ~]# systemd-analyze security display-manager
  NAME                                                        DESCRIPTION                                                     >
✗ PrivateNetwork=                                             Service has access to the host's network                        >
✗ User=/DynamicUser=                                          Service runs as root user                                       >
✗ CapabilityBoundingSet=~CAP_SET(UID|GID|PCAP)                Service may change UID/GID identities/capabilities              >
✗ CapabilityBoundingSet=~CAP_SYS_ADMIN                        Service has administrator privileges                            >
✗ CapabilityBoundingSet=~CAP_SYS_PTRACE                       Service has ptrace() debugging abilities                        >
✗ RestrictAddressFamilies=~AF_(INET|INET6)                    Service may allocate Internet sockets                           >
✗ RestrictNamespaces=~CLONE_NEWUSER                           Service may create user namespaces                              >
✗ RestrictAddressFamilies=~…                                  Service may allocate exotic sockets                             >
✗ CapabilityBoundingSet=~CAP_(CHOWN|FSETID|SETFCAP)           Service may change file ownership/access mode/capabilities unres>
✗ CapabilityBoundingSet=~CAP_(DAC_*|FOWNER|IPC_OWNER)         Service may override UNIX file/IPC permission checks            >
✗ CapabilityBoundingSet=~CAP_NET_ADMIN                        Service has network configuration privileges                    >
✗ CapabilityBoundingSet=~CAP_SYS_MODULE                       Service may load kernel modules
<SNIP>
✗ CapabilityBoundingSet=~CAP_SYS_TTY_CONFIG                   Service may issue vhangup()                                     >
✗ CapabilityBoundingSet=~CAP_WAKE_ALARM                       Service may program timers that wake up the system              >
✗ RestrictAddressFamilies=~AF_UNIX                            Service may allocate local sockets                              >
→ Overall exposure level for backup.service: 9.6 UNSAFE 😨
lines 34-81/81 (END)
```
Yes, the emoji is part of the output. But, of course, many services need pretty much complete access to everything in order to do their work. I ran this program against several services, including my own backup service; the results may differ, but the bottom line seems to be mostly the same.
This tool would be very useful for checking and fixing userspace service units in security-critical environments. I don't think it has much to offer for most of us.
### Final thoughts
This powerful tool offers some interesting and amazingly useful options. Much of what this article explores is about using `systemd-analyze` to provide insights into Linux's startup performance using systemd. It can also analyze other aspects of systemd.
Some of these tools are of limited use, and a couple should be forgotten completely. But most can be used to good effect when resolving problems with startup and other systemd functions.
### Resources
There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup. This list has grown since I started this series of articles to reflect the research I have done.
* The [systemd.unit(5) manual page][9] contains a nice list of unit file sections and their configuration options along with concise descriptions of each.
* The Fedora Project has a good, practical [guide to systemd][10]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd.
* The Fedora Project also has a good [cheat sheet][11] that cross-references the old SystemV commands to comparable systemd ones.
* Red Hat documentation contains a good description of the [Unit file structure][12] as well as other important information.  
* For detailed technical information about systemd and the reasons for creating it, check out Freedesktop.org's [description of systemd][13].
* [Linux.com][14]'s "More systemd fun" offers more advanced systemd [information and tips][15].
There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of everything else good that has been written about systemd and its ecosystem is based on these papers.
* [Rethinking PID 1][16]
* [systemd for Administrators, Part I][17]
* [systemd for Administrators, Part II][18]
* [systemd for Administrators, Part III][19]
* [systemd for Administrators, Part IV][20]
* [systemd for Administrators, Part V][21]
* [systemd for Administrators, Part VI][22]
* [systemd for Administrators, Part VII][23]
* [systemd for Administrators, Part VIII][24]
* [systemd for Administrators, Part IX][25]
* [systemd for Administrators, Part X][26]
* [systemd for Administrators, Part XI][27]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/systemd-startup-configuration
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0 (Magnifying glass on code)
[2]: https://opensource.com/article/20/7/systemd-calendar-timespans
[3]: https://opensource.com/article/20/5/systemd-startup?utm_campaign=intrel
[4]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics
[5]: https://opensource.com/sites/default/files/uploads/bootup.svg-graph.png (The bootup.svg file displayed in LibreOffice Draw.)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://en.wikipedia.org/wiki/DOT_(graph_description_language)
[8]: mailto:getty@tty1.service
[9]: https://man7.org/linux/man-pages/man5/systemd.unit.5.html
[10]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html
[11]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
[12]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/managing-services-with-systemd_configuring-basic-system-settings#Managing_Services_with_systemd-Unit_File_Structure
[13]: https://www.freedesktop.org/wiki/Software/systemd/
[14]: http://Linux.com
[15]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/
[16]: http://0pointer.de/blog/projects/systemd.html
[17]: http://0pointer.de/blog/projects/systemd-for-admins-1.html
[18]: http://0pointer.de/blog/projects/systemd-for-admins-2.html
[19]: http://0pointer.de/blog/projects/systemd-for-admins-3.html
[20]: http://0pointer.de/blog/projects/systemd-for-admins-4.html
[21]: http://0pointer.de/blog/projects/three-levels-of-off.html
[22]: http://0pointer.de/blog/projects/changing-roots
[23]: http://0pointer.de/blog/projects/blame-game.html
[24]: http://0pointer.de/blog/projects/the-new-configuration-files.html
[25]: http://0pointer.de/blog/projects/on-etc-sysinit.html
[26]: http://0pointer.de/blog/projects/instances.html
[27]: http://0pointer.de/blog/projects/inetd.html

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your Raspberry Pi fleet with Ansible)
[#]: via: (https://opensource.com/article/20/9/raspberry-pi-ansible)
[#]: author: (Ken Fallon https://opensource.com/users/kenfallon)
Manage your Raspberry Pi fleet with Ansible
======
A solution to the problem of updating difficult-to-reach Raspberry Pis
in the enterprise.
![Raspberries with pi symbol overlay][1]
The Raspberry Pi is a small, versatile device that makes interfacing with the real world a breeze for mere mortals. The Raspberry Pi Foundation's idea was to sell the devices at such a low cost that breaking one would be sad—but not a disaster. This is one reason it has been a massive success as an [educational tool][2]. But their usefulness has not escaped the business world, where they are becoming a valuable tool for automating the physical world.
Whether they are used for powering information displays, automating testing, controlling machinery, monitoring an environment, or doing other tasks, enterprises see Raspberry Pis as serious devices for doing serious tasks. Each model has a long product lifecycle—even the older models ([1B+][3], [2B][4], [3A+][5], [3B][6], and [3B+][7]) will remain in production until at least January 2026. There is little risk that they will go obsolete, so you can maintain a sufficiently large stock and treat them as modular components that you replace rather than fix.
### Stable hardware vs. changing software
While you can rely on the hardware to remain constant, the same is not true for the software. The Raspberry Pi's official supported operating system is [Raspberry Pi OS][8] (previously called Raspbian), and it should be updated regularly to get the latest [security and bug fixes][9].
This presents a problem. Because Raspberry Pis provide a bridge between the physical and virtual worlds, they are often installed in difficult-to-reach locations. They also tend to be installed by hardware folks, typically electricians for plants and assembly technicians for products. You do not want to waste their time by requiring them to connect a keyboard and monitor, log in [to run `raspi-config`][10], install software with `apt-get`, and then configure the software.
Since Raspberry Pi OS boots off an SD card, one approach is to always maintain an up-to-date version of the software on the SD card that the installer can just plug (and hot glue) in. A good quality assurance (QA) department will keep the SD cards under version control, so you can be assured that all new installations are on the latest release. But this solution is costly to maintain since every software update requires preparing a new image and burning it to all the SD cards. It also doesn't address how to fix all your existing devices. In some cases, you may need to create custom images for specific Raspberry Pis doing specific jobs, and it may be unavoidable that you need an installer to connect a keyboard and monitor to configure something.
A better approach is to use the same minimal base operating system install and then use [network boot][11] to maintain all the customizations and updates on the network. This requires maintaining just one base image, which is easier to manage, so it is a good approach if you have a reliable network infrastructure. Unfortunately, not all networks support this method; as the Raspberry Pi's network boot documentation says: "Due to the huge range of networking devices available, we can't guarantee that network booting will work with any device." Sadly, it is no longer an option on the [Raspberry Pi 4][12]. Furthermore, this is not an option when devices are disconnected from the network for a long period of time.
The better goal, therefore, is to produce a common base Raspberry Pi OS image that doesn't change often but, once it's installed, can automatically be customized, maintained, and managed remotely.
### Create the base image
Your base image will almost certainly need small changes from the default Raspberry Pi OS image. Fortunately, you only need to recreate the base image if the Raspberry Pi OS image is updated or you need to change something in your configuration. The typical time between major versions of Raspberry Pi OS is about two years, which is a good target maintenance lifecycle. It gives you plenty of time to swap out older devices for new ones while keeping things manageable for the QA department to maintain releases. Older versions will still be supported for security and bug fixes for [some time][13] after that.
On my Hacker Public Radio episode _[Safely enabling SSH in the default Raspbian image][14]_ in 2017, I walked through the steps to automate updating the base image. The script I created:
* Downloads the latest image ZIP file
* Verifies it is valid
* Extracts the image itself
* Enables SSH for secure remote management
* Changes the default passwords for the root and Pi users
* Secures the SSH server on the Pi
Since then, I have augmented the script to:
* Enable connections to a WiFi network (`wpa_supplicant.conf`)
* Load its configuration from an INI file, keeping sensitive information separate from the main script.
* Use [`losetup` to simplify mounting][15] the image
* Create a [firstboot script][16]
These changes ensure that the devices are locked down before deploying them. You can find an updated version of the [fix-ssh-on-pi script][17] on GitHub.
Now is a good time to modify the script for your environment and especially to add any security keys or digital certificates necessary for authentication. However, it's best to hold off adding any custom applications or configurations at this point, as they can be added later. For the most part, the image will behave like a generic Raspberry Pi OS image, meaning it will boot and resize the SD card as usual and install the typical default software and firmware.
The notable addition is support for a firstboot script. This is the glue that makes the Raspberry Pi run your custom configuration after the first time it configures itself. Again, I encourage you to modify the script for your environment. For example, you can have the device register itself, run through a system test and diagnostic procedures, pull down a client application, etc.
If you don't want to customize it, it will do the bare minimum needed to get your Raspberry Pi on the network so that it can be uniquely identified by the network-management software.
### Set up automatic management
If you're managing servers in a [DevOps][18] environment, you won't blink an eye at the idea of using [configuration management software][19] to control your Raspberry Pi devices. If you use a tool that requires an agent, you can include the agent software as part of the base image. Given the resources on the Raspberry Pi, though, an agentless solution such as [Ansible][20] might be the best option. It just uses SSH and Python, doesn't require any additional software on the client, the control software is easy to install, and it is easy to use.
All you need is the [Ansible software][21], a list of devices you want to manage saved in an [inventory file][22], and a [playbook][23], which is a set of instructions that you want carried out. For example, you can [update][9] the base Raspberry Pi OS image using the `apt update && apt full-upgrade` equivalent [apt module][24]. The playbook would be:
```
- name: Run the equivalent of "apt-get update" as a separate step
  apt:
    update_cache: true
    cache_valid_time: 3600

- name: Update all packages to the latest version
  apt:
    upgrade: dist
```
You may think installing Ansible for Raspberry Pi is overkill, but I find it is worthwhile if you need to manage more than two or three computers. Using Ansible also gives you a more hygienic network: your inventory is audited and listed in its hosts file, software installations are documented in its playbooks, and data and configurations are kept off the devices themselves, so they are easier to back up regularly.
According to [Wikipedia][25], Ansible's design goals include:
> * **Minimal in nature**. Management systems should not impose additional dependencies on the environment.
> * **Consistent**. With Ansible, one should be able to create consistent environments.
> * **Secure**. Ansible does not deploy agents to nodes. Only OpenSSH and Python are required on the managed nodes.
> * **Highly reliable**. When carefully written, an Ansible playbook can be [idempotent][26] to prevent unexpected side effects on the managed systems. It is entirely possible to have a poorly written playbook that is not idempotent.
> * **Minimal learning required**. Playbooks use an easy and descriptive language based on YAML and Jinja templates.
>
Anyone with the correct authorization can configure a device, but you can limit authorization using standard Unix permissions. You can apply granular access to playbooks so that, for example, test operators can access just the test and diagnostic tools you install.
### How it works
Imagine you have a widget factory that includes a Raspberry Pi in its product. Your facilities team also uses them to monitor the environmental plant and security. Likewise, the engineering team uses the devices on the production lines within the manufacturing monitoring process. And the IT department uses them as disposable dumb terminals to access the head office enterprise resource planning ([ERP][27]) system. In all of these cases, downtime needs to be kept to a minimum.
We aim to deliver the exact same device with the exact same image to each of the teams.
#### Preparing the image
Common to all stages is preparing the image itself. After cloning the [fix-ssh-on-pi.bash script from GitHub][17], a one-time step is needed: rename the files `fix-ssh-on-pi.ini_example` to `fix-ssh-on-pi.ini` and `wpa_supplicant.conf_example` to `wpa_supplicant.conf`, then edit both for your environment.
You only need to rerun the script when [Raspberry Pi OS][8] (Raspbian) is updated or when you change your configuration files. I recommend making this part of your DevOps workflow; if you don't have one in place yet, it can be automated using a simple cron job.
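For example, a cron entry along these lines could rebuild the image weekly; the script path, schedule, and log file here are hypothetical:

```
# Rebuild the base image every Monday at 02:00 (illustrative entry)
0 2 * * 1 /usr/local/bin/fix-ssh-on-pi.bash >> /var/log/pi-image-build.log 2>&1
```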
I recommend keeping a dedicated Raspberry Pi station in the store room for burning the latest SD cards. It would automatically burn the latest image from the network whenever a new card is inserted into an [external SD card reader][28]. With some imagination and a 3D printer, you could build a nice enclosure that gives feedback on progress.
When a Raspberry Pi is requisitioned, the storekeepers can then take one of the finished SD cards and include it with the work order.
#### Inventory/Hosts File
In our fictitious example, a device's role is determined by the network it connects to. Therefore, we need to be able to identify Raspberry Pis once they come onto the network. How you approach this will depend entirely on how your network is configured and what tools are available to you. For some great tips on how to do this, I recommend the episode by [operat0r][29] called [hpr3090 :: Locating Computers on an Enterprise Network][30].
Each department would have its own provisioning server running the Ansible software, which could, of course, be another Raspberry Pi. Standard Unix/SSH permissions dictate who has access to what within your organization. In episode [hpr3080 :: Ansible ping][31], I walked through the absolute basics of installing and troubleshooting [Ansible][20]. Since then, [klaatu][32] has added [hpr3162 :: Introduction to Ansible][33], which is a great general introduction to the topic.
How the provisioning server becomes aware of the new devices can be active or passive.
You could have the [firstboot script][16] actively call a URL to register the device. You would need a web application listening that uses the received information to register the new host in the Ansible inventory.
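A minimal sketch of such a self-registration helper might look like this; the endpoint URL is a hypothetical placeholder for your own web application:

```shell
#!/bin/sh
# Hedged sketch of a firstboot self-registration helper. The endpoint
# URL below is a hypothetical placeholder for your own service.
build_register_url() {
    # $1 = hostname, $2 = Ethernet MAC address
    echo "https://provision.example.com/register?host=$1&mac=$2"
}

url="$(build_register_url dca632012345 dc:a6:32:01:23:45)"
echo "$url"
# At first boot you would then call something like: curl -fsS "$url"
```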
This might be a good approach for departments where devices are replaced infrequently and you want them provisioned as soon as possible. For example, when a water-quality monitoring station is replaced, it makes sense to have it register itself. The electrician could then select the exact playbook to deploy to the device via a smartphone app.
On the other hand, a passive approach may be better if you will be installing devices constantly, such as on a production line. In that case, you can assume that any new device found on the production-line network should have your test and diagnostic software installed at the beginning of the line. This software can also be removed automatically prior to shipping.
One of the changes that `fix-ssh-on-pi.bash` makes is renaming each Raspberry Pi's hostname based on its [Ethernet MAC address][34]. For example, an [Ethernet MAC address][34] of `dc:a6:32:01:23:45` results in a [hostname][35] of `dca632012345`.
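That derivation can be sketched in one line of shell: simply strip the colons from the MAC address.

```shell
# A minimal sketch of the hostname derivation described above:
# delete the colons from the Ethernet MAC address.
mac="dc:a6:32:01:23:45"
pi_hostname="$(printf '%s' "$mac" | tr -d ':')"
echo "$pi_hostname"
```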
When the Raspberry Pi finishes its first-boot sequence, the third automatic reboot requests an IP address from your [DHCP][36] server, and that hostname will (probably) become available on the office [DNS][37] network.
At this point your Raspberry Pi is accessible using something like `ssh dca632012345`, `ssh dca632012345.local`, `ssh dca632012345.lan`, or in our example `ssh dca632012345.production.example.com`.
I included a small script on [GitHub][17] to locate Raspberry Pis based on [Ethernet MAC address][34]. I discussed this recently on my Hacker Public Radio episode _[Locating computers on a network][38]_:
```
# ./put-pi-in-ansible-host.bash | tee all_pies.ini
[all_pies]
b827eb012345 ansible_host=192.168.1.123
dca632012345 ansible_host=192.168.1.127
b827eb897654 ansible_host=192.168.1.143
dca632897654 ansible_host=192.168.1.223
```
In my _[Ansible ping][31]_ episode on Hacker Public Radio, I used a YAML inventory file instead of the INI version above.
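For reference, a YAML equivalent of the INI inventory above (a sketch, using the same hypothetical hosts) would look like this:

```
all_pies:
  hosts:
    b827eb012345:
      ansible_host: 192.168.1.123
    dca632012345:
      ansible_host: 192.168.1.127
    b827eb897654:
      ansible_host: 192.168.1.143
    dca632897654:
      ansible_host: 192.168.1.223
```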
#### Execute a playbook
Regardless of how the provisioning server becomes aware of the devices, you now know they exist. In this example, you would deploy different playbooks based on the subnet the device is in.
Perhaps the simplest playbook you can try is this one (from _Ansible ping_ and available on [GitHub][17]):
```
- name: Test Ping
  hosts: all
  tasks:
  - action: ping
```
You should now have everything you need to communicate with the new devices:
```
ansible-playbook --inventory-file all_pies.ini ping-example-playbook.yaml
```
By modifying the playbook, you can update and configure your devices any way you like. I use this to create users, update the system to the latest version, add and remove software, and do other configurations. There are several good examples available about updating your systems, such as the [Ansible apt update all packages on Ubuntu / Debian Linux][39] tutorial.
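As a hedged sketch of such a playbook (the account name and package list below are hypothetical examples, not from the article):

```
- name: Baseline configuration for new Pis
  hosts: all_pies
  become: true
  tasks:
    - name: Create a service account
      user:
        name: monitor        # hypothetical account name
        state: present

    - name: Update all packages to the latest version
      apt:
        update_cache: true
        upgrade: dist

    - name: Install diagnostic tools
      apt:
        name:
          - htop
          - i2c-tools
        state: present
```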
At this point, the devices cease to be generic. You will know the exact role each Raspberry Pi should have, and you can provision it as such. How custom it is will depend on the playbook, but I advise having a specific [Ansible role][40] for each task you need a Pi to do. For example, even if your widget factory has only one water-quality monitoring station, you should still define a role for it. Not only will this allow you to quickly deploy an identical replacement if necessary, but you are also documenting the process, which may be required for certifications such as [ISO 9000][41].
You now have the means to audit that updates to your network are in place and being done regularly. Hopefully, this will keep your devices secure for many years of service. This method also applies to products you ship, as they can be updated via hotspots operated by field service engineers. During regular system maintenance, the Raspberry Pi is automatically updated using credentials supplied in the `wpa_supplicant.conf` file.
### Make management easier
I hope this has opened your mind about how to tackle managing many devices more easily. All you need to get started is your PC or laptop and a Raspberry Pi. The principles of burning a generic image, creating a device inventory, and deploying a playbook are the same whether you're working on a small scale or scaling up to hundreds of devices.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/raspberry-pi-ansible
作者:[Ken Fallon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kenfallon
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay)
[2]: https://www.raspberrypi.org/education/
[3]: https://www.raspberrypi.org/products/raspberry-pi-1-model-b-plus/
[4]: https://www.raspberrypi.org/products/raspberry-pi-2-model-b/
[5]: https://www.raspberrypi.org/products/raspberry-pi-3-model-a-plus/
[6]: https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
[7]: https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/
[8]: https://www.raspbian.org/
[9]: https://www.raspberrypi.org/documentation/raspbian/updating.md
[10]: https://www.raspberrypi.org/documentation/configuration/raspi-config.md
[11]: https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net_tutorial.md
[12]: https://www.raspberrypi.org/blog/raspberry-pi-4-on-sale-now-from-35/#comment-1510410
[13]: https://wiki.debian.org/DebianReleases
[14]: http://hackerpublicradio.org/eps.php?id=2356
[15]: http://man7.org/linux/man-pages/man8/losetup.8.html
[16]: https://github.com/nmcclain/raspberian-firstboot
[17]: https://github.com/kenfallon/fix-ssh-on-pi
[18]: https://en.wikipedia.org/wiki/DevOps
[19]: https://en.wikipedia.org/wiki/Comparison_of_open-source_configuration_management_software
[20]: https://www.ansible.com/
[21]: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html
[22]: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
[23]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
[24]: https://docs.ansible.com/ansible/latest/modules/apt_module.html
[25]: https://en.wikipedia.org/wiki/Ansible_%28software%29%23Design_goals
[26]: https://en.wikipedia.org/wiki/Idempotent
[27]: https://en.wikipedia.org/wiki/Enterprise_resource_planning
[28]: https://www.amazon.com/StarTech-com-4-Slot-USB-C-Card-Reader/dp/B07HVPNQRQ/
[29]: http://hackerpublicradio.org/correspondents.php?hostid=36
[30]: http://hackerpublicradio.org/eps.php?id=3090
[31]: http://hackerpublicradio.org/eps.php?id=3080
[32]: http://hackerpublicradio.org/correspondents.php?hostid=78
[33]: http://hackerpublicradio.org/eps.php?id=3162
[34]: https://en.wikipedia.org/wiki/MAC_address
[35]: https://en.wikipedia.org/wiki/Hostname
[36]: https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
[37]: https://en.wikipedia.org/wiki/Domain_Name_System
[38]: http://hackerpublicradio.org/eps.php?id=3052
[39]: https://www.cyberciti.biz/faq/ansible-apt-update-all-packages-on-ubuntu-debian-linux/
[40]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
[41]: https://en.wikipedia.org/wiki/ISO_9000


@@ -0,0 +1,330 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Teach Python with Jupyter Notebooks)
[#]: via: (https://opensource.com/article/20/9/teach-python-jupyter)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Teach Python with Jupyter Notebooks
======
With Jupyter, PyHamcrest, and a little duct tape of a testing harness,
you can teach any Python topic that is amenable to unit testing.
![Person reading a book and digital copy][1]
Some things about the Ruby community have always impressed me. Two examples are the commitment to testing and the emphasis on making it easy to get started. The best example of both is [Ruby Koans][2], where you learn Ruby by fixing tests.
With the amazing tools we have for Python, we should be able to do something even better. We can. Using [Jupyter Notebook][3], [PyHamcrest][4], and just a little bit of duct tape-like code, we can make a tutorial that includes teaching, code that works, and code that needs fixing.
First, some duct tape. Usually, you do your tests using some nice command-line test runner, like [pytest][5] or [virtue][6]. Usually, you do not even run it directly. You use a tool like [tox][7] or [nox][8] to run it. However, for Jupyter, you need to write a little harness that can run the tests directly in the cells.
Luckily, the harness is short, if not simple:
```
import unittest
def run_test(klass):
    suite = unittest.TestLoader().loadTestsFromTestCase(klass)
    unittest.TextTestRunner(verbosity=2).run(suite)
    return klass
```
Now that the harness is done, it's time for the first exercise.
In teaching, it is always a good idea to start small with an easy exercise to build confidence.
So why not fix a really simple test?
```
@run_test
class TestNumbers(unittest.TestCase):
   
    def test_equality(self):
        expected_value = 3 # Only change this line
        self.assertEqual(1+1, expected_value)
```

```
    test_equality (__main__.TestNumbers) ... FAIL
   
    ======================================================================
    FAIL: test_equality (__main__.TestNumbers)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "<ipython-input-7-5ebe25bc00f3>", line 6, in test_equality
        self.assertEqual(1+1, expected_value)
    AssertionError: 2 != 3
   
    ----------------------------------------------------------------------
    Ran 1 test in 0.002s
   
    FAILED (failures=1)
```
`Only change this line` is a useful marker for students. It shows exactly what needs to be changed. Otherwise, students could fix the test by changing the first line to `return`.
In this case, the fix is easy:
```
@run_test
class TestNumbers(unittest.TestCase):
   
    def test_equality(self):
        expected_value = 2 # Fixed this line
        self.assertEqual(1+1, expected_value)
```

```
    test_equality (__main__.TestNumbers) ... ok
   
    ----------------------------------------------------------------------
    Ran 1 test in 0.002s
   
    OK
```
Quickly, however, the `unittest` library's native assertions will prove lacking. In `pytest`, this is fixed by rewriting the bytecode so that `assert` has magical properties and all kinds of heuristics. That would not work easily in a Jupyter notebook. Time to dig out a good assertion library: PyHamcrest:
```
from hamcrest import *
```

```
@run_test
class TestList(unittest.TestCase):
   
    def test_equality(self):
        things = [1,
                  5, # Only change this line
                  3]
        assert_that(things, has_items(1, 2, 3))
```

```
    test_equality (__main__.TestList) ... FAIL
   
    ======================================================================
    FAIL: test_equality (__main__.TestList)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "<ipython-input-11-96c91225ee7d>", line 8, in test_equality
        assert_that(things, has_items(1, 2, 3))
    AssertionError:
    Expected: (a sequence containing <1> and a sequence containing <2> and a sequence containing <3>)
         but: a sequence containing <2> was <[1, 5, 3]>
   
   
    ----------------------------------------------------------------------
    Ran 1 test in 0.004s
   
    FAILED (failures=1)
```
PyHamcrest is not just good at flexible assertions; it is also good at clear error messages. Because of that, the problem is plain to see: `[1, 5, 3]` does not contain `2`, and it looks ugly besides:
```
@run_test
class TestList(unittest.TestCase):
   
    def test_equality(self):
        things = [1,
                  2, # Fixed this line
                  3]
        assert_that(things, has_items(1, 2, 3))
```

```
    test_equality (__main__.TestList) ... ok
   
    ----------------------------------------------------------------------
    Ran 1 test in 0.001s
   
    OK
```
With Jupyter, PyHamcrest, and a little duct tape of a testing harness, you can teach any Python topic that is amenable to unit testing.
For example, the following can help show the differences between the different ways Python can strip whitespace from a string:
```
source_string = "  hello world  "
@run_test
class TestList(unittest.TestCase):
   
    # This one is a freebie: it already works!
    def test_complete_strip(self):
        result = source_string.strip()
        assert_that(result,
                   all_of(starts_with("hello"), ends_with("world")))
    def test_start_strip(self):
        result = source_string # Only change this line
        assert_that(result,
                   all_of(starts_with("hello"), ends_with("world  ")))
    def test_end_strip(self):
        result = source_string # Only change this line
        assert_that(result,
                   all_of(starts_with("  hello"), ends_with("world")))
```

```
    test_complete_strip (__main__.TestList) ... ok
    test_end_strip (__main__.TestList) ... FAIL
    test_start_strip (__main__.TestList) ... FAIL
   
    ======================================================================
    FAIL: test_end_strip (__main__.TestList)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "<ipython-input-16-3db7465bd5bf>", line 19, in test_end_strip
        assert_that(result,
    AssertionError:
    Expected: (a string starting with '  hello' and a string ending with 'world')
         but: a string ending with 'world' was '  hello world  '
   
   
    ======================================================================
    FAIL: test_start_strip (__main__.TestList)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "<ipython-input-16-3db7465bd5bf>", line 14, in test_start_strip
        assert_that(result,
    AssertionError:
    Expected: (a string starting with 'hello' and a string ending with 'world  ')
         but: a string starting with 'hello' was '  hello world  '
   
   
    ----------------------------------------------------------------------
    Ran 3 tests in 0.006s
   
    FAILED (failures=2)
```
Ideally, students would realize that the methods `.lstrip()` and `.rstrip()` will do what they need. But if they do not and instead try to use `.strip()` everywhere:
```
source_string = "  hello world  "
@run_test
class TestList(unittest.TestCase):
   
    # This one is a freebie: it already works!
    def test_complete_strip(self):
        result = source_string.strip()
        assert_that(result,
                   all_of(starts_with("hello"), ends_with("world")))
    def test_start_strip(self):
        result = source_string.strip() # Changed this line
        assert_that(result,
                   all_of(starts_with("hello"), ends_with("world  ")))
    def test_end_strip(self):
        result = source_string.strip() # Changed this line
        assert_that(result,
                   all_of(starts_with("  hello"), ends_with("world")))
```

```
    test_complete_strip (__main__.TestList) ... ok
    test_end_strip (__main__.TestList) ... FAIL
    test_start_strip (__main__.TestList) ... FAIL
   
    ======================================================================
    FAIL: test_end_strip (__main__.TestList)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "<ipython-input-17-6f9cfa1a997f>", line 19, in test_end_strip
        assert_that(result,
    AssertionError:
    Expected: (a string starting with '  hello' and a string ending with 'world')
         but: a string starting with '  hello' was 'hello world'
   
   
    ======================================================================
    FAIL: test_start_strip (__main__.TestList)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "<ipython-input-17-6f9cfa1a997f>", line 14, in test_start_strip
        assert_that(result,
    AssertionError:
    Expected: (a string starting with 'hello' and a string ending with 'world  ')
         but: a string ending with 'world  ' was 'hello world'
   
   
    ----------------------------------------------------------------------
    Ran 3 tests in 0.007s
   
    FAILED (failures=2)
```
The error messages above show that too much space has been stripped. Realizing this, they can fix the tests with `.lstrip()` and `.rstrip()`:
```
source_string = "  hello world  "
@run_test
class TestList(unittest.TestCase):
   
    # This one is a freebie: it already works!
    def test_complete_strip(self):
        result = source_string.strip()
        assert_that(result,
                   all_of(starts_with("hello"), ends_with("world")))
    def test_start_strip(self):
        result = source_string.lstrip() # Fixed this line
        assert_that(result,
                   all_of(starts_with("hello"), ends_with("world  ")))
    def test_end_strip(self):
        result = source_string.rstrip() # Fixed this line
        assert_that(result,
                   all_of(starts_with("  hello"), ends_with("world")))
```

```
    test_complete_strip (__main__.TestList) ... ok
    test_end_strip (__main__.TestList) ... ok
    test_start_strip (__main__.TestList) ... ok
   
    ----------------------------------------------------------------------
    Ran 3 tests in 0.005s
   
    OK
```
In a more realistic tutorial, there would be more examples and more explanations. This technique using a notebook with some examples that work and some that need fixing can work for real-time teaching, a video-based class, or even, with a lot more prose, a tutorial the student can complete on their own.
Now go out there and share your knowledge!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/teach-python-jupyter
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (Person reading a book and digital copy)
[2]: https://github.com/edgecase/ruby_koans
[3]: https://jupyter.org/
[4]: https://github.com/hamcrest/PyHamcrest
[5]: https://docs.pytest.org/en/stable/
[6]: https://github.com/Julian/Virtue
[7]: https://tox.readthedocs.io/en/latest/
[8]: https://nox.thea.codes/en/stable/


@@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GNOME 3.38 is Here With Customizable App Grid, Performance Improvements and Tons of Other Changes)
[#]: via: (https://itsfoss.com/gnome-3-38-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
GNOME 3.38 is Here With Customizable App Grid, Performance Improvements and Tons of Other Changes
======
[GNOME 3.36][1] brought some much-needed improvements along with a major performance boost. Now, after six months, we're finally here with GNOME 3.38 with a big set of changes.
### GNOME 3.38 Key Features
Here are the main highlights of GNOME 3.38, codenamed "Orbis":
[Subscribe to our YouTube channel for more Linux videos][2]
#### Customizable App Menu
The app grid or the app menu will now be customizable as part of a big change in GNOME 3.38.
Now, you can create folders by dragging application icons over each other and move them to/from folders and set it right back in the app grid. You can also just reposition the icons as you want in the app grid.
![][3]
Also, these changes are some basic building blocks for the design changes planned for future updates, so it'll be exciting to see what comes next.
#### Calendar Menu Updates
![][4]
The notification area is a lot cleaner with the recent GNOME updates but now with GNOME 3.38, you can finally access calendar events right below the calendar area to make things convenient and easy to access.
It's not a major visual overhaul, but there are a few improvements to it.
#### Parental Controls Improvement
GNOME 3.38 includes a parental control service. It integrates with various components of the desktop (the shell, the settings, and others) to help you limit what a user can access.
#### The Restart Button
Some subtle improvements lead to massive changes, and this is exactly one of those. It's always annoying to click the "Power Off" / "Shut Down" button first and then hit the "Restart" button to reboot the system.
So, with GNOME 3.38, you will finally notice a "Restart" entry as a separate button, which will save you a click and give you peace of mind.
#### Screen Recording Improvements
[GNOME Shell's built-in screen recorder][5] is now a separate system service, which should make recording the screen a smoother experience.
Also, window screencasting has received several improvements along with some bug fixes.
#### GNOME apps Updates
The GNOME Calculator has received a lot of bug fixes. In addition, you will also find some major changes to the [Epiphany GNOME browser][6].
GNOME Boxes now lets you pick the OS from a list of operating systems, and GNOME Maps has been updated with some UI changes as well.
Not just limited to these, you will also find subtle updates and fixes to GNOME control center, Contacts, Photos, Nautilus, and some other packages.
#### Performance & multi-monitor support improvements
There's a bunch of under-the-hood improvements across the board in GNOME 3.38. For instance, there were some serious fixes to [Mutter][7], which now lets two monitors run at different refresh rates.
![][8]
Previously, if you had one monitor with a 60 Hz refresh rate and another with 144 Hz, the one with the slower rate would limit the other. But, with the improvements in GNOME 3.38, multiple monitors are handled without limiting any of them.
Also, some changes reported by [Phoronix][9] pointed to around 10% lower render time in some cases. So, that's definitely a great performance optimization.
#### Miscellaneous other changes
* Battery percentage indicator
* Restart option in the power menu
* New welcome tour
* Fingerprint login
* QR code scanning for sharing Wi-Fi hotspot
* Privacy and other improvements to GNOME Browser
* GNOME Maps is now responsive and changes its size based on the screen
* Revised icons
You can find a detailed list of changes in the official [changelog][10].
### Wrapping Up
GNOME 3.38 is indeed an impressive update that improves the GNOME experience. Even though performance was greatly improved with GNOME 3.36, further optimization is a very good thing in GNOME 3.38.
GNOME 3.38 will be available in Ubuntu 20.10 and [Fedora 33][11]. Arch and Manjaro users should be getting it soon.
I think there are plenty of changes in the right direction. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/gnome-3-38-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-3-36-release/
[2]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-app-arranger.jpg?resize=799%2C450&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-3-38-calendar-menu.png?resize=800%2C721&ssl=1
[5]: https://itsfoss.com/gnome-screen-recorder/
[6]: https://en.wikipedia.org/wiki/GNOME_Web
[7]: https://en.wikipedia.org/wiki/Mutter_(software)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/gnome-multi-monitor-refresh-rate.jpg?resize=800%2C369&ssl=1
[9]: https://www.phoronix.com/scan.php?page=news_item&px=GNOME-3.38-Last-Min-Mutter
[10]: https://help.gnome.org/misc/release-notes/3.38
[11]: https://itsfoss.com/fedora-33/


@@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Usage Commons: Googles Initiative to Manage Trademark for Open Source Projects Runs into Controversy)
[#]: via: (https://itsfoss.com/open-usage-commons-controversy/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Open Usage Commons: Google's Initiative to Manage Trademark for Open Source Projects Runs into Controversy
======
Back in July, Google [announced][1] a new organization named Open Usage Commons. The aim of the organization is to help “projects protect their project identity through programs such as trademark management and usage guidelines”.
Google believes that “creating a neutral, independent ownership for these trademarks gives contributors and consumers peace of mind regarding their use of project names in a fair and transparent way”.
### Open Usage Commons and the controversy with IBM
![][2]
Everything seems good in theory, right? But soon after Google's announcement of the Open Usage Commons, [IBM made an objection][3].
The problem is that Google included [Istio][4] project under [Open Usage Commons][5]. IBM is one of the founding members of the Istio project and it wants the project to be under open governance with [CNCF][6].
On behalf of It's FOSS, I had a quick interaction with [Heikki Nousiainen][7], CTO at [Aiven][8], to clear some air on the entire Open Usage Commons episode.
#### What is the Open Usage Commons trying to do?
**Heikki Nousiainen**: The stated purpose of Google's Open Usage Commons (OUC) is to provide a neutral and independent organization for open source projects to host and manage their trademarks. By applying open source software principles to trademarks, this will provide transparency and consistency. The idea is that this will lead to a more vibrant ecosystem for end users because vendors and developers can confidently build something that relies on projects' brands.
Although other foundations, such as the Cloud Native Computing Foundation (CNCF) and [Apache Foundation][9], provide some direction on trademarks, OUC provides more precision and consistency in what constitutes fair use for vendors. This avoids what has generally been left to the individual projects to decide, which has resulted in a confusing patchwork of guidelines.
Additionally, it is likely an attempt by Google to avoid situations similar to what [Amazon Web Services (AWS) has faced with Elasticsearch][10], e.g. where trademarks have appeared to be increasingly used to prevent exactly what Google is attempting to accomplish with this foundation, _**relatively open use of project brand identifiers by competing vendors**_.
#### What are the problems surrounding the Commons?
**Heikki Nousiainen**: The main controversy surrounds the question as to why [Istio][11] was not placed under CNCF governance as IBM was clearly expecting it to be placed under an [Open Governance model][12] once it matured.
However, Open Usage Commons does not touch the governance model at all. Google, of course, has incentive to be able to trust they can utilize the recognized brands and trademarks to help customers recognize the services built on top of these familiar technologies.
#### How will it impact the open source world, both positive and negative impacts?
**Heikki Nousiainen**: It will remain to be seen what the long-term impact will be, given that the only member projects are currently driven by Google. Although controversial, it doesn't seem like the fears that Google would be able to enact effective control over member projects will materialize.
A more telling question is, “Who will be likely to participate?” One thing is for sure, this will spark a long overdue discussion on how Open Source trademarks should be used when moving from software bundles to services offered in the cloud.
#### Does it sound like some big players will have control over the definition of open source trademarks? 
**Heikki Nousiainen**: Despite all the controversy over licensing, big players in this space have been and will remain key in securing the resources and support needed for the open source community to thrive.
Although there is some self-interest here, the creation of vehicles such as this do not necessarily constitute an attempt at imposing unjustified control over projects. As a community-driven software, all must work alongside one another to achieve success.
* * *
Personally, I think Google's long-term game plan is to protect its Google Cloud Platform from possible lawsuits over the use of popular open source projects' trademarks and branding.
What do you think of the entire Open Usage Commons episode?
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-usage-commons-controversy/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://opensource.googleblog.com/2020/07/announcing-new-kind-of-open-source.html
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/google-open-usage-commons.png?resize=800%2C450&ssl=1
[3]: https://developer.ibm.com/components/istio/blogs/istio-google-open-usage-commons/
[4]: https://istio.io
[5]: https://openusage.org
[6]: https://www.cncf.io/
[7]: https://www.linkedin.com/in/heikki-nousiainen/
[8]: https://aiven.io
[9]: http://apache.org
[10]: https://news.bloomberglaw.com/ip-law/amazon-sued-for-allegedly-infringing-elasticsearch-trademarks
[11]: https://istio.io/
[12]: https://developer.ibm.com/articles/open-governance-community/


@@ -1,125 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 best practices for teams using Git)
[#]: via: (https://opensource.com/article/20/7/git-best-practices)
[#]: author: (Ravi Chandran https://opensource.com/users/ravichandran)
6 个在团队中使用 Git 的最佳实践
======
> 采用这些 Git 协作策略,让团队工作更高效。
![技术会议上的女性们][1]
Git 非常有助于小团队管理他们的软件开发进度,但有些方法能让你变得更高效。我发现了许多有助于我的团队的最佳实践,尤其是当不同 Git 水平的新人加入时。
### 在你的团队中正式确立 Git 约定
每个人都应当遵循对于分支命名、标记和编码的规范。每个组织都有自己的规范或者最佳实践,并且很多建议都可以从网上免费获取,而重要的是尽早选择合适的规范并在团队中遵循。
同时,不同的团队成员的 Git 水平参差不齐。你需要创建并维护一组符合团队规范的基础指令,用于执行通用的 Git 操作。
### 正确地合并变更
每个团队成员都需要在一个单独的功能分支上开发。但即使是使用了单独的分支,每个人也会修改一些共同的文件。当把更改合并回 `master` 分支时,合并通常无法自动进行。可能需要手动解决不同的人对同一文件不同变更的冲突。这就是你必须学会如何处理 Git 合并的原因。
现代编辑器具有协助解决 [Git 合并冲突][2]的功能。它们对同一文件的每个部分提供了合并的各种选择,例如,是否保留你的更改,或者是保留另一分支的更改,亦或者是全部保留。如果你的编辑器不支持这些功能,那么可能是时候换一个代码编辑器了。
### 经常变基你的功能分支
当你持续地开发你的功能分支时,请经常对它做<ruby>变基<rt>rebase</rt></ruby>:`rebase master`。这意味着要经常执行以下步骤:
```
git checkout master
git pull
git checkout feature-xyz  # 假设的功能分支名称
git rebase master  # 可能需要解决 feature-xyz 上的合并冲突
```
这些步骤会在你的功能分支上[重写历史][3](这并不是件坏事)。首先,它会使你的功能分支获得 `master` 分支上当前的所有更新。其次,你在功能分支上的所有提交都会在该分支历史的顶部重写,因此它们会顺序地出现在日志中。你可能需要一路解决遇到的合并冲突,这也许是个挑战。但是,这是解决冲突最好的时机,因为它只影响你的功能分支。
在解决完所有冲突并进行回归测试后,如果你准备好将功能分支合并回 `master`,那么就可以在再次执行上述的变基步骤后进行合并:
```
git checkout master
git pull
git merge feature-xyz
```
在此期间,如果其他人也将和你有冲突的更改推送到 `master`,那么 Git 合并将再次发生冲突。你需要解决它们并重新进行回归测试。
还有一些其他的合并哲学(例如,只使用合并而不使用变基以防止重写历史),其中一些甚至可能更简单易用。但是,我发现上述方法是一个干净可靠的策略。提交历史日志将以有意义的功能序列进行排列。
如果使用“纯合并”策略(上面所说的,不进行定期的变基),那么 `master` 分支的历史将穿插着所有同时开发的功能的提交。这样混乱的历史很难回顾。确切的提交时间通常并不是那么重要。最好是有一个易于查看的历史日志。
### 在合并前压扁提交
当你在功能分支上开发时,即使再小的修改也可以作为一个提交。但是,如果每个功能分支都要产生五十个提交,那么随着不断地增添新功能,`master` 分支的提交数终将无谓地膨胀。通常来说,每个功能分支只应该往 `master` 中增加一个或几个提交。为此,你需要将多个提交<ruby>压扁<rt>Squash</rt></ruby>成一个或者几个带有更详细信息的提交。通常使用以下命令来完成:
```
git rebase -i HEAD~20  # 查看最近的二十个提交以进行压扁
```
当这条命令执行后,将弹出一个提交列表的编辑器,你可以通过包括 _pick_ 和 _squash_ 在内的数种方式编辑它。_pick_ 一个提交即保留这个提交;_squash_ 一个提交则是将它合并到前一个提交中。使用这些方法,你就可以将多个提交合并成一个,并对其进行编辑和清理。这也是一个清理那些不重要的提交信息的机会(例如,带错字的提交)。
总之,保留所有与提交相关的操作,但在合并到 `master` 分支前,压扁并编辑相关信息以明确意图。注意,不要在变基的过程中不小心删掉提交。
在执行完诸如变基之类的操作后,我会再次查看 `git log` 并做最终的修改:
```
git commit --amend
```
最后,由于重写了分支的 Git 提交历史,必需强制更新远程分支:
```
git push -f
```
### 使用标签
当你完成测试并准备从 `master` 分支部署软件到线上时,又或者当你出于某种原因想要保留当前状态作为一个里程碑时,那么可以创建一个 Git 的标签。对于一个积累了一些变更和相应提交的分支而言,标签就是该分支在那一时刻的快照。一个标签可以看作是没有历史记录的分支,也可以看作是直接指向标签创建前那个提交的命名指针。
配置控制是关于保留各个里程碑上代码的状态。大多数项目都有一个需求允许重现任一里程碑上的软件源码以便在需要时重新构建。Git 标签为每个代码的里程碑提供了一个唯一标识。打标签非常简单:
```
git tag milestone-id -m "short message saying what this milestone is about"
git push --tags   # 不要忘记将标签显式推送到远程
```
考虑一种情况Git 标签对应的软件版本已经分发给客户,而且客户提了个问题。尽管代码库中的代码可能已经继续在开发,但通常情况下为了准确地重现客户问题以便做出修复,必须回退到 Git 标签对应的代码状态。偶尔,新代码已经修复了那个问题,但并非一直如此。通常你需要切换到特定的标签并从那个标签创建一个分支:
```
git checkout milestone-id        # 切换到分发给客户的标签
git checkout -b new-branch-name  # 创建新的分支用于重现 bug
```
此外,如果<ruby>附注标签<rt>annotated tag</rt></ruby>和<ruby>签名标签<rt>signed tag</rt></ruby>有助于你的项目,可以考虑使用它们。
### 让软件运行时打印标签
在大多数嵌入式项目中,从某个代码版本构建出的二进制文件有固定的名称,无法从它的名称推断出对应的 Git 标签。在构建时“嵌入标签”有助于将未来的问题精准地关联到特定的构建。在构建过程中可以自动地嵌入标签。通常,`git describe` 生成的标签代码会在代码编译前插入到代码中,以便生成的可执行文件能够在启动时输出标签代码。当客户报告问题时,可以指导他们给你发送启动时输出的数据。
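作为一个假设性的示意(并非原文的具体做法,函数名为虚构),下面用 Python 在构建脚本中调用 `git describe` 获取标签代码,并在无法获取时回退为 `unknown`

```python
import subprocess

def build_version():
    # 假设性示例:在构建时获取 `git describe` 生成的标签代码;
    # 若不在 Git 仓库中或没有安装 git则回退为 "unknown"。
    try:
        return subprocess.check_output(
            ["git", "describe", "--tags", "--always", "--dirty"],
            stderr=subprocess.DEVNULL, text=True).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"

print(build_version())
```

构建系统可以把这个字符串写入头文件或资源文件,供程序在启动时打印。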
### 总结
Git 是一个需要花时间去掌握的复杂工具。无论团队成员的 Git 水平如何,使用这些实践都可以帮助团队成功地进行 Git 协作。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/git-best-practices
作者:[Ravi Chandran][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ravichandran
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/christina-wocintechchat-com-rg1y72ekw6o-unsplash_1.jpg?itok=MoIv8HlK (Women in tech boardroom)
[2]: https://opensource.com/article/20/4/git-merge-conflict
[3]: https://opensource.com/article/20/4/git-rebase-i


@@ -1,126 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Design a book cover with an open source alternative to InDesign)
[#]: via: (https://opensource.com/article/20/9/open-source-publishing-scribus)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
用 InDesign 的开源替代方案设计书籍封面
======
使用开源的出版软件 Scribus 来制作你下一本自出版书籍的封面。
![Stack of books for reading][1]
我最近写完了一本关于 [C 语言编程][2]的书,我通过 [Lulu.com][3] 自行出版。我已经用 Lulu 做了好几个图书项目,它是一个很棒的平台。今年早些时候Lulu 做了一些改变,让作者在创作图书封面时有了更大的控制权。以前,你只需上传一对大尺寸图片作为书的封面和封底;现在Lulu 允许作者上传完全按照你的书的尺寸定制的 PDF。
你可以使用 [Scribus][4] 这个开源页面布局程序来创建封面。下面是我的做法。
### 下载一个模板
当你在 Lulu 上输入图书的信息时,最终会进入**设计**栏。在该页面的**设计封面**部分,你会发现一个方便的**下载模板**按钮,它为你的图书封面提供了一个 PDF 模板。
![Lulu Design your Cover page][5]
(Jim Hall, [CC BY-SA 4.0][6])
下载此模板,它为你提供了在 Scribus 中创建自己的书籍封面所需的信息。
![Lulu's cover template][7]
(Jim Hall, [CC BY-SA 4.0][6])
最重要的细节是:
* 文件总大小含出血bleed
* 出血区(从裁切边缘)
* 书脊区
**出血**bleed是一个印刷术语在准备**打印就绪**文件时,这个术语很重要。它与普通文件中的页边距不同。打印文件时,你会为顶部、底部和侧面设置一个页边距。在大多数文档中,页边距通常为一英寸左右。
但在打印就绪的文件中,文档的尺寸需要比成品书大一些,因为书籍的封面通常包括一直延伸到封面边缘的颜色或图片。为了实现这种设计,你要使颜色或图片超出你的边距,印刷厂再把多余的部分修剪掉,使封面缩小到准确的尺寸。因此,**修剪**trim就是印刷厂将封面精确地裁剪成相应尺寸而**出血区**bleed area就是印刷厂裁掉的多余部分。
如果没有出血,印刷厂就很难完全按照尺寸印刷封面。如果印刷时只偏离了一点点,你的封面最终会留下一条微小的、白色的、未经印刷的边。使用出血和修剪意味着你的封面每次都能看起来正确。
### 在 Scribus 中设置书籍的封面文档
要在 Scribus 中创建新文档,请从定义文档尺寸的 **New Document** 对话框开始。单击 “**Bleeds**” 选项卡,并输入 PDF 模板所说的出血尺寸。Lulu 图书通常在所有边缘使用 0.125 英寸的出血量。
对于 Scribus 中的文档总尺寸,你不能只使用 PDF 模板上的文档总尺寸。如果这样做,你的 Scribus 文档将有错误的尺寸。相反,你需要做一些数学计算来获取正确的尺寸。
看下 PDF 模板中的 **Total Document Size (with bleed)**。这是将要发送给打印机的 PDF 的总尺寸,它包括封底、书脊和封面(包含出血量)。要在 Scribus 中输入正确的尺寸,你必须从所有边缘中减去出血量。例如,我最新的书采用 Crown Quarto 尺寸,修剪后为 7.44" x 9.68",装订后的书脊宽度为 0.411"。加上 0.125" 的出血量,**Total Document Size (with bleed)** 是 15.541"×9.93"。因此,我在 Scribus 中的文档尺寸是:
* 宽15.541-(2 x 0.125)=15.291"
* 高9.93-(2 x 0.125)=9.68"
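上面的算术可以用几行 Python 来验证(数值取自文中的 Crown Quarto 示例):

```python
# 根据文中给出的示例数值计算 Scribus 文档尺寸
bleed = 0.125                  # 出血量(每边)
trim_w, trim_h = 7.44, 9.68    # 修剪后的封面宽、高
spine = 0.411                  # 书脊宽度

total_w = 2 * trim_w + spine + 2 * bleed   # 含出血的总宽度
total_h = trim_h + 2 * bleed               # 含出血的总高度
doc_w = total_w - 2 * bleed                # Scribus 文档宽度
doc_h = total_h - 2 * bleed                # Scribus 文档高度
print(round(total_w, 3), round(total_h, 3))  # 15.541 9.93
print(round(doc_w, 3), round(doc_h, 3))      # 15.291 9.68
```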
![Scribus document setup][8]
(Jim Hall, [CC BY-SA 4.0][6])
这将设置一个新的适合我的书的封面尺寸的 Scribus 文档。新的 Scribus 文档尺寸应与 PDF 模板上列出的 **Total Document Size (with bleed)** 完全匹配。
### 从书脊开始
在 Scribus 中创建新的书籍封面时,我喜欢从书脊区域开始。这可以帮助我验证我是否在 Scribus 中正确定义了文档。
使用**矩形**工具在文档上绘制一个彩色方框,书脊需要在那里。你不必完全按照正确的尺寸和位置来绘制,只要接近并使用**属性**来设置正确的值即可。在形状的**属性**中,选择左上角基点,然后输入书脊需要去的 xy 位置和尺寸。同样,你需要做一些数学计算,并使用 PDF 模板上的尺寸作为参考。
![Empty Scribus document][9]
(Jim Hall, [CC BY-SA 4.0][6])
例如,我的书的修边尺寸是 7.44"×9.68",这是印刷厂修边后的封面和封底的尺寸。我的书的书脊大小是 0.411",出血量是 0.125"。也就是说,书脊的左上角 X,Y 的正确位置是:
* X-Pos (出血量+裁剪宽度)0.411+7.44=7.8510"
* Y-Pos减去出血量-0.125"
矩形的尺寸是我的书封面的全高(包括出血)和 PDF 模板中标明的书脊宽度。
* 宽度: 0.411"
* 高度9.93"
将矩形的 **Fill** 设置为你喜欢的颜色,将 **Stroke** 设置为 **None** 以隐藏边界。如果你正确地定义了 Scribus 文档,你应该最终得到一个矩形,它可以延伸到位于文档中心的图书封面的顶部和底部边缘。
![Book spine in Scribus][10]
(Jim Hall, [CC BY-SA 4.0][6])
如果矩形与文档不完全匹配,可能是你在创建 Scribus 文档时设置了错误的尺寸。由于你还没有在书的封面上花太多精力,所以可能最容易的做法是重新开始,而不是尝试修复你的错误。
### 剩下的就看你自己了
接下来,你可以创建你的书的封面的其余部分。始终使用 PDF 模板作为指导。封底在左边,封面在右边。
我能做出一个简单的书籍封面,但我缺乏艺术能力,无法创造出真正醒目的设计。在自己设计了几个书籍封面后,我对那些能设计出好封面的人产生了敬意。但如果你只是需要制作一个简单的封面,你完全可以用开源软件自己动手。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/open-source-publishing-scribus
Author: [Jim Hall][a]

Topic selection: [lujun9972][b]

Translator: [geekpi](https://github.com/geekpi)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_read_list_stack_study.png?itok=GZxb9OAv (Stack of books for reading)
[2]: https://opensource.com/article/20/8/c-programming-cheat-sheet
[3]: https://www.lulu.com/
[4]: https://www.scribus.net/
[5]: https://opensource.com/sites/default/files/uploads/lulu-download-template.jpg (Lulu Design your Cover page)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/lulu-pdf-template.jpg (Lulu's cover template)
[8]: https://opensource.com/sites/default/files/uploads/scribus-new-document.jpg (Scribus document setup)
[9]: https://opensource.com/sites/default/files/uploads/scribus-empty-document.jpg (Empty Scribus document)
[10]: https://opensource.com/sites/default/files/uploads/scribus-spine-rectangle.jpg (Book spine in Scribus)


@@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The New YubiKey 5C NFC Security Key Lets You Use NFC to Easily Authenticate Your Secure Devices)
[#]: via: (https://itsfoss.com/yubikey-5c-nfc/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
The new YubiKey 5C NFC security key lets you use NFC to easily authenticate your secure devices
======
If you are extra careful about securing your online accounts with the best possible authentication method, you probably know about [Yubico][1]. They make hardware authentication security keys that can replace [two-factor authentication][2] and do away with password-based authentication for your online accounts.

Basically, you simply plug the security key into your computer, or use NFC on your smartphone, to unlock access to an account. This way, your authentication method stays completely offline.
![][3]
Of course, you can always use one of the [good password managers available for Linux][4]. But if you run or work for a business, or you're just extra careful about your privacy and security and want an added layer of protection, these hardware security keys may be worth a try. These devices have gained some popularity recently.

Yubico's latest product, the "[YubiKey 5C NFC][5]", may be an impressive option, because it works both as a USB Type-C key and over NFC (just tap the key on your device).

Here, let's take a look at an overview of this security key.

_Note: It's FOSS is an affiliate partner of Yubico. Please read our [affiliate policy][6]._

### Yubico 5C NFC: overview
![][7]
The YubiKey 5C NFC is the latest product to use both USB-C and NFC. So, you can easily plug it into Windows, macOS, and Linux computers. Beyond computers, you can also use it with your Android or iOS smartphone or tablet.

Not just limited to USB-C and NFC support (which is a good thing), it also happens to be the world's first multi-protocol security key with smart card support.

Hardware security keys aren't that common among ordinary consumers because of their cost. But amid the pandemic, with the rise of remote work, a more secure authentication system will definitely come in handy.

Here's what Yubico mentioned in its press release:

> "The way people work and go online is vastly different today than it was a few years ago, and especially within the last several months," said Guido Appenzeller, Chief Product Officer at Yubico. "Users are no longer tied to just one device or service, nor do they want to be. That's why the YubiKey 5C NFC is one of our most sought-after security keys. It's compatible with most modern computers and mobile phones and works well across a range of legacy and modern applications. At the end of the day, our customers crave security that 'just works' no matter what."

The protocols supported by the YubiKey 5C NFC are FIDO2, WebAuthn, FIDO U2F, PIV (smart card), OATH-HOTP and OATH-TOTP (hash-based and time-based one-time passwords), [OpenPGP][8], YubiOTP, and challenge-response authentication.

With all these protocols, you can easily secure any online account that supports hardware authentication, and you can also use it to access identity and access management (IAM) solutions. So, it's a great option for individual users and enterprises alike.

### Pricing and availability

The YubiKey 5C NFC costs $55. You can order it directly from their [online store][5] or buy it from any authorized reseller in your country. The cost may also vary depending on shipping charges, but $55 seems like a fair price for users who want the best level of security for their online accounts.

It's also worth noting that you get volume discounts if you order more than two YubiKeys.
[Order YubiKey 5C NFC][5]
### Wrapping up

Whether you want to secure your cloud storage account or any other online account, Yubico's latest offering is worth a look, as long as you don't mind spending some money to secure your data.

Have you ever used a YubiKey or another security key, such as the LibremKey? How was your experience with it? Do you think these devices are worth the money?
--------------------------------------------------------------------------------
via: https://itsfoss.com/yubikey-5c-nfc/
Author: [Ankush Das][a]

Topic selection: [lujun9972][b]

Translator: [geekpi](https://github.com/geekpi)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/recommends/yubikey/
[2]: https://ssd.eff.org/en/glossary/two-factor-authentication
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/yubikey-5c-nfc-desktop.jpg?resize=800%2C671&ssl=1
[4]: https://itsfoss.com/password-managers-linux/
[5]: https://itsfoss.com/recommends/yubico-5c-nfc/
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/yubico-5c-nfc.jpg?resize=800%2C671&ssl=1
[8]: https://www.openpgp.org/