Merge pull request #2 from LCTT/master

update
This commit is contained in:
amwps290 2018-06-22 08:42:34 +08:00 committed by GitHub
commit c69db9e737
32 changed files with 2113 additions and 894 deletions


@ -1,12 +1,13 @@
# 将 DEB 软件包转换成 Arch Linux 软件包
将 DEB 软件包转换成 Arch Linux 软件包
============
![](https://www.ostechnix.com/wp-content/uploads/2017/10/Debtap-720x340.png)
我们已经学会了如何[**为多个平台构建包**][1],以及如何从[**源代码构建包**][2]。 今天,我们将学习如何将 DEB 包转换为 Arch Linux 包。 您可能会问AUR 是这个星球上的大型软件存储库,几乎所有的软件都可以在其中使用。 为什么我需要将 DEB 软件包转换为 Arch Linux 软件包? 这的确没错! 但是,某些软件包无法编译(封闭源代码软件包),或者由于各种原因(如编译时出错或文件不可用)而无法从 AUR 生成。 或者,开发人员懒得在 AUR 中构建一个包,或者他/她不想创建 AUR 包。 在这种情况下,我们可以使用这种快速但有点粗糙的方法将 DEB 包转换成 Arch Linux 包。
我们已经学会了如何[为多个平台构建包][1],以及如何从[源代码构建包][2]。今天,我们将学习如何将 DEB 包转换为 Arch Linux 包。您可能会问AUR 是这个星球上的大型软件存储库,几乎所有的软件都可以在其中找到,为什么还需要将 DEB 软件包转换为 Arch Linux 软件包呢?这的确没错!但是,某些软件包无法编译(闭源软件包),或者由于各种原因(如编译出错或文件不可用)而无法从 AUR 构建,又或者开发人员懒得在 AUR 中构建一个包,或者他/她不想创建 AUR 包。在这种情况下,我们可以使用这种快速但有点粗糙的方法将 DEB 包转换成 Arch Linux 包。
### Debtap - 将 DEB 包转换成 Arch Linux 包
为此,我们将使用名为 “Debtap” 的实用程序。 它代表了 **DEB** **T** o **A** rch Linux **P** ackage。 Debtap 在 AUR 中可以使用,因此您可以使用 AUR 辅助工具(如 [Pacaur][3][Packer][4] 或 [Yaourt][5] )来安装它。
为此,我们将使用名为 “Debtap” 的实用程序,它代表 **DEB** **T**o **A**rch Linux **P**ackage。Debtap 可以在 AUR 中找到,因此你可以使用 AUR 辅助工具(如 [Pacaur][3]、[Packer][4] 或 [Yaourt][5])来安装它。
使用 pacaur 安装 debtap 运行:
@ -26,7 +27,7 @@ packer -S debtap
yaourt -S debtap
```
同时,你的 Arch 系统也应该已经安装好了 **bash** **binutils** **pkgfile** 和 **fakeroot** 包。
同时,你的 Arch 系统也应该已经安装好了 `bash` `binutils` `pkgfile` 和 `fakeroot` 包。
在安装 Debtap 和所有上述依赖关系之后,运行以下命令来创建/更新 pkgfile 和 debtap 数据库。
@ -73,11 +74,11 @@ sudo debtap -u
==> All steps successfully completed!
```
你至少需要运行上述命令一次
你至少需要运行上述命令一次
现在是时候开始转换包了。
比如说要使用 debtap 转换包 **Quadrapassel**,你可以这样做:
比如说要使用 debtap 转换包 Quadrapassel你可以这样做
```
debtap quadrapassel_3.22.0-1.1_arm64.deb
@ -95,17 +96,17 @@ debtap quadrapassel_3.22.0-1.1_arm64.deb
==> Generating .PKGINFO file...
:: Enter Packager name:
**quadrapassel**
quadrapassel
:: Enter package license (you can enter multiple licenses comma separated):
**GPL**
GPL
*** Creation of .PKGINFO file in progress. It may take a few minutes, please wait...
Warning: These dependencies (depend = fields) could not be translated into Arch Linux packages names:
gsettings-backend
== > Checking and generating .INSTALL file (if necessary)...
==> Checking and generating .INSTALL file (if necessary)...
:: If you want to edit .PKGINFO and .INSTALL files (in this order), press (1) For vi (2) For nano (3) For default editor (4) For a custom editor or any other key to continue:
@ -118,25 +119,25 @@ gsettings-backend
**注**Quadrapassel 在 Arch Linux 官方的软件库中早已可用,我只是用它来说明一下。
如果在包转化的过程中,你不想回答任何问题,使用 **-q** 略过除了编辑元数据的所有问题。
如果在包转化的过程中,你不想回答任何问题,使用 `-q` 略过除了编辑元数据之外的所有问题。
```
debtap -q quadrapassel_3.22.0-1.1_arm64.deb
```
为了略过所有的问题(不推荐),使用 -Q。
为了略过所有的问题(不推荐),使用 `-Q`
```
debtap -Q quadrapassel_3.22.0-1.1_arm64.deb
```
转换完成后,您可以使用 “pacman” 在 Arch 系统中安装新转换的软件包,如下所示。
转换完成后,您可以使用 `pacman` 在 Arch 系统中安装新转换的软件包,如下所示。
```
sudo pacman -U <package-name>
```
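综合起来,一次典型的转换加安装流程大致如下(生成的包文件名随系统架构和 debtap 版本而变,这里最后一行的文件名仅作示意):

```
# 先创建/更新 pkgfile 和 debtap 数据库(至少运行一次)
sudo debtap -u

# 转换,-q 跳过除编辑元数据之外的所有问题
debtap -q quadrapassel_3.22.0-1.1_arm64.deb

# 安装生成的包(文件名仅为示例)
sudo pacman -U quadrapassel-3.22.0-1.1-1-aarch64.pkg.tar.xz
```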
显示帮助文档,使用 -h
显示帮助文档,使用 `-h`
```
$ debtap -h
@ -154,7 +155,7 @@ Options:
-P --P -Pkgbuild --Pkgbuild Generate a PKGBUILD file only
```
这就是现在要讲的。希望这个工具有所帮助。如果你发现我们的指南有用,请花一点时间在你的社交、专业网络分享并在 OSTechNix 支持我们!
这就是现在要讲的。希望这个工具有所帮助。如果你发现我们的指南有用,请花一点时间在你的社交、专业网络分享并支持我们!
更多的好东西来了。请继续关注!
@ -168,7 +169,7 @@ via: https://www.ostechnix.com/convert-deb-packages-arch-linux-packages/
作者:[SK][a]
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,60 +1,59 @@
为什么 Linux 比 Windows 和 macOS 的安全性好
为什么 Linux 比 Windows 和 macOS 更安全?
======
> 多年前做出的操作系统选型终将影响到如今的企业安全。在三大主流操作系统当中,有一个能被称作最安全的。
![](https://images.idgesg.net/images/article/2018/02/linux_security_vs_macos_and_windows_locks_data_thinkstock-100748607-large.jpg)
企业投入了大量时间、精力和金钱来保障系统的安全性。最强的安全意识可能就是有一个安全的网络运行中心,肯定用上了防火墙以及反病毒软件,可能花费大量时间监控他们的网络,寻找可能表明违规的异常信号,就像 IDS、SIEM 和 NGFW 一样,他们部署了一个名副其实的防御阵列。
企业投入了大量时间、精力和金钱来保障系统的安全性。最有安全意识的企业可能拥有一个安全运行中心SOC肯定用上了防火墙以及反病毒软件或许还花费了大量时间去监控他们的网络以寻找可能表明违规的异常信号。他们用 IDS、SIEM 和 NGFW 之类的东西,部署了一个名副其实的防御阵列。
然而又有多少人想过数字化操作的基础之一:部署在员工的个人电脑上的操作系统?当选择桌面操作系统时,安全性是一个考虑的因素吗?
然而又有多少人想过数字化操作的基础之一——部署在员工的个人电脑上的操作系统呢?当选择桌面操作系统时,安全性是一个考虑的因素吗?
这就产生了一个 IT 人士都应该能回答的问题:一般部署哪种操作系统最安全呢?
我们问了一些专家他们对于以下三种选的看法Windows最复杂的平台也是最受欢迎的桌面操作系统macOS X基于 FreeBSD 的 Unix 操作系统,驱动着苹果的 Macintosh 系统运行;还有 Linux这里我们指的是所有的 Linux 发行版以及与基于 Unix 的操作系统相关的系统。
我们问了一些专家他们对于以下三种选择的看法Windows最复杂的平台也是最受欢迎的桌面操作系统macOS X基于 FreeBSD 的 Unix 操作系统,驱动着苹果的 Macintosh 系统运行);还有 Linux这里我们指的是所有的 Linux 发行版,以及与之相关的基于 Unix 的操作系统)。
### 怎么会这样
企业可能没有评估他们部署工作人员的操作系统的安全性的一个原因是,他们多年前就已经做出了选择。退一步讲,所有操作系统都还算安全,因为侵入它们,窃取数据或安装恶意软件的业务还处于起步阶段。而且一旦选择了操作系统,就很难再想改变。很少有 IT 组织希望将全球分散的员工队伍转移到全新的操作系统上。唉,他们已经受够了把用户搬到一个选好的新版本操作系统时的负面反响。
企业可能没有评估他们部署工作人员的操作系统的安全性的一个原因是,他们多年前就已经做出了选择。退一步讲,所有操作系统都还算安全,因为侵入它们、窃取数据或安装恶意软件的牟利方式还处于起步阶段。而且一旦选择了操作系统,就很难再改变。很少有 IT 组织想要面对将全球分散的员工队伍转移到全新的操作系统上的痛苦。唉,他们已经受够了把用户搬到一个现有的操作系统的新版本时的负面反响。
还有,重新考虑是高明的吗?这三款领先的桌面操作系统在安全方面的差异是否足以值得我们去做出改变呢?
还有,重新考虑操作系统是高明的吗?这三款领先的桌面操作系统在安全方面的差异是否足以值得我们去做出改变呢?
当然商业系统面临的威胁近几年已经改变了。攻击变得成熟多了。曾经支配了公众想象力的单枪匹马的青少年黑客已经被组织良好的犯罪分子网络以及具有庞大计算资源的政府资助组织的网络所取代。
像你们许多人一样,我有过很多那时的亲身经历:我曾经在许多 Windows 电脑上被恶意软件和病毒感染,我甚至被 宏病毒感染了 Mac 上的文件。最近,一个广泛传播的自动黑客绕开了网站的保护程序并用恶意软件感染了它。这种恶意软件的影响一开始是隐形的,甚至有些东西你没注意,直到恶意软件最终深深地植入系统以至于它的性能开始变差。一件有关病毒蔓延的震惊之事是不法之徒从来没有特定针对过我;当今世界,用僵尸网络攻击 100,000 台电脑容易得就像一次攻击几台电脑一样。
像你们许多人一样,我有过很多那时的亲身经历:我曾经在许多 Windows 电脑上被恶意软件和病毒感染,我甚至被宏病毒感染了 Mac 上的文件。最近,一个广泛传播的自动黑客攻击绕开了网站的保护程序并用恶意软件感染了它。这种恶意软件的影响一开始是隐形的,甚至有些东西你没注意,直到恶意软件最终深深地植入系统以至于它的性能开始变差。一件有关病毒蔓延的震惊之事是不法之徒从来没有特定针对过我;当今世界,用僵尸网络攻击 100,000 台电脑容易得就像一次攻击几台电脑一样。
### 操作系统真的很重要吗?
给你的用户部署的那个操作系统确实对你的安全态度产生了影响,但那并不是一个可靠的安全措施。首先,现在的攻击很可能会发生,因为攻击者探测了你的用户,而不是你的系统。一项对参与过 DEFCON 会议黑客的[调查][1]表明“84的人使用社交工程作为攻击策略的一部分。”部署安全操作系统只是一个重要的起点,但如果没有用户培训强大的防火墙和持续的警惕性,即使是最安全的网络也会受到入侵。当然,用户下载的软件,扩展程序,实用程序,插件和其他看起来还好的软件总是有风险的,成为了恶意软件出现在系统上的一种途径.
给你的用户部署的哪种操作系统确实对你的安全态势产生了影响,但那并不是一个可靠的安全措施。首先,现在的攻击很可能会发生,因为攻击者探测的是你的用户,而不是你的系统。一项对参加了 DEFCON 会议的黑客的[调查][1]表明“84% 的人使用社交工程作为攻击策略的一部分。”部署安全的操作系统只是一个重要的起点,但如果没有用户培训、强大的防火墙和持续的警惕性,即使是最安全的网络也会受到入侵。当然,用户下载的软件、扩展程序、实用程序、插件和其他看起来无害的软件总是有风险,会成为恶意软件出现在系统上的一种途径。
无论你选择哪种平台,保持你系统安全最好的方法之一就是确保立即应用了软件更新。一旦补丁正式发布,黑客就可以对其进行反向工程并找到一种新的漏洞,以便在下一波攻击中使用。
而且别忘了最基本的操作。别用 root 权限,别授权用户连接到网络中的老服务器上。教您的用户如何挑选一个真正的好密码并且使用例如 [1Password][2] 这样的工具,以便在每个他们使用的帐户和网站上拥有不同的密码
而且别忘了最基本的操作。别用 root 权限,别授权访客连接到网络中的老服务器上。教您的用户如何挑选一个真正的好密码并且使用例如 [1Password][2] 这样的工具,以便在每个他们使用的帐户和网站上拥有不同的密码
因为底线是您对系统做出的每一个决定都会影响您的安全性,即使您的用户工作使用的操作系统也是如此。
### Windows流行之选
若你是一个安全管理人员,很可能文章中提出的问题就会变成这样:是否我们远离微软的 Windows 会更安全呢?说 Windows 主导业市场都是低估事实了。[NetMarketShare][4] 估计互联网上 88% 的电脑令人震惊地运行着 Windows 的某个版本。
若你是一个安全管理人员,很可能文章中提出的问题就会变成这样:我们远离微软的 Windows 会更安全吗?说 Windows 主导企业市场都是低估事实了。[NetMarketShare][4] 估计,互联网上令人震惊地有 88% 的电脑运行着 Windows 的某个版本。
如果你的系统在这 88 之中,你可能知道微软会继续加强 Windows 系统的安全性。不断重写其改进或者重新改写了其代码库,增加了它的反病毒软件系统,改进了防火墙以及实现了沙箱架构,这样在沙箱里的程序就不能访问系统的内存空间或者其他应用程序。
如果你的系统在这 88% 之中,你可能知道微软会继续加强 Windows 系统的安全性。它不断改进或重写其代码库,增加了自己的反病毒软件系统,改进了防火墙,并实现了沙箱架构,这样沙箱里的程序就不能访问系统的内存空间或者其他应用程序。
但可能 Windows 的流行本身就是个问题操作系统的安全性可能很大程度上依赖于装机用户量的规模。对于恶意软件作者来说Windows 提供了大的施展平台。专注其中可以让他们的努力发挥最大作用。
像 Troy WilkinsonAxiom Cyber Solutions 的 CEO 解释的那样“Windows 总是因为很多原因而导致安全性保障来的最晚,主要是因为消费者的采用率。由于市场上大量基于 Windows 的个人电脑,黑客历来最有针对性地将这些系统作为目标。”
像 Troy WilkinsonAxiom Cyber Solutions 的 CEO解释的那样“Windows 总是因为很多原因而在安全性上排在最后,主要是因为消费者的采用率。由于市场上有大量基于 Windows 的个人电脑,黑客历来都是针对性地将这些系统作为目标。”
可以肯定地说,从梅丽莎病毒到 WannaCry 或者更厉害的,世界上许多已知的恶意软件都早已对准了 Windows 系统。
### macOS X 以及通过隐匿实现的安全
如果最流行的操作系统总是成为大目标,那么用一个不流行的操作系统能确保安全吗?这个主意是老法新用——而且是完全不可信的概念——“通过隐匿实现的安全”,这秉承了让软件内部运作保持专有,从而不为人知是抵御攻击的最好方法的理念。
如果最流行的操作系统总是成为大目标,那么用一个不流行的操作系统能确保安全吗?这个主意是老法新用——而且是完全不可信的概念——“通过隐匿实现的安全”,这秉承了让软件内部运作保持专有,从而不为人知是抵御攻击的最好方法的理念。
Wilkinson 坦言macOS X “比 Windows 更安全”,但他急于补充说“macOS 曾被认为是一个安全漏洞很小的完全安全的操作系统,但近年来,我们看到黑客制造了攻击苹果系统的额外漏洞。”
Wilkinson 坦言macOS X “比 Windows 更安全”,但他马上补充说“macOS 曾被认为是一个安全漏洞很小的完全安全的操作系统,但近年来,我们看到黑客制造了攻击苹果系统的额外漏洞。”
换句话说,攻击者会扩大活动范围而不会无视 Mac 领域。
Comparitech 的安全研究员 Lee Muson 说在选择更安全的操作系统时“macOS 很可能是被选中的目标”,但他提醒说,这一想法并不令人费解。它的优势在于“它仍然受益于通过隐匿实现的安全感和微软提供的更大的目标。”
Comparitech 的安全研究员 Lee Muson 说,在选择更安全的操作系统时,“macOS 很可能是被选中的目标”,但他提醒说,这一想法并不令人费解。它的优势在于“它仍然受益于通过隐匿实现的安全感,以及微软的操作系统是个更大的攻击目标这一事实”。
Wolf Solutions 公司的 Joe Moore 给予了苹果更多的信任,称“现成的 macOS X 在安全方面有着良好的记录,部分原因是它不像 Windows 那么广泛,而且部分原因是苹果公司在安全问题上干的不错。”
@ -66,17 +65,17 @@ Wolf Solutions 公司的 Joe Moore 给予了苹果更多的信任,称“现成
像 Moore 解释的那样“Linux 有可能是最安全的,但要求用户是资深用户。”所以,它不是针对所有人的。
将安全性作为主要功能的 Linux 发行版包括 Parrot Linux这是一个基于 Debian 的发行版Moore 说,它提供了许多与安全相关开箱即用的工具。
将安全性作为主要功能的 Linux 发行版包括 [Parrot Linux][5],这是一个基于 Debian 的发行版Moore 说,它提供了许多与安全相关开箱即用的工具。
当然,一个重要的区别是 Linux 是开源的。Simplex Solutions 的 CISO Igor Bidenko 说,编码人员可以阅读和评论彼此工作的现实看起来像是一场安全噩梦,但这确实是让 Linux 如此安全的重要原因。 “Linux 是最安全的操作系统,因为它的源代码是开放的。任何人都可以查看它,并确保没有错误或后门。”
当然,一个重要的区别是 Linux 是开源的。Simplex Solutions 的 CISO Igor Bidenko 说,编码人员可以阅读和审查彼此工作的现实看起来像是一场安全噩梦,但这确实是让 Linux 如此安全的重要原因。 “Linux 是最安全的操作系统,因为它的源代码是开放的。任何人都可以查看它,并确保没有错误或后门。”
Wilkinson 阐述说“Linux 和基于 Unix 的操作系统具有较少的信息安全领域已知的、可利用的安全缺陷。技术社区对 Linux 代码进行了审查,该代码有助于提高安全性:通过进行这么多的监督,易受攻击之处、漏洞和威胁就会减少。”
Wilkinson 阐述说“Linux 和基于 Unix 的操作系统具有较少的信息安全领域已知的、可利用的安全缺陷。技术社区对 Linux 代码进行了审查,该代码有助于提高安全性:通过进行这么多的监督,易受攻击之处、漏洞和威胁就会减少。”
这是一个微妙而违反直觉的解释,但是通过让数十人(有时甚至数百人)通读操作系统中的每一行代码,代码实际上更加健壮,并且发布漏洞错误的机会减少了。这与 PC World 为何出来说 Linux 更安全有很大关系。正如 Katherine Noyes 解释的那样,“微软可能吹捧它的大型付费开发者团队,但团队不太可能与基于全球的 Linux 用户开发者进行比较。 安全只能通过所有额外的关注获益。”
这是一个微妙而违反直觉的解释,但是通过让数十人(有时甚至数百人)通读操作系统中的每一行代码,代码实际上更加健壮,并且发布漏洞错误的机会减少了。这与 PC World 为何出来说 Linux 更安全有很大关系。正如 Katherine Noyes [解释][6]的那样,“微软可能吹捧它的大型付费开发者团队,但团队不太可能与基于全球的 Linux 用户开发者进行比较。 安全只能通过所有额外的关注获益。”
另一个被 《PC World》举例的原因是 Linux 更好的用户特权模式Windows 用户“一般被默认授予管理员权限,那意味着他们几乎可以访问系统中的一切Noye 的文章讲到。Linux反而很好地限制了“root”权限。
另一个被《PC World》举例的原因是 Linux 更好的用户特权模式Noyes 的文章讲到Windows 用户“一般被默认授予管理员权限那意味着他们几乎可以访问系统中的一切”而 Linux 则很好地限制了 “root” 权限。
Noyes 还指出Linux 环境下的多样性可能比典型的 Windows 单一文化更好地对抗攻击Linux 有很多不同的发行版。其中一些以其特别的安全关注点进行差异化。Comparitech 的安全研究员 Lee Muson 为 Linux 发行版提供了这样的建议“Qubes OS 对于 Linux 来说是一个很好的出发点,现在你可以发现,爱德华·斯诺登的认可大大地掩盖了它自己极其不起眼的主张。”其他安全性专家指出了专门的安全 Linux 发行版,如 Tails Linux它旨在直接从 USB 闪存驱动器或类似的外部设备安全地匿名运行。
Noyes 还指出Linux 环境下的多样性可能比典型的 Windows 单一文化更好地对抗攻击Linux 有很多不同的发行版。其中一些以其特别的安全关注点差异化。Comparitech 的安全研究员 Lee Muson 为 Linux 发行版提供了这样的建议:“[Qubes OS][7] 对于 Linux 来说是一个很好的出发点,现在你可以发现,[爱德华·斯诺登的认可][8]极大地超过了其谦逊的宣传。”其他安全性专家指出了专门的安全 Linux 发行版,如 [Tails Linux][9],它旨在直接从 USB 闪存驱动器或类似的外部设备安全地匿名运行。
### 构建安全趋势


@ -1,12 +1,15 @@
献给写作者的 Linux 工具
======
> 这些易用的开源应用可以帮助你打磨你的写作技巧、使研究更高效、更具有条理。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_pencils.png?itok=U2FwL2LA)
如果你已经阅读过[我关于如何切换到 Linux 的文章][1],那么你就知道我是一个超级用户。另外,我不是任何方面的“专家”,这点仍然可以相信。但是在过去几年里我学到了很多有用的东西,我想将这些技巧传给其他新的 Linux 用户。
如果你已经阅读过[我关于如何切换到 Linux 的文章][1],那么你就知道我是一个超级用户。另外,我不是任何方面的“专家”,目前仍然如此。但是在过去几年里我学到了很多有用的东西,我想将这些技巧传给其他新的 Linux 用户。
今天,我将讨论我写作时使用的工具,基于三个标准来选择:
1. 当我提交故事或文章时,我的主要写作工具必须与任何发布者兼容。
1. 当我提交作品或文章时,我的主要写作工具必须与任何发布者兼容。
2. 该软件使用起来必须简单快捷。
3. 免费(自由)是很棒的。
@ -16,30 +19,29 @@
2. [Manuskript][3]
3. [oStorybook][4]
但是,当我试图寻找信息时,我往往会迷失方向并失去思路,所以我选择了适合我需求的多个应用程序。另外,我不想依赖互联网,以免服务下线。我把这些程序放在显示器桌面上,以便我一下全看到它们。
但是,当我试图寻找信息时,我往往会迷失方向并失去思路,所以我选择了适合我需求的多个应用程序。另外,如果服务停止的话,我不想依赖互联网。我在监视器上设置了这些程序,以便我可以马上看到它们。
请考虑以下工具建议 每个人的工作方式都不相同,并且你可能会发现一些更适合你工作方式的其他应用程序。以下这些工具是目前的写作工具:
请考虑以下工具建议 - 每个人的工作方式都不相同,并且你可能会发现一些更适合你工作方式的其他应用程序。这些工具是目前的写作工具:
### 文字处理器
### Word 处理器
[LibreOffice 6.0.1][5]。直到最近,我使用了 [WPS][6]但由于字体渲染问题Times New Roman 总是以粗体显示而否定了它。LibreOffice 的最新版本非常适合 Microsoft Office事实上它是开源的这对我来说很重要。
[LibreOffice 6.0.1][5]。直到最近,我使用的是 [WPS][6]但由于字体渲染问题Times New Roman 总是以粗体显示而放弃了它。LibreOffice 的最新版本与 Microsoft Office 非常兼容,而且事实上它是开源的,这对我来说很重要。
### 词库
[Artha][7] 可以给出同义词,反义词,派生词等等。它外观干净,速度快。例如,输入 “fast” 这个词你会得到字典定义以及上面列出的其他选项。Artha 是送给开源社区的一个巨大的礼物,更多的人应该尝试它因为它似乎是一个模糊to 校正者:这里模糊一次感觉不太恰当,或许是不太出名的)的小程序。如果你使用 Linux请立即安装此应用程序你不会后悔的。
[Artha][7] 可以给出同义词、反义词、派生词等等。它外观整洁、速度快。例如,输入 “fast” 这个词你会得到字典定义以及上面列出的其他选项。Artha 是送给开源社区的一个巨大的礼物,人们应该试试它,因为它似乎是一个冷僻的小程序。如果你使用 Linux请立即安装此应用程序你不会后悔的。
### 记笔记
[Zim][8] 标榜自己是一个桌面维基,但它也是你在任何地方都能找到的最简单的多层笔记应用程序。还有其它更漂亮的笔记程序,但 Zim 正是那种我需要管理角色,地点,情节和次要情节的程序。
[Zim][8] 标榜自己是一个桌面维基,但它也是你所能找到的最简单的多层级笔记应用程序。还有其它更漂亮的笔记程序,但 Zim 正是那种我需要管理角色、地点、情节和次要情节的程序。
### Submission tracking
### 投稿跟踪
我曾经使用过一款名为 [FileMaker Pro][9] 的专有软件,它让我心烦to 校正者:这句话注意一下)。有很多数据库应用程序,但在我看来,最简单的一个就是 [Glom][10]。它以图形方式满足我的需求,让我以表单形式输入信息而不是表格。在 Glom 中,你可以创建你需要的表单,这样你就可以立即看到相关信息(对于我来说,通过电子表格来查找信息就像将我的眼球拖到玻璃碎片上)。尽管 Glom 不再处于开发阶段,但它仍然是很棒的。
我曾经使用过一款名为 [FileMaker Pro][9] 的专有软件,它惯坏了我。有很多数据库应用程序,但在我看来,最容易使用的莫过于 [Glom][10] 了。它以图形方式满足我的需求,让我以表单形式输入信息而不是表格。在 Glom 中,你可以创建你需要的表单,这样你就可以立即看到相关信息(对于我来说,通过电子表格来查找信息就像将我的眼球拖到玻璃碎片上)。尽管 Glom 不再处于开发阶段,但它仍然是很棒的。
### 搜索
我已经开始使用 [StartPage.com][11] 作为我的默认搜索引擎。当然,当你写作时,[Google][12] 可以成为你最好的朋友之一。但我不喜欢每次我想了解特定人物地点或事物时Google 都会跟踪我。所以我使用 StartPage.com 来代替。它速度很快,并且不会跟踪你的搜索。我也使用 [DuckDuckGo.com][13] 作为 Google 的替代品。
我已经开始使用 [StartPage.com][11] 作为我的默认搜索引擎。当然,当你写作时,[Google][12] 可以成为你最好的朋友之一。但我不喜欢每次我想了解特定人物、地点或事物时Google 都会跟踪我。所以我使用 StartPage.com 来代替。它速度很快,并且不会跟踪你的搜索。我也使用 [DuckDuckGo.com][13] 作为 Google 的替代品。
### 其他的工具
@ -47,7 +49,7 @@
尽管来自 [Mozilla][17] 的 [Thunderbird][16] 是一个很棒的程序,但我发现 [Geary][18] 是一个更快更轻的电子邮件应用程序。有关开源电子邮件应用程序的更多信息,请阅读 [Jason Baker][19] 的优秀文章:[6 个开源的桌面电子邮件客户端][20]。
正如你可能已经注意到,我对应用程序的喜爱趋向于在 WindowsMacOS 都能运行to 校正者:此处小心)以及此处提到的开源 Linux 替代品。我希望这些建议能帮助你发现有用的新方法来撰写并跟踪你的写作谢谢你Artha
正如你可能已经注意到的,我对应用程序的喜爱趋向于把 Windows、MacOS 上最好的应用与此处提到的开源 Linux 替代品融合在一起。我希望这些建议能帮助你发现有用的新方法来撰写并跟踪你的写作谢谢你Artha
写作愉快!
@ -57,7 +59,7 @@ via: https://opensource.com/article/18/3/top-Linux-tools-for-writers
作者:[Adam Worth][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,58 +1,57 @@
如何使用树莓派制作一个数字针孔摄像头
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rasp-pi-pinhole-howto.png?itok=ubmevVZB)
在 2015 年底的时候,树莓派基金会发布了一个非常小的 [Raspberry Pi Zero][1],这让大家感到很意外。更夸张的是,他们随 MagPi 杂志一起 [免费赠送][2]。我看到这个消息后立即冲出去到处找报刊亭,直到我在这一区域的某处找到最后两份。实际上我还没有想好如何去使用它们,但是我知道,因为它们非常小,所以,它们可以做很多全尺寸树莓派没法做的一些项目。
> 学习如何使用一个树莓派 Zero、高清网络摄像头和一个空的粉盒来搭建一个简单的相机。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rasp-pi-pinhole-howto.png?itok=ubmevVZB)
在 2015 年底的时候,树莓派基金会发布了一个让大家很惊艳的非常小的 [树莓派 Zero][1]。更夸张的是,他们随 MagPi 杂志一起 [免费赠送][2]。我看到这个消息后立即冲出去到处找报刊亭,直到我在这一地区的某处找到最后两份。实际上我还没有想好如何去使用它们,但是我知道,因为它们非常小,所以,它们可以做很多全尺寸树莓派没法做的一些项目。
![Raspberry Pi Zero][4]
从 MagPi 杂志上获得的树莓派 Zero。CC BY-SA.4.0。
*从 MagPi 杂志上获得的树莓派 Zero。CC BY-SA.4.0。*
因为我对天文摄影非常感兴趣,我以前还改过一台微软出的 LifeCam Cinema 高清网络摄像头,拆掉了它的外壳、镜头、以及红外滤镜,露出了它的 [CCD 芯片][5]。我把它定制为我的 Celestron 天文望远镜的目镜。用它我捕获到了令人难以置信的木星照片、月球上的陨石坑、以及太阳黑子的特写镜头(使用了适当的 Baader 安全保护膜)。
因为我对天文摄影非常感兴趣,我以前还改过一台微软出的 LifeCam Cinema 高清网络摄像头,拆掉了它的外壳、镜头、以及红外滤镜,露出了它的 [CCD 芯片][5]。我把它定制为我的 Celestron 天文望远镜的目镜。用它我捕获到了令人难以置信的木星照片、月球上的陨石坑、以及太阳黑子的特写镜头(使用了适当的 Baader 安全保护膜)。
在那之前,我甚至还在我的使用胶片的 SLR 摄像机上,通过在镜头盖(这个盖子就是在摄像机上没有安装镜头时,用来保护摄像机的内部元件的那个盖子)上钻一个小孔来变成一个 [针孔摄像][6],将这个钻了小孔的盖子,盖到一个汽水罐上切下来的小圆盘上,以提供一个针孔。碰巧有一天,在我的桌子上发现了一个可以用来做针孔体的盖子,随后我将它改成了用于天文摄像的网络摄像头。我很好奇这个网络摄像头是否有从针孔盖子后面捕获低照度图像的能力。我花了一些时间使用 [GNOME Cheese][7] 应用程序,去验证这个针孔摄像头确实是个可行的创意。
在那之前,我甚至还在我的使用胶片的 SLR 摄像机上,通过在镜头盖(这个盖子就是在摄像机上没有安装镜头时,用来保护摄像机的内部元件的那个盖子)上钻一个小孔来变成一个 [针孔摄像][6],将这个钻了小孔的盖子,盖到一个汽水罐上切下来的小圆盘上,以提供一个针孔。碰巧有一天,这个放在我的桌子上的针孔镜头盖被改成了用于天文摄像的网络摄像头。我很好奇这个网络摄像头是否有从针孔盖子后面捕获低照度图像的能力。我花了一些时间使用 [GNOME Cheese][7] 应用程序,去验证这个针孔摄像头确实是个可行的创意。
自从有了这个想法,我就有了树莓派 Zero 的一个用法!针孔摄像机一般都非常小,除了曝光时间和胶片的 ISO 速率外,一般都不提供其它的控制选项。数字摄像机就不一样了,它至少有 20 多个按钮和成百上千的设置菜单。我的数字针孔摄像头的目标是真实反映天文摄像的传统风格,设计一个没有控制选项的极简设备,甚至连曝光时间控制也没有。
用树莓派 Zero、高清网络镜头和空粉盒设计的数字针孔摄像头是我设计的 [一系列][9] 针孔摄像头的 [第一个项目][8]。现在,我们开始来制作它。
用树莓派 Zero、高清网络镜头和空粉盒设计的数字针孔摄像头,是我设计的 [一系列][9] 针孔摄像头的 [第一个项目][8]。现在,我们开始来制作它。
### 硬件
因为我手头已经有了一个树莓派 Zero为完成这个项目我还需要一个网络摄像头。这个树莓派 Zero 在英国的零售价是 4 英磅,这个项目其它部件的价格,我希望也差不多是这样的价格水平。花费 30 英磅买来的摄像头安装在一个 4 英磅的计算机主板上,让我感觉有些不平衡。显而易见的答案是前往一个知名的拍卖网站上,去找一些二手的网络摄像头。不久之后,我仅花费了 1 英磅再加一些运费,获得了一个普通的高清摄像头。之后,在 Fedora 上做了一些测试,以确保它可以正常使用,然后我拆掉了它的外壳,以检查它的电子元件的大小是否适合我的项目。
![Hercules DualPix HD webcam][11]
Hercules DualPix 高清网络摄像头,它将被解剖以提取它的电路板和 CCD 图像传感器。CC BY-SA 4.0.
*Hercules DualPix 高清网络摄像头,它将被解剖以提取它的电路板和 CCD 图像传感器。CC BY-SA 4.0.*
接下来,我需要一个安放摄像头的外壳。树莓派 Zero 电路板大小仅仅为 65mm x 30mm x 5mm。虽然网络摄像头的 CCD 芯片周围有一个用来安装镜头的塑料支架,但是,实际上它的电路板尺寸更小。我在家里找了一圈,希望能够找到一个适合盛放这两个极小的电路板的容器。最后,我发现我妻子的粉盒足够去安放树莓派的电路板。稍微做一些小调整,似乎也可以将网络摄像头的电路板放进去。
![Powder compact][13]
变成我的针孔摄像头外壳的粉盒。CC BY-SA 4.0.
*变成我的针孔摄像头外壳的粉盒。CC BY-SA 4.0.*
我拆掉了网络摄像头外壳上的一些螺丝,取出了它的内部元件。网络摄像头外壳的大小反映了它的电路板的大小或 CCD 的位置。我很幸运,这个网络摄像头很小而且它的电路板的布局也很方便。因为我要做一个针孔摄像头,我需要取掉镜,露出 CCD 芯片。
我拆掉了网络摄像头外壳上的一些螺丝,取出了它的内部元件。网络摄像头外壳的大小反映了它的电路板的大小或 CCD 的位置。我很幸运,这个网络摄像头很小,而且它的电路板的布局也很方便。因为我要做一个针孔摄像头,我需要取掉镜头,露出 CCD 芯片。
它的塑料外壳大约有 1 厘米高,它太高了没有办法放进粉盒里。我拆掉了电路板后面的螺丝,将它完全拆开,我认为将它放在盒子里有助于阻挡从缝隙中来的光线,因此,我用一把工艺刀将它修改成 4 毫米高,然后将它重新安装。我折弯了 LED 的支脚以降低它的高度。最后,我切掉了安装麦克风的塑料管,因为我不想采集声音。
![Bare CCD chip][15]
取下镜头以后,就可以看到裸露的 CCD 芯片了。圆柱形的塑料柱将镜头固定在合适的位置上,并阻挡 LED 光进入镜头破坏图像。CC BY-SA 4.0
*取下镜头以后,就可以看到裸露的 CCD 芯片了。圆柱形的塑料柱将镜头固定在合适的位置上,并阻挡 LED 光进入镜头破坏图像。CC BY-SA 4.0*
网络摄像头有一个很长的带全尺寸插头的 USB 线缆,而树莓派 Zero 使用的是一个 Micro-USB 插座,因此,我需要一个 USB-to-Micro-USB 适配器。但是,使用适配器插入,这个树莓派将放不进这个粉盒中,更不用说还有将近一米长的 USB 线缆。因此,我用刀将 Micro-USB 适配器削开,切掉了它的 USB 插座并去掉这些塑料,露出连接到 Micro-USB 插头上的金属材料。同时也切掉了网络摄像头上大约 6 厘米长的 USB 电缆,并剥掉裹在它外面的锡纸,露出它的四根电线。我把它们直接焊接到 Micro-USB 插头上。现在网络摄像头可以插入到树莓派 Zero 上了,并且电线也可以放到粉盒中了。
网络摄像头有一个很长的带全尺寸插头的 USB 线缆,而树莓派 Zero 使用的是一个 Micro-USB 插座,因此,我需要一个 USB 转 Micro-USB 的适配器。但是,使用适配器插入,这个树莓派将放不进这个粉盒中,更不用说还有将近一米长的 USB 线缆。因此,我用刀将 Micro-USB 适配器削开,切掉了它的 USB 插座并去掉这些塑料,露出连接到 Micro-USB 插头上的金属材料。同时也把网络摄像头的 USB 电缆切到大约 6 厘米长,并剥掉裹在它外面的锡纸,露出它的四根电线。我把它们直接焊接到 Micro-USB 插头上。现在网络摄像头可以插入到树莓派 Zero 上了,并且电线也可以放到粉盒中了。
![Modified USB plugs][17]
网络摄像头使用的 Micro-USB 插头已经剥掉了线,并直接焊接到触点上。这个插头现在插入到树莓派 Zero 后大约仅高出树莓派 1 厘米。CC BY-SA 4.0
*网络摄像头使用的 Micro-USB 插头已经剥掉了线,并直接焊接到触点上。这个插头现在插入到树莓派 Zero 后大约仅高出树莓派 1 厘米。CC BY-SA 4.0*
最初,我认为到此为止,已经全部完成了电子设计部分,但是在测试之后,我意识到,如果摄像头没有捕获图像或者甚至没有加电我都不知道。我决定使用树莓派的 GPIO 针脚去驱动 LED 指示灯。一个黄色的 LED 表示网络摄像头控制软件已经运行,而一个绿色的 LED 表示网络摄像头正在捕获图像。我在 BCM 的 17 号和 18 号针脚上各自串接一个 300 欧姆的电阻,并将它们各自连接到 LED 的正极上,然后将 LED 的负极连接到一起并接入到公共地针脚上。
![LEDs][19]
LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号针脚上负极连接到公共地针脚。CC BY-SA 4.0.
*LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号针脚上负极连接到公共地针脚。CC BY-SA 4.0.*
接下来,该去修改粉盒了。首先,我去掉了卡在粉盒上的托盘以腾出更多的空间,并且用刀将连接处切开。我打算在一个便携式充电宝上运行树莓派 Zero充电宝肯定是放不到这个盒子里面因此我挖了一个孔这样就可以引出 USB 连接头。LED 的光需要能够从盒子外面看得见,因此,我在盖子上钻了两个 3 毫米的小孔。
@ -60,29 +59,29 @@ LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号
![Bottom of the case with the pinhole aperture][21]
带针孔的盒子底部。CC BY-SA 4.0
*带针孔的盒子底部。CC BY-SA 4.0*
剩下的工作就是将这些已经改造完成的设备封装起来。首先我使用蓝色腻子将摄像头的电路板固定在盒子中合适的位置,这样针孔就直接处于 CCD 之上了。使用蓝色腻子的好处是,如果我需要清理污渍(或者如果放错了位置)时,就可以很容易地重新安装 CCD 了。将树莓派 Zero 直接放在摄像头电路板上面。为防止这两个电路板之间可能出现的短路情况,我在这两个电路板之间放了几层防静电袋
剩下的工作就是将这些已经改造完成的设备封装起来。首先我使用蓝色腻子将摄像头的电路板固定在盒子中合适的位置,这样针孔就直接处于 CCD 之上了。使用蓝色腻子的好处是,如果我需要清理污渍(或者如果放错了位置)时,就可以很容易地重新安装 CCD 了。将树莓派 Zero 直接放在摄像头电路板上面。为防止这两个电路板之间可能出现的短路情况,我在树莓派的背面贴了几层防静电胶带
[树莓派 Zero][22] 非常适合放到这个粉盒中,并且不需要任何固定,而从小孔中穿出去连接充电宝的 USB 电缆需要将它粘住固定。最后,我将 LED 塞进了前面在盒子上钻的孔中,并用胶水将它们固定住。我在 LED 的针脚之中放了一些防静电,以防止盒子盖上时,它与树莓派电路板接触而发生短路。
[树莓派 Zero][22] 非常适合放到这个粉盒中,并且不需要任何固定,而从小孔中穿出去连接充电宝的 USB 电缆需要将它粘住固定。最后,我将 LED 塞进了前面在盒子上钻的孔中,并用胶水将它们固定住。我在 LED 的针脚之中放了一些防静电胶带,以防止盒子盖上时,它与树莓派电路板接触而发生短路。
![Raspberry Pi Zero slotted into the case][24]
树莓派 Zero 塞入到这个盒子中后,周围的空隙不到 1mm。从盒子中引出的连接到网络摄像头上的 Micro-USB 插头接下来需要将它连接到充电宝上。CC BY-SA 4.0
*树莓派 Zero 塞入到这个盒子中后,周围的空隙不到 1mm。从盒子中引出的连接到网络摄像头上的 Micro-USB 插头接下来需要将它连接到充电宝上。CC BY-SA 4.0*
### 软件
当然,计算机硬件离开控制它的软件是不能使用的。树莓派 Zero 同样可以运行全尺寸树莓派能够运行的软件,但是,因为树莓派 Zero 的 CPU 速度比较慢,让它去引导传统的 [Raspbian OS][25] 镜像非常耗时。打开摄像头都要花费差不多一分钟的时间,这样的速度在现实中是没有什么用处的。而且,一个完整的树莓派操作系统对我的这个摄像头项目来说也没有必要。甚至是,我禁用了引导时启动的所有可禁用的服务,启动仍然需要很长的时间。因此,我决定仅使用需要的软件,我将用一个 [U-Boot][26] 引导加载器和 Linux 内核。自定义 `init` 二进制文件从 microSD 卡上加载 root 文件系统读入驱动网络摄像头所需要的内核模块,并将它放在 `/dev` 目录下,然后运行二进制的应用程序。
当然,计算机硬件离开控制它的软件是不能使用的。树莓派 Zero 同样可以运行全尺寸树莓派能够运行的软件,但是,因为树莓派 Zero 的 CPU 速度比较慢,让它去引导传统的 [Raspbian OS][25] 镜像非常耗时。打开摄像头都要花费差不多一分钟的时间,这样的速度在现实中是没有什么用处的。而且,一个完整的树莓派操作系统对我的这个摄像头项目来说也没有必要。甚至是,我禁用了引导时启动的所有可禁用的服务,启动仍然需要很长的时间。因此,我决定仅使用需要的软件,我将用一个 [U-Boot][26] 引导加载器和 Linux 内核。自定义的 `init` 二进制文件从 microSD 卡上加载 root 文件系统,读入驱动网络摄像头所需要的内核模块,并将它放在 `/dev` 目录下,然后运行二进制的应用程序。
这个二进制的应用程序是另一个定制的 C 程序,它做的核心工作就是管理摄像头。首先,它等待内核驱动程序去初始化网络摄像头、打开它、以及通过低级的 `v4l ioctl` 调用去初始化它。GPIO 针是通过 `/dev/mem` 寄存器被配置为驱动 LED。
这个二进制的应用程序是另一个定制的 C 程序,它做的核心工作就是管理摄像头。首先,它等待内核驱动程序去初始化网络摄像头、打开它、以及通过低级的 `v4l ioctl` 调用去初始化它。GPIO 针脚则通过 `/dev/mem` 寄存器被配置来驱动 LED。
初始化完成之后,摄像头进入一个 loop 循环。每个图像捕获循环是摄像头使用缺省配置,以 JPEG 格式捕获一个单一的图像帧、保存这个图像帧到 SD 卡、然后休眠三秒。这个循环持续运行直到断电为止。这已经很完美地实现了我的最初目标,那就是用一个传统的模拟的针孔摄像头设计一个简单的数字摄像头。
初始化完成之后,摄像头进入一个循环。每个图像捕获循环是摄像头使用缺省配置,以 JPEG 格式捕获一个单一的图像帧、保存这个图像帧到 SD 卡、然后休眠三秒。这个循环持续运行直到断电为止。这已经很完美地实现了我的最初目标,那就是用一个传统的模拟的针孔摄像头设计一个简单的数字摄像头。
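这个捕获循环的控制流可以用 shell 大致示意如下(`capture_frame` 是演示用的假想函数,真实程序中对应通过 v4l 调用抓取一帧并写入 SD 卡;循环次数和休眠时间这里也做了缩减):

```shell
# 示意针孔相机主循环的控制流capture_frame 为演示用的假想函数
capture_frame() { date +'frame-%s.jpg'; }   # 真实程序:抓取一帧 JPEG 并保存到 SD 卡
for i in 1 2 3; do                          # 真实程序:无限循环,直到断电
    capture_frame
    sleep 0.1                               # 真实程序:每帧之间休眠 3 秒
done
```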
定制的用户空间 [代码][27] 在遵守 [GPLv3][28] 或者更新版许可下自由使用。树莓派 Zero 需要 ARMv6 的二进制文件,因此,我使用了 [QEMU ARM][29] 模拟器在一台 x86_64 主机上进行编译,它使用了 [Pignus][30] 发行版(一个针对 ARMv6 移植/重构建的 Fedora 23 版本)下的工具链,在 `chroot` 下进行编译。所有的二进制文件都静态链接了 [glibc][31],因此,它们都是自包含的。我构建了一个定制的 RAMDisk 去包含这些二进制文件和所需的内核模块,并将它拷贝到 SD 卡,这样引导加载器就可以找到它们了。
定制的用户空间 [代码][27] 在遵守 [GPLv3][28] 或者更新版许可下自由使用。树莓派 Zero 需要 ARMv6 的二进制文件,因此,我使用了 [QEMU ARM][29] 模拟器在一台 x86_64 主机上进行编译,它使用了 [Pignus][30] 发行版(一个针对 ARMv6 移植/重构建的 Fedora 23 版本)下的工具链,在 `chroot` 环境下进行编译。所有的二进制文件都静态链接了 [glibc][31],因此,它们都是自包含的。我构建了一个定制的 RAMDisk 去包含这些二进制文件和所需的内核模块,并将它拷贝到 SD 卡,这样引导加载器就可以找到它们了。
![Completed camera][33]
最终完成的摄像机完全隐藏在这个粉盒中了。唯一露在外面的东西就是 USB 电缆。CC BY-SA 4.0
*最终完成的摄像机完全隐藏在这个粉盒中了。唯一露在外面的东西就是 USB 电缆。CC BY-SA 4.0*
### 拍照
@ -92,28 +91,28 @@ LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号
![Picture of houses taken with pinhole webcam][35]
在伦敦大街上的屋顶。CC BY-SA 4.0
*在伦敦大街上的屋顶。CC BY-SA 4.0*
![Airport photo][37]
范堡罗机场的老航站楼。CC BY-SA 4.0
*范堡罗机场的老航站楼。CC BY-SA 4.0*
最初,我只是想使用摄像头去捕获一些静态图像。后面,我降低了 loop 循环的延迟时间,从三秒改为一秒,然后用它捕获一段时间内的一系列图像。我使用 [GStreamer][38] 将这些图像做成了延时视频。
最初,我只是想使用摄像头去捕获一些静态图像。后面,我降低了循环的延迟时间,从三秒改为一秒,然后用它捕获一段时间内的一系列图像。我使用 [GStreamer][38] 将这些图像做成了延时视频。
以下是我创建视频的过程:
[][38]
视频是我在某天下班之后,从银行沿着泰晤式河到滑铁卢的画面。以每分钟 40 帧捕获的 1200 帧图像被我制作成了每秒 20 帧的动画。
*视频是我在某天下班之后,从银行站Bank沿着泰晤士河走到滑铁卢的画面。以每分钟 40 帧捕获的 1200 帧图像被我制作成了每秒 20 帧的动画。*
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/how-build-digital-pinhole-camera-raspberry-pi
作者:[Daniel Berrange][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -5,7 +5,7 @@ Oracle Linux 系统如何去注册使用坚不可摧 Linux 网络ULN
甚至我也不知道关于它的信息,我也是最近才了解到它,想将这些内容分享给其他人。因此写了这篇文章,它将指导你注册 Oracle Linux 系统以使用坚不可摧 Linux 网络ULN
这将允许你去注册系统以获得软件更新和其它的 ASAP 补丁。
这将允许你去注册系统以尽快获得软件更新和其它的补丁。
### 什么是坚不可摧 Linux 网络


@ -10,12 +10,13 @@
### 在 Linux 中禁用内置摄像头
首先,使用如下命令找到网络摄像头驱动:
```
$ sudo lsmod | grep uvcvideo
```
**示例输出:**
示例输出:
```
uvcvideo 114688 1
videobuf2_vmalloc 16384 1 uvcvideo
@ -24,7 +25,6 @@ videobuf2_common 53248 2 uvcvideo,videobuf2_v4l2
videodev 208896 4 uvcvideo,videobuf2_common,videobuf2_v4l2
media 45056 2 uvcvideo,videodev
usbcore 286720 9 uvcvideo,usbhid,usb_storage,ehci_hcd,ath3k,btusb,uas,ums_realtek,ehci_pci
```
这里,**uvcvideo** 是我的网络摄像头驱动。
@ -32,45 +32,45 @@ usbcore 286720 9 uvcvideo,usbhid,usb_storage,ehci_hcd,ath3k,btusb,uas,ums_realte
现在,让我们禁用网络摄像头。
为此,请编辑以下文件(如果文件不存在,只需创建它):
```
$ sudo nano /etc/modprobe.d/blacklist.conf
```
添加以下行:
```
##Disable webcam.
blacklist uvcvideo
```
**“##Disable webcam”** 这行不是必需的。为了便于理解,我添加了它。
`##Disable webcam` 这行不是必需的。为了便于理解,我添加了它。
保存并退出文件。重启系统以使更改生效。
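如果不想打开编辑器,也可以用一条命令追加黑名单条目。下面的演示写入一个临时文件以便安全试验;在真实系统上,目标文件是 `/etc/modprobe.d/blacklist.conf`(需要 root 权限):

```shell
# 演示:追加禁用 uvcvideo 的黑名单条目(此处用临时文件代替 /etc/modprobe.d/blacklist.conf
conf=$(mktemp)
printf '%s\n' '##Disable webcam.' 'blacklist uvcvideo' >> "$conf"
grep '^blacklist' "$conf"   # 确认条目已写入
```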
要验证网络摄像头是否真的被禁用,请打开任何即时通讯程序或网络摄像头软件,如 Cheese 或 Guvcview。你会看到如下的空白屏幕。
**Cheese 输出:**
Cheese 输出:
![][2]
**Guvcview 输出:**
Guvcview 输出:
![][3]
看见了么?网络摄像头被禁用而无法使用。
要启用它,请编辑:
```
$ sudo nano /etc/modprobe.d/blacklist.conf
```
注释掉你之前添加的行。
```
##Disable webcam.
#blacklist uvcvideo
```
保存并关闭文件。然后,重启计算机以启用网络摄像头。
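取消黑名单同样可以脚本化:用 `sed` 把该行注释掉即可。下面仍以临时文件演示这一步(真实文件为 `/etc/modprobe.d/blacklist.conf`,需要 root 权限):

```shell
# 演示:注释掉黑名单条目以重新启用摄像头(此处用临时文件代替真实配置文件)
conf=$(mktemp)
printf 'blacklist uvcvideo\n' > "$conf"
sed -i 's/^blacklist uvcvideo/#blacklist uvcvideo/' "$conf"
cat "$conf"   # 输出:#blacklist uvcvideo
```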
@ -90,7 +90,7 @@ via: https://www.ostechnix.com/how-to-disable-built-in-webcam-in-ubuntu/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,16 +1,18 @@
8 个基本的 Docker 容器管理命令
======
利用这 8 个命令可以学习 Docker 容器的基本管理方式。这是一个为 Docker 初学者准备的,带有示范命令输出的指南。
> 利用这 8 个命令可以学习 Docker 容器的基本管理方式。这是一个为 Docker 初学者准备的,带有示范命令输出的指南。
![Docker 容器管理命令][1]
在这篇文章中,我们将带你学习 8 个基本的 Docker 容器命令,它们操控着 Docker 容器的基本活动,例如 <ruby>运行<rt>run</rt></ruby>, <ruby>列举<rt>list</rt></ruby>, <ruby>停止<rt>stop</rt></ruby>, <ruby>查看历史纪录<rt>view logs</rt></ruby>, <ruby>删除<rt>delete</rt></ruby>, 等等。如果你对 Docker 的概念很陌生,推荐你看看我们的 [介绍指南][2],来了解 Docker 的基本内容以及 [如何][3] 在 Linux 上安装 Docker. 现在让我们赶快进入要了解的命令:
在这篇文章中,我们将带你学习 8 个基本的 Docker 容器命令,它们操控着 Docker 容器的基本活动,例如 <ruby>运行<rt>run</rt></ruby> <ruby>列举<rt>list</rt></ruby> <ruby>停止<rt>stop</rt></ruby>、 查看<ruby>历史纪录<rt>logs</rt></ruby> <ruby>删除<rt>delete</rt></ruby> 等等。如果你对 Docker 的概念很陌生,推荐你看看我们的 [介绍指南][2],来了解 Docker 的基本内容以及 [如何][3] 在 Linux 上安装 Docker 现在让我们赶快进入要了解的命令:
### 如何运行 Docker 容器?
众所周知Docker 容器只是一个运行于<ruby>宿主操作系统<rt>host OS</rt></ruby>上的应用进程所以你需要一个镜像来运行它。Docker 镜像运行时的进程就叫做 Docker 容器。你可以加载本地 Docker 镜像,也可以从 Docker Hub 上下载。Docker Hub 是一个提供公有和私有镜像来进行<ruby>拉取<rt>pull</rt></ruby>操作的集中仓库。官方的 Docker Hub 位于 [hub.docker.com][4]. 当你指示 Docker 引擎运行容器时,它会首先搜索本地镜像,如果没有找到,它会从 Docker Hub 上拉取相应的镜像。
众所周知Docker 容器只是一个运行于<ruby>宿主操作系统<rt>host OS</rt></ruby>上的应用进程所以你需要一个镜像来运行它。Docker 镜像以进程的方式运行时就叫做 Docker 容器。你可以加载本地 Docker 镜像,也可以从 Docker Hub 上下载。Docker Hub 是一个提供公有和私有镜像来进行<ruby>拉取<rt>pull</rt></ruby>操作的集中仓库。官方的 Docker Hub 位于 [hub.docker.com][4]。 当你指示 Docker 引擎运行容器时,它会首先搜索本地镜像,如果没有找到,它会从 Docker Hub 上拉取相应的镜像。
让我们运行一个 Apache web 服务器的 Docker 镜像,比如 httpd 进程。你需要运行 `docker container run` 命令。旧的命令为 `docker run` 但后来 Docker 添加了子命令部分,所以新版本支持下列命令:
让我们运行一个 Apache web-server 的 Docker 镜像,比如 httpd 进程。你需要运行 `docker container run` 命令。旧的命令为 `docker run`, 但后来 Docker 添加了子命令部分,所以新版本支持<ruby>附属命令<rt>below command</rt></ruby> -
```
root@kerneltalks # docker container run -d -p 80:80 httpd
@ -28,18 +30,16 @@ Status: Downloaded newer image for httpd:latest
c46f2e9e4690f5c28ee7ad508559ceee0160ac3e2b1688a61561ce9f7d99d682
```
Docker 的 `run` 命令将镜像名作为强制参数,另外还有很多可选参数。常用的参数有 -
Docker 的 `run` 命令将镜像名作为强制参数,另外还有很多可选参数。常用的参数有
* `-d` : Detach container from current shell
* `-p X:Y` : Bind container port Y with hosts port X
* `--name` : Name your container. If not used, it will be assigned randomly generated name
* `-e` : Pass environmental variables and their values while starting container
* `-d`:从当前 shell 脱离容器
* `-p X:Y`:绑定容器的端口 Y 到宿主机的端口 X
* `--name`:命名你的容器。如果未指定,它将被赋予随机生成的名字
* `-e`:当启动容器时传递环境变量及其值
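把这些参数组合起来使用,一个带命名、端口映射和环境变量的启动命令大致如下(容器名 `my-httpd` 和变量 `MYVAR` 仅作演示):

```
root@kerneltalks # docker container run -d -p 8080:80 --name my-httpd -e MYVAR=value httpd
```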
通过以上输出你可以看到,我们将 `httpd` 作为镜像名来运行容器。接着本地镜像没有找到Docker 引擎就从 Docker Hub 拉取了它。注意,它下载了镜像 `httpd:latest`,其中 `:` 后面跟着版本号。如果你需要运行特定版本的容器你可以在镜像名后面注明版本名。如果不提供版本名Docker 引擎会自动拉取最新的版本。
通过以上输出你可以看到,我们将 `httpd` 作为镜像名来运行容器。接着本地镜像没有找到Docker 引擎从 Docker Hub 拉取了它。注意,它下载了镜像 **httpd:latest**, 其中 : 后面跟着版本号。如果你需要运行特定版本的容器你可以在镜像名后面注明版本名。如果不提供版本名Docker 引擎会自动拉取最新的版本。
输出的最后一行显示了你新运行的 httpd 容器的特有 ID。
输出的最后一行显示了你新运行的 httpd 容器的唯一 ID。
### 如何列出所有运行中的 Docker 容器?
@ -51,9 +51,9 @@ CONTAINER ID IMAGE COMMAND CREATED
c46f2e9e4690 httpd "httpd-foreground" 11 minutes ago Up 11 minutes 0.0.0.0:80->80/tcp cranky_cori
```
列出的结果是按列显示的。每一列的值分别为 -
列出的结果是按列显示的。每一列的值分别为
1. Container ID :一开始的几个字符对应你特有的容器 ID
1. Container ID :一开始的几个字符对应你的容器的唯一 ID
2. Image :你运行容器的镜像名
3. Command :容器启动后运行的命令
4. Created :创建时间
@ -61,11 +61,9 @@ c46f2e9e4690 httpd "httpd-foreground" 11 minutes ago
6. Ports :与宿主端口相连接的端口信息
7. Names :容器名(如果你没有命名你的容器,那么会随机创建)
### 如何查看 Docker 容器的历史纪录?
在第一步我们使用了 -d 参数来将容器,在它一开始运行的时候,就从当前的 shell 中离出来。在这种情况下我们不知道容器里面发生了什么。所以为了查看容器的历史纪录Docker 提供了 `logs` 命令。它采用容器名称或 ID 作为参数。
在第一步我们使用了 `-d` 参数,让容器在开始运行时就从当前的 shell 中脱离出来。在这种情况下我们不知道容器里面发生了什么。所以为了查看容器的历史纪录Docker 提供了 `logs` 命令。它采用容器名称或 ID 作为参数。
```
root@kerneltalks # docker container logs cranky_cori
@ -99,7 +97,7 @@ bin 15731 15702 0 18:35 ? 00:00:00 httpd -DFOREGROUND
root 15993 15957 0 18:59 pts/0 00:00:00 grep --color=auto -i 15702
```
在第一个输出中,列出了容器产生的进程的列表。它包含了所有细节,包括用途,<ruby>进程号<rt>pid</rt></ruby><ruby>父进程号<rt>ppid</rt></ruby>,开始时间,命令,等等。这里所有的进程号你都可以在宿主的进程表里搜索到。这就是我们在第二个命令里做得。这证明了容器确实是宿主系统中的进程。
在第一个输出中,列出了容器产生的进程的列表。它包含了所有细节,包括<ruby>用户号<rt>uid</rt></ruby>、<ruby>进程号<rt>pid</rt></ruby>、<ruby>父进程号<rt>ppid</rt></ruby>、开始时间、命令,等等。这里所有的进程号你都可以在宿主的进程表里搜索到。这就是我们在第二个命令里做的。这证明了容器确实是宿主系统中的进程。
### 如何停止 Docker 容器?
@ -128,7 +126,7 @@ CONTAINER ID IMAGE COMMAND CREATED
c46f2e9e4690 httpd "httpd-foreground" 33 minutes ago Exited (0) 2 minutes ago cranky_cori
```
有了 `-a` 参数,现在我们可以查看已停止的容器。注意这些容器的状态被标注为 <ru by>已退出<rt>exited</rt></ruby>。既然容器只是一个进程,那么用“退出”比“停止”更合适!
有了 `-a` 参数,现在我们可以查看已停止的容器。注意这些容器的状态被标注为 <ruby>已退出<rt>exited</rt></ruby>。既然容器只是一个进程,那么用“退出”比“停止”更合适!
### 如何(重新)启动 Docker 容器?
@ -145,7 +143,7 @@ c46f2e9e4690 httpd "httpd-foreground" 35 minutes ago
### 如何移除 Docker 容器?
我们使用 `rm` 命令来移容器。你不可以移除运行中的容器。移除之前需要先停止容器。你可以使用 `-f` 参数搭配 `rm` 命令来强制移除容器,但并不推荐这么做。
我们使用 `rm` 命令来移除容器。你不可以移除运行中的容器。移除之前需要先停止容器。你可以使用 `-f` 参数搭配 `rm` 命令来强制移除容器,但并不推荐这么做。
```
root@kerneltalks # docker container rm cranky_cori
@ -162,8 +160,8 @@ via: https://kerneltalks.com/virtualization/8-basic-docker-container-management-
作者:[Shrikant Lavhate][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[lonaparte](https://github.com/译者ID/lonaparte)
校对:[校对者ID](https://github.com/校对者ID)
译者:[lonaparte](https://github.com/lonaparte)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,111 @@
Linux 中一种友好的 find 替代工具
======
> fd 命令提供了一种简单直白的搜索 Linux 文件系统的方式。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
[fd][1] 是一个超快的、基于 [Rust][2] 的 Unix/Linux `find` 命令替代品。它没有提供 `find` 的全部强大功能,但是,它确实提供了足够的功能来覆盖你可能遇到的 80% 的情况。诸如良好的规划和方便的语法、彩色输出、智能大小写、正则表达式以及并行命令执行等特性使 `fd` 成为一个非常有能力的后继者。
### 安装
进入 [fd][1] 的 GitHub 页面,查看安装部分。它涵盖了如何在 [macOS][3]、[Debian/Ubuntu][4]、[Red Hat][5] 和 [Arch Linux][6] 上安装该程序。安装完成后,你可以通过运行帮助来获得所有可用命令行选项的完整概述:通过 `fd -h` 获取简明帮助,或者通过 `fd --help` 获取更详细的帮助。
### 简单搜索
`fd` 旨在帮助你轻松找到文件系统中的文件和文件夹。你可以用 `fd` 带上一个参数执行最简单的搜索,该参数就是你要搜索的任何东西。例如,假设你想要找一个 Markdown 文档,其中包含单词 `services` 作为文件名的一部分:
```
$ fd services
downloads/services.md
```
如果仅带一个参数调用,那么 `fd` 会递归地搜索当前目录,以查找与该参数匹配的任何文件和/或目录。使用内置的 `find` 命令的等效搜索如下所示:
```
$ find . -name '*services*'
downloads/services.md
```
如你所见,`fd` 要简单得多,并需要更少的输入。在我心中用更少的输入做更多的事情总是对的。
### 文件和文件夹
您可以使用 `-t` 参数将搜索范围限制为文件或目录,后面跟着代表你要搜索的内容的字母。例如,要查找当前目录中文件名中包含 `services` 的所有文件,可以使用:
```
$ fd -tf services
downloads/services.md
```
以及,找到当前目录中文件名中包含 `services` 的所有目录:
```
$ fd -td services
applications/services
library/services
```
如何在当前文件夹中列出所有带 `.md` 扩展名的文档?
```
$ fd .md
administration/administration.md
development/elixir/elixir_install.md
readme.md
sidebar.md
linux.md
```
从输出中可以看到,`fd` 不仅可以找到并列出当前文件夹中的文件,还可以在子文件夹中找到文件。很简单。
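作为对照,可以在一个临时目录里验证 `find` 的等价写法(目录和文件仅为演示):

```shell
# 在临时目录中创建演示文件,然后用 find 做与 `fd .md` 大致等价的搜索
dir=$(mktemp -d)
touch "$dir/readme.md" "$dir/notes.md" "$dir/script.sh"
find "$dir" -name '*.md' | sort
```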
你甚至可以使用 `-H` 参数来搜索隐藏文件:
```
$ fd -H sessions .
.bash_sessions
```
### 指定目录
如果你想搜索一个特定的目录,这个目录的名字可以作为第二个参数传给 `fd`
```
$ fd passwd /etc
/etc/default/passwd
/etc/pam.d/passwd
/etc/passwd
```
在这个例子中,我们告诉 `fd` 我们要在 `/etc` 目录中搜索 `passwd` 这个单词的所有实例。
### 全局搜索
如果你知道文件名的一部分,但不知道文件夹怎么办?假设你下载了一本关于 Linux 网络管理的书,但你不知道它的保存位置。没有问题:
```
$ fd Administration /
/Users/pmullins/Documents/Books/Linux/Mastering Linux Network Administration.epub
```
### 总结
`fd``find` 命令的极好的替代品,我相信你会和我一样发现它很有用。要了解该命令的更多信息,只需浏览手册页。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/friendly-alternative-find
作者:[Patrick H. Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/pmullins
[1]:https://github.com/sharkdp/fd
[2]:https://www.rust-lang.org/en-US/
[3]:https://en.wikipedia.org/wiki/MacOS
[4]:https://www.ubuntu.com/community/debian
[5]:https://www.redhat.com/en
[6]:https://www.archlinux.org/

View File

@ -0,0 +1,133 @@
A summer reading list for open organization enthusiasts
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list.jpg?itok=O3GvU1gH)
The books on this year's open organization reading list crystallize so much of what makes "open" work: Honesty, authenticity, trust, and the courage to question those status quo arrangements that prevent us from achieving our potential by working powerfully together.
These nine books—each one a recommendation from a member of our community—represent merely the beginning of an important journey toward greater and better openness.
But they sure are a great place to start.
### Radical Candor
**by Kim Scott** (recommended by [Angela Robertson][1])
Do you avoid conflict? Love it? Or are you somewhere in between?
Wherever you are on the spectrum, Kim Scott gives you a set of tools for improving your ability to speak your truth in the workplace.
The book is divided into two parts: Part 1 is Scott's perspective on giving feedback, including handling the conflict that might be associated with it. Part 2 focuses on tools and techniques that she recommends.
Radical candor is most impactful for managers when it comes to evaluating and communicating feedback about employee performance. In Chapter 3, "Understand what motivates each person on your team," Scott explains how we can employ radical candor when assessing employees. Included is an explanation of how to have constructive conversations about our assessments.
I also appreciate that Scott spends a few pages sharing her perspective on how gender politics can impact work. With all the emphasis on diversity and inclusion, especially in the tech sector, including this topic in the book is another reason to read.
### Powerful
**by Patty McCord** (recommended by [Jeff Mackanic][2])
Powerful is an inspiring leadership book by Patty McCord, the former chief talent officer at Netflix. It's a fast-paced book with many great examples drawn from the author's career at Netflix.
One of the key characteristics of an open organization is collaboration, and readers will learn a good deal from McCord as she explains a few of Netflix's core practices that can help any company be more collaborative.
For McCord, collaboration clearly begins with honesty. For example, she writes, "We wanted people to practice radical honesty: telling one another, and us, the truth in a timely fashion and ideally face to face." She also explains how, at Netflix, "We wanted people to have strong, fact-based opinions and to debate them avidly and test them rigorously."
This is a wonderful book that will inspire the reader to look at leadership through a new lens.
### The Code of Trust
**by Robin Dreeke** (recommended by [Ron McFarland][3])
Author Robin Dreeke was an FBI agent, which gave him experience getting information from total strangers. To do that, he had to get people to be open to him.
His experience led to this book, which offers five rules he calls "The Code of Trust." Put simply, the rules are: 1) Suspend your ego or pride when you meet someone for the first time, 2) Avoid being judgmental about that person, 3) Validate the person's position and feelings, 4) Honor basic reason, and 5) Be generous and encourage building the value of the relationship.
Dreeke argues that you can achieve the above by 1) Aligning your goals with others' after learning what their goals are, 2) Understanding the power of context and their situations, 3) Crafting the meeting to get them to open up to you, and 4) Connecting with deep communication (something over and above language that includes feelings as well).
The book teaches how to do the above, so I learned a great deal. Overall, though, it makes some important points for anyone interested in open organizations. If people are cooperative, engaged, interactive, and open, an organization with many outside contributors can be very successful. But if people are uninterested, non-cooperative, protective, reluctant to interact, and closed, an organization will suffer.
### Team of Teams
**by Gen. Stanley McChrystal, Chris Fussell, and Tantum Collins** (recommended by [Matt Micene][4])
Does the highly specialized and hierarchical United States military strike you as a source for advice on building agile, integrated, highly disparate teams? This book traces General McChrystal's experiences transforming a team "moving from playing football to basketball, and finding that habits and preconceptions had to be discarded along with pads and cleats."
With lives literally on the line, circumstances forced McChrystal's Joint Special Operations Task Force through some radical changes. But as much as this book takes place during a war, it's not a story about a war. It's a story that traces Frederick Winslow Taylor's legacy and impact on the way we think about operational efficiency. It's about the radical restructuring of communications in a siloed organization. It distinguishes the "complex" and the "complicated," and explains the different forces those two concepts exert on organizations. Readers will note many themes that resonate with open organization thinking—like resilience thinking, the OODA loop, systems thinking, and empowered execution in leadership.
Perhaps most importantly, you'll see more than discourse and discussion on these topics. You'll get to see an example of a highly siloed organization successfully changing its culture and embracing a more transparent and "organic" system of organization that fostered success.
### Liminal Thinking
**by Dave Gray** (recommended by [Angela Robertson][1])
When I read this book's title, the word "liminal" throws me every time. I think "limit." But as Dave Gray patiently explains, "The word liminal comes from the Latin root limen, which means threshold." Gray shares his perspective on ways that readers can push past the boundaries of their thinking to become more creative, impactful leaders.
I love how Gray quickly explains how beliefs impact our lives. We can reframe beliefs, he says, if we're willing to stop clinging to them. The concise text means that you can read and reread this book as you work to implement the practices for enacting change that Gray provides.
The book is divided into two parts: Principles and Practices. After describing each of the six principles and nine practices, Gray offers a short exercise you can complete. Throughout the book are also great visuals and quotes to ensure you're staying engaged.
Read this book if you're looking for fresh ideas about how to manage change.
### Orbiting the Giant Hairball
**by Gordon MacKenzie** (recommended by [Allison Matlack][5])
Sometimes—even in open organizations—we can struggle to maintain our creativity and authenticity in the face of the bureaucratic processes that live at the heart of every company of a certain size. Gordon MacKenzie offers a refreshing alternative to corporate normalcy in this charming book that has been something of a cult classic since it was self-published in the 1980s.
There's a masterpiece in each of us, MacKenzie posits—one that is singular and unique. We can choose to paint by the corporate numbers, or we can find real joy in using bold strokes to create an original work of art.
### Tribal Leadership
**by Dave Logan, John King, and Halee Fischer-Wright** (recommended by [Chris Baynham-Hughes][6])
Too often, technology rather than culture is an organization's starting point for transformation, innovation, and speed to market. I've lost count of the times I've used this book to frame conversations around company culture and challenge leaders on what they are doing to foster innovation and loyalty, and to create a workplace in which people want to work. It's been a game-changer for me.
Tribal Leadership is essential reading for anybody interested in workplace culture or a leadership role—especially those wanting to develop open, innovative, and collaborative cultures. It provides an evidence-based approach to developing corporate culture detailing: 1) five distinct stages of tribal culture, 2) a framework to develop yourself and others as tribal leaders, and 3) characteristics and coaching tips to ensure practitioners can identify the levels each individual is at and nudge them to the next level. Each chapter presents a case study narrative before identifying coaching tips and summarizing key points. I found it enjoyable to read and easy to remember.
### Wikipedia and the Politics of Openness
**by Nathaniel Tkacz** (recommended by [Bryan Behrenshausen][7])
This thing we call "open" isn't something natural or eternal—some kind of fixed and immutable essence or quality that somehow exists outside time. It's flexible, contingent, context-specific, and the site of so much negotiation and contestation. What does "open" mean to and for the parties most invested in the term? And what happens when we organize groups and institutions around those very definitions? What (and who) do they enable? And what (and who) do they preclude?
Tkacz explores these questions with historical richness and critical flair by examining one of the world's largest and most notable open organizations: Wikipedia, that paragon of ostensibly participatory and collaborative behavior. Tkacz is perhaps less sanguine: "While the force of the open must be acknowledged, the real energy of the people who rally behind it, the way innumerable projects have been transformed in its name, the new projects and positive outcomes it has produced—I suggest that the concept itself has some crucial problems," he writes. Read on to see if you agree.
### WTF? What's the Future and Why It's Up to Us
**by Tim O'Reilly** (recommended by [Jason Hibbets][8])
Since I first saw Tim O'Reilly speak at a conference many years ago, I've always felt he had a good grasp of what's happening not only in open source but also in the broader space of digital technology. O'Reilly possesses the great ability to read the tea leaves, to make connections, and (based on those observations), to "predict" potential outcomes. In the book, he calls this map making.
While this book is about what the future could hold (with a particular filter on the impacts of artificial intelligence), it really boils down to the fact that humans are shaping the future. The book opens with a pretty extensive history of free and open source software, which I think many in the community will enjoy. Then it dives directly into the race for automated vehicles—and why Uber, Lyft, Tesla, and Google are all pushing to win.
And closely related to open organizations, the book description posted on [Harper Collins][9] poses the following questions:
* What will happen to business when technology-enabled networks and marketplaces are better at deploying talent than traditional companies?
* How should companies organize themselves to take advantage of these new tools?
As many of our readers know, the future will be based on open source. O'Reilly provides you with some thought-provoking ideas on how AI and automation are closer than you might think.
Do yourself a favor. Turn to your favorite AI-driven home automation unit and say: "Order Tim O'Reilly 'What's the Future.'"
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/6/summer-reading-2018
作者:[Bryan Behrenshausen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/remyd
[1]:https://opensource.com/users/arobertson98
[2]:https://opensource.com/users/mackanic
[3]:https://opensource.com/users/ron-mcfarland
[4]:https://opensource.com/users/matt-micene
[5]:https://opensource.com/users/amatlack
[6]:https://opensource.com/users/onlychrisbh
[7]:https://opensource.com/users/bbehrens
[8]:https://opensource.com/users/jhibbets
[9]:https://www.harpercollins.com/9780062565716/wtf/

View File

@ -0,0 +1,90 @@
How To Record Everything You Do In Terminal
======
![](https://www.ostechnix.com/wp-content/uploads/2017/03/Record-Everything-You-Do-In-Terminal-720x340.png)
A few days ago, we published a guide that explained how to [**save commands in the terminal itself and use them on demand**][1]. It is very useful for those who don't want to memorize lengthy Linux commands. Today, in this guide, we are going to see how to record everything you do in the Terminal using the **script** command. You might have run a command, created a directory, or installed an application in the Terminal. The script command simply saves whatever you do there. You can then review it if you want to know what you did a few hours or a few days ago. I know, I know, we can use the UP/DOWN arrow keys or the history command to view previously run commands. However, you can't view the output of those commands. The script command, by contrast, records and displays complete terminal session activity.
The script command creates a typescript of everything you do in the Terminal. It doesn't matter whether you install an application, create a directory or file, or remove a folder. Everything will be recorded, including the commands and their respective outputs. This command will be helpful for anyone who wants a hard-copy record of an interactive session, for example as proof of an assignment. Whether you're a student or a tutor, you can make a copy of everything you do in the Terminal along with all the output.
### Record Everything You Do In Terminal using script command in Linux
The script command comes pre-installed on most modern Linux operating systems, so we need not bother with installation.
Let us go ahead and see how to use it in real time.
Run the following command to start the Terminal session recording.
```
$ script -a my_terminal_activities
```
Here, the **-a** flag appends the output to the file (or typescript), retaining any prior contents. The above command records everything you do in the Terminal and appends the output to a file called **my_terminal_activities** in your current working directory.
Sample output would be:
```
Script started, file is my_terminal_activities
```
Now, run some random Linux commands in your Terminal.
```
$ mkdir ostechnix
$ cd ostechnix/
$ touch hello_world.txt
$ cd ..
$ uname -r
```
After running these commands, end the script session with the command:
```
$ exit
```
**Sample output:**
```
exit
Script done, file is my_terminal_activities
```
As you can see, the Terminal activities have been saved in a file called **my_terminal_activities** in the current working directory.
To view your Terminal activities, just open this file in any editor or simply display it using the cat command.
```
$ cat my_terminal_activities
```
**Sample output:**
As you can see in the above output, the script command has recorded all my Terminal activities, including the start and end times of the session. Awesome, isn't it? The reason to use the script command is that it records not just the commands but their output as well. To put it simply, the script command records everything you do in the Terminal.
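If you only need to capture the output of a single command rather than a whole interactive session, the util-linux version of script also accepts a **-c** option. Here is a quick sketch (the file name is just an example, and flags differ on the BSD/macOS version of script):

```
# Record one command non-interactively; script allocates a pseudo-terminal,
# so the typescript captures output exactly as an interactive session would.
script -c "echo recorded by script" demo_session.txt

# The typescript now contains the command's output along with the
# "Script started"/"Script done" timestamps.
cat demo_session.txt
```

This form is handy in shell scripts and CI jobs, where a program expects a real terminal but there is none.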
### Conclusion
Like I said, the script command is useful for students, teachers, and any Linux user who wants to keep a record of their Terminal activities. Even though there are many CLI and GUI tools to do this, the script command is the easiest and quickest way to record Terminal session activity.
And, that's all. Hope this helps. If you find this guide useful, please share it on your social and professional networks and **support OSTechNix**.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/record-everything-terminal/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/save-commands-terminal-use-demand/

View File

@ -1,3 +1,6 @@
Translating by MjSeven
Migrating to Linux: Installing Software
======

View File

@ -1,135 +0,0 @@
Translating
Make your first contribution to an open source project
============================================================
> There's a first time for everything.
![women programming](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G "women programming")
Image by : [WOCinTech Chat][16]. Modified by Opensource.com. [CC BY-SA 4.0][17]
It's a common misconception that contributing to open source is difficult. You might think, "Sometimes I can't even understand my own code; how am I supposed to understand someone else's?"
Relax. Until last year, I thought the same thing. Reading and understanding someone else's code and writing your own on top of that can be a daunting task, but with the right resources, it isn't as hard as you might think.
The first step is to choose a project. This decision can be instrumental in turning a newbie programmer into a seasoned open sourcer.
Many amateur programmers interested in open source are advised to check out [Git][18], but that is not the best way to start. Git is maintained by uber-geeks with years of software development experience. It is a good place to find an open source project to contribute to, but it's not beginner-friendly. Most devs contributing to Git have enough experience that they do not need resources or detailed documentation. In this article, I'll provide a checklist of beginner-friendly features and some tips to make your first open source contribution easy.
### Understand the product
Before contributing to a project, you should understand how it works. To understand it, you need to try it for yourself. If you find the product interesting and useful, it is worth contributing to.
Too often, beginners try to contribute to a project without first using the software. They then get frustrated and give up. If you don't use the software, you can't understand how it works. If you don't know how it works, how can you fix a bug or write a new feature?
Remember: Try it, then hack it.
### Check the project's status
How active is the project?
If you send a pull request to an unmaintained or dormant project, your pull request (or PR) may never be reviewed or merged. Look for projects with lots of activity; that way you will get immediate feedback on your code and your contributions will not go to waste.
Here's how to tell if a project is active:
* **Number of contributors:** A growing number of contributors indicates the developer community is interested and willing to accept new contributors.
* **Frequency of commits:** Check the most recent commit date. If it was within the last week, or even month or two, the project is being maintained.
* **Number of maintainers:** A higher number of maintainers means more potential mentors to guide you.
* **Activity level in the chat room/IRC:** A busy chat room means quick replies to your queries.
### Resources for beginners
Coala is an example of an open source project that has its own resources for tutorials and documentation, where you can also access its API (every class and method). The site also features an attractive UI that makes you want to read more.
**Documentation:** Developers of all levels need reliable, well-maintained documentation to understand the details of a project. Look for projects that offer solid documentation on [GitHub][19] (or wherever it is hosted) and on a separate site like [Read the Docs][20], with lots of examples that will help you dive deep into the code.
### [coala-newcomers_guide.png][2]
![Coala Newcomers' Guide screen](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala-newcomers_guide.png?itok=G7mfPbXN "Coala Newcomers' Guide screen")
**Tutorials:** Tutorials that explain how to add features to a project are helpful for beginners (however, you may not find them for all projects). For example, coala offers [tutorials for writing  _bears_][21]  (Python wrappers for linting tools to perform code analysis).
### [coala_ui.png][3]
![Coala UI](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala_ui.png?itok=LR02629W "Coala User Interface screenshot")
**Labeled issues:** For beginners who are just figuring out how to choose their first project, selecting an issue can be an even tougher task. Issues labeled "difficulty/low," "difficulty/newcomer," "good first issue," and "low-hanging fruit" can be perfect for newbies.
### [coala_labeled_issues.png][4]
![Coala labeled issues](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala_labeled_issues.png?itok=74qSjG_T "Coala labeled issues")
### Miscellaneous factors
### [ci_logs.png][5]
![CI user pipeline log](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/ci_logs.png?itok=J3V8gbc7 "CI user pipeline log")
* **Maintainers' attitudes toward new contributors:** In my experience, most open sourcers are eager to help newcomers onboard their projects. However, you may also encounter some who are less welcoming (maybe even a bit rude) when you ask for help. Don't let them discourage you. Just because someone has more experience doesn't give them the right to be rude. There are plenty of others out there who want to help.
* **Review process/structure:** Your PR will go through a number of reviews and changes by experienced developers and your peers—that's how you learn the most about software development. A project with a stringent review process enables you to grow as a developer by writing production-grade code.
* **A robust CI pipeline:** Open source projects introduce beginners to continuous integration and deployment services. A robust CI pipeline will help you learn how to read and make sense of CI logs. It will also give you experience dealing with failing test cases and code coverage issues.
* **Participation in coding programs (e.g., [Google Summer of Code][1]):** Participating organizations demonstrate a willingness to commit to the long-term development of a project. They also provide an opportunity for newcomers to gain real-world development experience and get paid for it. Most of the organizations that participate in such programs welcome newbies.
### 7 beginner-friendly organizations
* [coala (Python)][7]
* [oppia (Python, Django)][8]
* [DuckDuckGo (Perl, JavaScript)][9]
* [OpenGenus (JavaScript)][10]
* [Kinto (Python, JavaScript)][11]
* [FOSSASIA (Python, JavaScript)][12]
* [Kubernetes (Go)][13]
### About the author
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/img_20180309_001440858.jpg?itok=tG8yvrJF)][22] Palash Nigam - I'm a computer science undergrad from India who loves to hack on open source software and spend most of my time on GitHub. My current interests include backend web development, blockchains, and all things python.[More about me][14]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/get-started-open-source-project
作者:[Palash Nigam][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/palash25
[1]:https://en.wikipedia.org/wiki/Google_Summer_of_Code
[2]:https://opensource.com/file/391211
[3]:https://opensource.com/file/391216
[4]:https://opensource.com/file/391226
[5]:https://opensource.com/file/391221
[6]:https://opensource.com/article/18/4/get-started-open-source-project?rate=i_d2neWpbOIJIAEjQKFExhe0U_sC6SiQgkm3c7ck8IM
[7]:https://github.com/coala/coala
[8]:https://github.com/oppia/oppia
[9]:https://github.com/duckduckgo/
[10]:https://github.com/OpenGenus/
[11]:https://github.com/kinto
[12]:https://github.com/fossasia/
[13]:https://github.com/kubernetes
[14]:https://opensource.com/users/palash25
[15]:https://opensource.com/user/212436/feed
[16]:https://www.flickr.com/photos/wocintechchat/25171528213/
[17]:https://creativecommons.org/licenses/by/4.0/
[18]:https://git-scm.com/
[19]:https://github.com/
[20]:https://readthedocs.org/
[21]:http://api.coala.io/en/latest/Developers/Writing_Linter_Bears.html
[22]:https://opensource.com/users/palash25
[23]:https://opensource.com/users/palash25
[24]:https://opensource.com/users/palash25
[25]:https://opensource.com/article/18/4/get-started-open-source-project#comments
[26]:https://opensource.com/tags/web-development

View File

@ -1,111 +0,0 @@
Translating by MjSeven
How to share files between Linux and Windows
======
![](https://images.idgesg.net/images/article/2018/04/cats-eating-100755724-large.jpg)
Many people today work on mixed networks, with both Linux and Windows systems playing important roles. Sharing files between the two can be critical at times and is surprisingly easy with the right tools. With fairly little effort, you can copy files from Windows to Linux or Linux to Windows. In this post, we'll look at what is needed to configure your Linux and Windows system to allow you to easily move files from one OS to the other.
### Copying files between Linux and Windows
The first step toward moving files between Windows and Linux is to download and install a tool such as PuTTY's pscp. You can get PuTTY from [putty.org][1] and set it up on your Windows system easily. PuTTY comes with a terminal emulator (putty) as well as tools like **pscp** for securely copying files between Linux and Windows systems. When you go to the PuTTY site, you can elect to install all of the tools or pick just the ones you want to use by choosing either the installer or the individual .exe files.
You will also need to have an SSH server set up and running on your Linux system. This allows it to support the client (Windows side) connection requests. If you don't already have an SSH server set up, the following steps should work on Debian systems (Ubuntu, etc.).
```
sudo apt update
sudo apt install openssh-server
sudo service ssh start
```
For Red Hat and related Linux systems, use similar commands:
```
sudo yum install openssh-server
sudo systemctl start sshd
```
Note that if you are running a firewall such as ufw, you may have to open port 22 to allow the connections.
Using the **pscp** command, you can then move files from Windows to Linux or vice versa. The syntax is quite straightforward with its "copy from to" commands.
#### Windows to Linux
In the command shown below, we are copying a file from a user's account on a Windows system to the /tmp directory on the Linux system.
```
C:\Program Files\PuTTY>pscp \Users\shs\copy_me.txt shs@192.168.0.18:/tmp
shs@192.168.0.18's password:
copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
```
#### Linux to Windows
Moving the files from Linux to Windows is just as easy. Just reverse the arguments.
```
C:\Program Files\PuTTY>pscp shs@192.168.0.18:/tmp/copy_me.txt \Users\shs
shs@192.168.0.18's password:
copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
```
The process can be made a little smoother and easier if 1) pscp is in your Windows search path and 2) your Linux system is in your Windows hosts file.
#### Windows search path
If you install the PuTTY tools with the PuTTY installer, you will probably find that **C:\Program files\PuTTY** is on your Windows search path. You can check to see if this is the case by typing **echo %path%** in a Windows command prompt (type "cmd" in the search bar to open the command prompt). If it is, you don't need to be concerned with where you are in the file system relative to the pscp executable. Moving into the folder containing the files you want to move will likely prove easier.
```
C:\Users\shs>pscp copy_me.txt shs@192.168.0.18:/tmp
shs@192.168.0.18's password:
copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
```
#### Updating your Windows hosts file
Here's the other little fix. With administrator rights, you can add your Linux system to the Windows host file (C:\Windows\System32\drivers\etc\hosts) and then use the host name in place of its IP address. Keep in mind that this will not work indefinitely if the IP address on your Linux system is dynamically assigned.
```
C:\Users\shs>pscp copy_me.txt shs@stinkbug:/tmp
shs@192.168.0.18's password:
hosts | 0 kB | 0.8 kB/s | ETA: 00:00:00 | 100%
```
Note that Windows host files are formatted like the /etc/hosts file on Linux systems — IP address, white space and host name. Comments are prefaced with pound signs.
```
# Linux systems
192.168.0.18 stinkbug
```
#### Those pesky line endings
Keep in mind that lines in text files on Windows end with both a carriage return and a linefeed. The pscp tool will not remove the carriage returns to make the files look like Linux text files. Instead, it simply copies the files intact. You might consider installing the **tofrodos** package to enable you to use the **fromdos** and **todos** commands on your Linux system to adjust the files you are moving between platforms.
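If you'd rather not install an extra package, the carriage returns can also be handled with standard tools. A quick sketch using **tr** and GNU **sed** (the file names here are just placeholders for illustration):

```
# Make a file with Windows-style CRLF line endings.
printf 'line one\r\nline two\r\n' > dos.txt

# Strip the carriage returns to get Unix-style LF endings (like fromdos).
tr -d '\r' < dos.txt > unix.txt

# Going the other way (like todos): re-insert a CR before each LF.
# Note: the \r in the replacement is a GNU sed extension.
sed 's/$/\r/' unix.txt > dos_again.txt
```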
### Sharing folders between Windows and Linux
Sharing folders is an entirely different operation. You end up mounting a Windows directory on your Linux system or a Linux directory on your Windows box so that both systems can use the same set of files rather than copying the files from one system to the other. One of the best tools for this is Samba, which emulates Windows protocols and runs on the Linux system.
Once Samba is installed, you will be able to mount a Linux folder on Windows or a Windows folder on Linux. This is, of course, very different than copying files as described earlier in this post. Instead, each of the two systems involved will have access to the same files at the same time.
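As a sketch of what the Linux side of that setup looks like (assuming Samba is installed, e.g. via **sudo apt install samba**, and using the share name, path, and user below purely as placeholder examples), a minimal share definition in /etc/samba/smb.conf might be:

```
[shared]
   # Folder on the Linux system to expose to Windows machines
   path = /home/shs/shared
   browseable = yes
   read only = no
   valid users = shs
```

After editing the file, you would typically add a Samba password for the user with **sudo smbpasswd -a shs** and restart the service (e.g. **sudo systemctl restart smbd**) before mounting the share from Windows.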
More tips on choosing the right tool for sharing files between Linux and Windows systems are available [here][2].
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3269189/linux/sharing-files-between-linux-and-windows.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.putty.org
[2]:https://www.infoworld.com/article/2617683/linux/linux-moving-files-between-unix-and-windows-systems.html
[3]:https://www.facebook.com/NetworkWorld/
[4]:https://www.linkedin.com/company/network-world

View File

@ -1,108 +0,0 @@
pinewall translating
An introduction to cryptography and public key infrastructure
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/locks_keys_bridge_paris.png?itok=Bp0dsEc9)
Secure communication is quickly becoming the norm for today's web. In July 2018, Google Chrome plans to [start showing "not secure" notifications][1] for **all** sites transmitted over HTTP (instead of HTTPS). Mozilla has a [similar plan][2]. While cryptography is becoming more commonplace, it has not become easier to understand. [Let's Encrypt][3] designed and built a wonderful solution to provide and periodically renew free security certificates, but if you don't understand the underlying concepts and pitfalls, you're just another member of a large group of [cargo cult][4] programmers.
### Attributes of secure communication
The intuitively obvious purpose of cryptography is confidentiality: a message can be transmitted without prying eyes learning its contents. For confidentiality, we encrypt a message: given a message, we pair it with a key and produce a meaningless jumble that can only be made useful again by reversing the process using the same key (thereby decrypting it). Suppose we have two friends, [Alice and Bob][5], and their nosy neighbor, Eve. Alice can encrypt a message like "Eve is annoying", send it to Bob, and never have to worry about Eve snooping on her.
For truly secure communication, we need more than confidentiality. Suppose Eve gathered enough of Alice and Bob's messages to figure out that the word "Eve" is encrypted as "Xyzzy". Furthermore, Eve knows Alice and Bob are planning a party and Alice will be sending Bob the guest list. If Eve intercepts the message and adds "Xyzzy" to the end of the list, she's managed to crash the party. Therefore, Alice and Bob need their communication to provide integrity: a message should be immune to tampering.
We have another problem though. Suppose Eve watches Bob open an envelope marked "From Alice" with a message inside from Alice reading "Buy another gallon of ice cream." Eve sees Bob go out and come back with ice cream, so she has a general idea of the message's contents even if the exact wording is unknown to her. Bob throws the message away, Eve recovers it, and then every day for the next week drops an envelope marked "From Alice" with a copy of the message in Bob's mailbox. Now the party has too much ice cream and Eve goes home with free ice cream when Bob gives it away at the end of the night. The extra messages are confidential, and their integrity is intact, but Bob has been misled as to the true identity of the sender. Authentication is the property of knowing that the person you are communicating with is in fact who they claim to be.
Information security has [other attributes][6], but confidentiality, integrity, and authentication are the three traits you must know.
### Encryption and ciphers
What are the components of encryption? We need a message which we'll call the plaintext. We may need to do some initial formatting to the message to make it suitable for the encryption process (padding it to a certain length if we're using a block cipher, for example). Then we take a secret sequence of bits called the key. A cipher then takes the key and transforms the plaintext into ciphertext. The ciphertext should look like random noise and only by using the same cipher and the same key (or as we will see later in the case of asymmetric ciphers, a mathematically related key) can the plaintext be restored.
The cipher transforms the plaintext's bits using the key's bits. Since we want to be able to decrypt the ciphertext, our cipher needs to be reversible too. We can use [XOR][7] as a simple example. It is reversible and is [its own inverse][8] (P ^ K = C; C ^ K = P), so it can both encrypt plaintext and decrypt ciphertext. A trivial XOR can be used for encryption in a one-time pad, but that is generally not [practical][9]. However, it is possible to combine XOR with a function that generates an arbitrary stream of random data from a single key. Modern ciphers like AES and ChaCha20 do exactly that.
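As a rough Python sketch of that idea (a toy for illustration, not a secure cipher: a hash-plus-counter generator stands in for a real keystream function):
```
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Expand one key into an arbitrary stream of pseudorandom bytes
    # by hashing the key with a counter (a toy generator, not secure).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

ciphertext = xor_cipher(b"shared key", b"Eve is annoying")
assert xor_cipher(b"shared key", ciphertext) == b"Eve is annoying"  # P ^ K ^ K = P
```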
We call any cipher that uses the same key to both encrypt and decrypt a symmetric cipher. Symmetric ciphers are divided into stream ciphers and block ciphers. A stream cipher runs through the message one bit or byte at a time. Our XOR cipher is a stream cipher, for example. Stream ciphers are useful if the length of the plaintext is unknown (such as data coming in from a pipe or socket). [RC4][10] is the best-known stream cipher but it is vulnerable to several different attacks, and the newest version (1.3) of the TLS protocol (the "S" in "HTTPS") does not even support it. [Efforts][11] are underway to create new stream ciphers with some candidates like [ChaCha20][12] already supported in TLS.
A block cipher takes a fixed-size block and encrypts it with a fixed-size key. The current king of the hill in the block cipher world is the [Advanced Encryption Standard][13] (AES), and it has a block size of 128 bits. That's not very much data, so block ciphers have a [mode][14] that describes how to apply the cipher's block operation across a message of arbitrary size. The simplest mode is [Electronic Code Book][15] (ECB), which takes the message, splits it into blocks (padding the message's final block if necessary), and then encrypts each block with the key independently.
![](https://opensource.com/sites/default/files/uploads/ecb_encryption.png)
You may spot a problem here: if the same block appears multiple times in the message (a phrase like "GET / HTTP/1.1" in web traffic, for example) and we encrypt it using the same key, we'll get the same result. The appearance of a pattern in our encrypted communication makes it vulnerable to attack.
Thus there are more advanced modes such as [Cipher Block Chaining][16] (CBC) where the result of each block's encryption is XORed with the next block's plaintext. The very first block's plaintext is XORed with an initialization vector of random numbers. There are many other modes each with different advantages and disadvantages in security and speed. There are even modes, such as Counter (CTR), that can turn a block cipher into a stream cipher.
![](https://opensource.com/sites/default/files/uploads/cbc_encryption.png)
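The difference in pattern leakage can be sketched in Python with a toy stand-in for a real block cipher (XOR-with-key, for illustration only):
```
import secrets

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher such as AES: XOR the block with
    # the key (insecure, for illustration only; it is its own inverse).
    return bytes(a ^ b for a, b in zip(block, key))

def ecb_encrypt(key: bytes, blocks: list) -> list:
    # ECB: every block is encrypted independently with the same key.
    return [toy_block_encrypt(key, b) for b in blocks]

def cbc_encrypt(key: bytes, iv: bytes, blocks: list) -> list:
    # CBC: each plaintext block is XORed with the previous ciphertext
    # block (or the IV) before encryption.
    out, prev = [], iv
    for b in blocks:
        c = toy_block_encrypt(key, bytes(x ^ y for x, y in zip(b, prev)))
        out.append(c)
        prev = c
    return out

key = secrets.token_bytes(BLOCK)
iv = secrets.token_bytes(BLOCK)
message = [b"GET / HTTP/1.1\r\n"] * 2   # the same 16-byte block, twice

assert ecb_encrypt(key, message)[0] == ecb_encrypt(key, message)[1]  # pattern leaks
cbc = cbc_encrypt(key, iv, message)
assert cbc[0] != cbc[1]                                              # pattern hidden
```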
In contrast to symmetric ciphers, there are asymmetric ciphers (also called public-key cryptography). These ciphers use two keys: a public key and a private key. The keys are mathematically related but still distinct. Anything encrypted with the public key can only be decrypted with the private key and data encrypted with the private key can be decrypted with the public key. The public key is widely distributed while the private key is kept secret. If you want to communicate with a given person, you use their public key to encrypt your message and only their private key can decrypt it. [RSA][17] is the current heavyweight champion of asymmetric ciphers.
A major downside to asymmetric ciphers is that they are computationally expensive. Can we get authentication with symmetric ciphers to speed things up? If you only share a key with one other person, yes. But that breaks down quickly. Suppose a group of people want to communicate with one another using a symmetric cipher. The group members could establish keys for each unique pairing of members and encrypt messages based on the recipient, but a group of 20 people works out to 190 pairs of members total and 19 keys for each individual to manage and secure. By using an asymmetric cipher, each person only needs to guard their own private key and have access to a listing of public keys.
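The arithmetic behind those numbers is the pairing formula n(n - 1) / 2, which can be checked directly:
```
def symmetric_keys_needed(people: int) -> int:
    # One shared key per unique pairing: n * (n - 1) / 2 pairs in total.
    return people * (people - 1) // 2

assert symmetric_keys_needed(20) == 190  # 190 keys for a group of 20...
# ...with each member managing 19 of them, versus a single private key
# per person under an asymmetric scheme.
```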
Asymmetric ciphers are also limited in the [amount of data][18] they can encrypt. Like block ciphers, you have to split a longer message into pieces. In practice then, asymmetric ciphers are often used to establish a confidential, authenticated channel which is then used to exchange a shared key for a symmetric cipher. The symmetric cipher is used for subsequent communications since it is much faster. TLS can operate in exactly this fashion.
### At the foundation
At the heart of secure communication are random numbers. Random numbers are used to generate keys and to provide unpredictability for otherwise deterministic processes. If the keys we use are predictable, then we're susceptible to attack right from the very start. Random numbers are difficult to generate on a computer, which is meant to behave in a consistent manner. Computers can gather random data from things like mouse movement or keyboard timings, but gathering that randomness (called entropy) takes significant time and involves additional processing to ensure uniform distributions. It can even involve the use of dedicated hardware (such as [a wall of lava lamps][19]). Generally, once we have a truly random value, we use it as a seed for a [cryptographically secure pseudorandom number generator][20]. Beginning with the same seed will always lead to the same stream of numbers, but what's important is that the stream of numbers descended from the seed doesn't exhibit any pattern. In the Linux kernel, [/dev/random and /dev/urandom][21] operate in this fashion: they gather entropy from multiple sources, process it to remove biases, create a seed, and can then provide the random numbers used to generate an RSA key, for example.
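In application code you rarely gather entropy yourself; you ask the operating system's CSPRNG, which Python exposes through the standard `secrets` module:
```
import secrets

# Draw key material from the OS CSPRNG (backed by /dev/urandom on Linux),
# which is seeded from the entropy sources described above.
key = secrets.token_bytes(32)    # a 256-bit symmetric key
nonce = secrets.token_bytes(12)  # per-message unpredictability

assert len(key) == 32
assert key != secrets.token_bytes(32)  # successive draws are unrelated
```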
### Other cryptographic building blocks
We've covered confidentiality, but I haven't mentioned integrity or authentication yet. For that, we'll need some new tools in our toolbox.
The first is the cryptographic hash function. A cryptographic hash function is meant to take an input of arbitrary size and produce a fixed-size output (often called a digest). If we can find any two messages that create the same digest, that's a collision, and it makes the hash function unsuitable for cryptography. Note the emphasis on "find"; if we have an infinite world of messages and a fixed-size output, there are bound to be collisions, but if we can find any two messages that collide without a monumental investment of computational resources, that's a deal-breaker. Worse still would be if we could take a specific message and could then find another message that results in a collision.
As well, the hash function should be one-way: given a digest, it should be computationally infeasible to determine what the message is. Respectively, these [requirements][22] are called collision resistance, second preimage resistance, and preimage resistance. If we meet these requirements, our digest acts as a kind of fingerprint for a message. No two people ([in theory][23]) have the same fingerprints, and you can't take a fingerprint and turn it back into a person.
If we send a message and a digest, the recipient can use the same hash function to generate an independent digest. If the two digests match, they know the message hasn't been altered. [SHA-256][24] is the most popular cryptographic hash function currently since [SHA-1][25] is starting to [show its age][26].
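A minimal sketch of that verification in Python, using the standard `hashlib` module (the guest-list message is made up for illustration):
```
import hashlib

message = b"Party guest list: Alice, Bob"
digest = hashlib.sha256(message).hexdigest()

# The recipient recomputes the digest independently; a match means the
# message arrived unaltered.
assert hashlib.sha256(message).hexdigest() == digest

# Any tampering, however small, produces a completely different digest.
assert hashlib.sha256(message + b", Xyzzy").hexdigest() != digest
```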
Hashes sound great, but what good is sending a digest with a message if someone can tamper with your message and then tamper with the digest too? We need to mix hashing in with the ciphers we have. For symmetric ciphers, we have message authentication codes (MACs). MACs come in different forms, but an HMAC is based on hashing. An [HMAC][27] takes the key K and the message M and blends them together using a hashing function H with the formula H(K + H(K + M)) where "+" is concatenation. Why this formula specifically? That's beyond this article, but it has to do with protecting the integrity of the HMAC itself. The MAC is sent along with an encrypted message. Eve could blindly manipulate the message, but as soon as Bob independently calculates the MAC and compares it to the MAC he received, he'll realize the message has been tampered with.
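Python's standard `hmac` module implements the full HMAC construction (the formula above is a simplification of it), so a sketch of Bob's check might look like this (key and message are made up):
```
import hashlib
import hmac

key = b"shared secret"
message = b"Party guest list: Alice, Bob"
tag = hmac.new(key, message, hashlib.sha256).digest()

# Bob recomputes the MAC with the shared key; compare_digest avoids
# leaking information through timing differences.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())

# Eve's blind tampering changes the message, so the MACs no longer match.
forged = message + b", Xyzzy"
assert not hmac.compare_digest(tag, hmac.new(key, forged, hashlib.sha256).digest())
```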
For asymmetric ciphers, we have digital signatures. In RSA, encryption with a public key makes something only the private key can decrypt, but the inverse is true as well and can create a type of signature. If only I have the private key and encrypt a document, then only my public key will decrypt the document, and others can implicitly trust that I wrote it: authentication. In fact, we don't even need to encrypt the entire document. If we create a digest of the document, we can then encrypt just the fingerprint. Signing the digest instead of the whole document is faster and solves some problems around the size of a message that can be encrypted using asymmetric encryption. Recipients decrypt the digest, independently calculate the digest for the message, and then compare the two to ensure integrity. The method for digital signatures varies for other asymmetric ciphers, but the concept of using the public key to verify a signature remains.
### Putting it all together
Now that we have all the major pieces, we can implement a [system][28] that has all three of the attributes we're looking for. Alice picks a secret symmetric key and encrypts it with Bob's public key. Then she hashes the resulting ciphertext and uses her private key to sign the digest. Bob receives the ciphertext and the signature, computes the ciphertext's digest and compares it to the digest in the signature he verified using Alice's public key. If the two digests are identical, he knows the symmetric key has integrity and is authenticated. He decrypts the ciphertext with his private key and uses the symmetric key Alice sent him to communicate with her confidentially using HMACs with each message to ensure integrity. There's no protection here against a message being replayed (as seen in the ice cream disaster Eve caused). To handle that issue, we would need some sort of "handshake" that could be used to establish a random, short-lived session identifier.
The cryptographic world is vast and complex, but I hope this article gives you a basic mental model of the core goals and components it uses. With a solid foundation in the concepts, you'll be able to continue learning more.
Thank you to Hubert Kario, Florian Weimer, and Mike Bursell for their help with this article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/cryptography-pki
作者:[Alex Wood][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/awood
[1]:https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html
[2]:https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/
[3]:https://letsencrypt.org/
[4]:https://en.wikipedia.org/wiki/Cargo_cult_programming
[5]:https://en.wikipedia.org/wiki/Alice_and_Bob
[6]:https://en.wikipedia.org/wiki/Information_security#Availability
[7]:https://en.wikipedia.org/wiki/XOR_cipher
[8]:https://en.wikipedia.org/wiki/Involution_(mathematics)#Computer_science
[9]:https://en.wikipedia.org/wiki/One-time_pad#Problems
[10]:https://en.wikipedia.org/wiki/RC4
[11]:https://en.wikipedia.org/wiki/ESTREAM
[12]:https://en.wikipedia.org/wiki/Salsa20
[13]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[14]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
[15]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:ECB_encryption.svg
[16]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:CBC_encryption.svg
[17]:https://en.wikipedia.org/wiki/RSA_(cryptosystem)
[18]:https://security.stackexchange.com/questions/33434/rsa-maximum-bytes-to-encrypt-comparison-to-aes-in-terms-of-security
[19]:https://www.youtube.com/watch?v=1cUUfMeOijg
[20]:https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
[21]:https://www.2uo.de/myths-about-urandom/
[22]:https://crypto.stackexchange.com/a/1174
[23]:https://www.telegraph.co.uk/science/2016/03/14/why-your-fingerprints-may-not-be-unique/
[24]:https://en.wikipedia.org/wiki/SHA-2
[25]:https://en.wikipedia.org/wiki/SHA-1
[26]:https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
[27]:https://en.wikipedia.org/wiki/HMAC
[28]:https://en.wikipedia.org/wiki/Hybrid_cryptosystem


@ -1,132 +0,0 @@
translating-----geekpi
How to use the history command in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)
As I spend more and more time in terminal sessions, it feels like I'm continually finding new commands that make my daily tasks more efficient. The GNU `history` command is one that really changed my work day.
The GNU `history` command keeps a list of all the other commands that have been run from that terminal session, then allows you to replay or reuse those commands instead of retyping them. If you are an old greybeard, you know about the power of `history`, but for us dabblers or new sysadmin folks, `history` is an immediate productivity gain.
### History 101
To see `history` in action, open a terminal program on your Linux installation and type:
```
$ history
```
Here's the response I got:
```
1  clear
2  ls -al
3  sudo dnf update -y
4  history
```
The `history` command shows a list of the commands entered since you started the session. The joy of `history` is that now you can replay any of them by using a command such as:
```
$ !3
```
The `!3` command at the prompt tells the shell to rerun the command on line 3 of the history list. I could also access that command by entering:
```
linuser@my_linux_box: !sudo dnf
```
`history` will search for the last command that matches the pattern you provided and run it.
### Searching history
You can also use `history` to rerun the last command you entered by typing `!!`. And, by pairing it with `grep`, you can search for commands that match a text pattern or, by using it with `tail`, you can find the last few commands you executed. For example:
```
$ history | grep dnf
3  sudo dnf update -y
5  history | grep dnf
$ history | tail -n 3
4  history
5  history | grep dnf
6  history | tail -n 3
```
Another way to get to this search functionality is by typing `Ctrl-R` to invoke a reverse incremental search of your command history. After typing this, the prompt changes to:
```
(reverse-i-search)`':
```
Now you can start typing a command, and matching commands will be displayed for you to execute by pressing Return or Enter.
### Changing an executed command
`history` also allows you to rerun a command with different syntax. For example, if I wanted to change my previous command `history | grep dnf` to `history | grep ssh`, I can execute the following at the prompt:
```
$ ^dnf^ssh^
```
`history` will rerun the command, but replace `dnf` with `ssh`, and execute it.
### Removing history
There may come a time when you want to remove some or all of the commands in your history file. If you want to delete a particular command, enter `history -d <line number>`. To clear the entire contents of the history file, execute `history -c`.
Your command history is stored in a file that you can modify as well. Bash shell users will find it in their home directory as `.bash_history`.
### Next steps
There are a number of other things that you can do with `history`:
* Set the size of your history buffer to a certain number of commands
* Record the date and time for each line in history
* Prevent certain commands from being recorded in history
For more information about the `history` command and other interesting things you can do with it, take a look at the [GNU Bash Manual][1].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/history-command
作者:[Steve Morris][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/smorris12
[1]:https://www.gnu.org/software/bash/manual/


@ -1,167 +0,0 @@
pinewall translating
MySQL without the MySQL: An introduction to the MySQL Document Store
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg)
MySQL can act as a NoSQL JSON Document Store so programmers can save data without having to normalize data, set up schemas, or even have a clue what their data looks like before starting to code. Starting with MySQL version 5.7 (and continuing in MySQL 8.0), developers can store JSON documents in a column of a table. By adding the new X DevAPI, you can stop embedding nasty strings of structured query language in your code and replace them with API calls that support modern programming design.
Very few developers have any formal training in structured query language (SQL), relational theory, sets, or other foundations of relational databases. But they need a secure, reliable data store. Add in a dearth of available database administrators, and things can get very messy quickly.
The [MySQL Document Store][1] allows programmers to store data without having to create an underlying schema, normalize data, or any of the other tasks normally required to use a database. A JSON document collection is created and can then be used.
### JSON data type
This is all based on the JSON data type introduced a few years ago in MySQL 5.7. This provides a roughly 1GB column in a row of a table. The data has to be valid JSON or the server will return an error, but developers are free to use that space as they want.
### X DevAPI
The old MySQL protocol is showing its age after almost a quarter-century, so a new protocol was developed called [X DevAPI][2]. It includes a new high-level session concept that allows code to scale from one server to many with non-blocking, asynchronous I/O that follows common host-language programming patterns. The focus is put on using CRUD (create, replace, update, delete) patterns while following modern practices and coding styles. Or, to put it another way, you no longer have to embed ugly strings of SQL statements in your beautiful, pristine code.
### Coding examples
A new shell, creatively called the [MySQL Shell][3], supports this new protocol. It can be used to set up high-availability clusters, check servers for upgrade readiness, and interact with MySQL servers. This interaction can be done in three modes: JavaScript, Python, and SQL.
The coding examples that follow are in the JavaScript mode of the MySQL Shell; it has a `JS>` prompt.
Here, we will log in as `dstokes` with the password `password` to the local system and a schema named `demo`. There is a pointer to the schema demo that is named `db`.
```
$ mysqlsh dstokes:password@localhost/demo
JS> db.createCollection("example")
JS> db.example.add(
      {
        Name: "Dave",
        State:  "Texas",
        foo : "bar"
      }
     )
JS>
```
Above we logged into the server, connected to the `demo` schema, created a collection named `example`, and added a record, all without creating a table definition or using SQL. We can use or abuse this data as our whims desire. This is not an object-relational mapper, as there is no mapping the code to the SQL because the new protocol “speaks” at the server layer.
### Node.js supported
The new shell is pretty sweet; you can do a lot with it, but you will probably want to use your programming language of choice. The following example uses the `world_x` demo database to search for a record with the `_id` field matching "CAN." We point to the desired collection in the schema and issue a `find` command with the desired parameters. Again, theres no SQL involved.
```
var mysqlx = require('@mysql/xdevapi');
mysqlx.getSession({             //Auth to server
        host: 'localhost',
        port: '33060',
        dbUser: 'root',
        dbPassword: 'password'
}).then(function (session) {    // use world_x.country.info
     var schema = session.getSchema('world_x');
     var collection = schema.getCollection('countryinfo');
collection                      // Get row for 'CAN'
  .find("$._id == 'CAN'")
  .limit(1)
  .execute(doc => console.log(doc))
  .then(() => console.log("\n\nAll done"));
  session.close();
})
```
Here is another example in PHP that looks for "USA":
```
<?PHP
// Connection parameters
  $user = 'root';
  $passwd = 'S3cret#';
  $host = 'localhost';
  $port = '33060';
  $connection_uri = 'mysqlx://'.$user.':'.$passwd.'@'.$host.':'.$port;
  echo $connection_uri . "\n";
// Connect as a Node Session
  $nodeSession = mysql_xdevapi\getNodeSession($connection_uri);
// "USE world_x" schema
  $schema = $nodeSession->getSchema("world_x");
// Specify collection to use
  $collection = $schema->getCollection("countryinfo");
// SELECT * FROM world_x WHERE _id = "USA"
  $result = $collection->find('_id = "USA"')->execute();
// Fetch/Display data
  $data = $result->fetchAll();
  var_dump($data);
?>
```
Note that the `find` operator used in both examples looks pretty much the same between the two different languages. This consistency should help developers who hop between programming languages or those looking to reduce the learning curve with a new language.
Other supported languages include C, Java, Python, and JavaScript, and more are planned.
### Best of both worlds
Did I mention that the data entered in this NoSQL fashion is also available from the SQL side of MySQL? Or that the new NoSQL method can access relational data in old-fashioned relational tables? You now have the option to use your MySQL server as a SQL server, a NoSQL server, or both.
Dave Stokes will present "MySQL Without the SQL—Oh My!" at [Southeast LinuxFest][4], June 8-10, in Charlotte, N.C.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/mysql-document-store
作者:[Dave Stokes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/davidmstokes
[1]:https://www.mysql.com/products/enterprise/document_store.html
[2]:https://dev.mysql.com/doc/x-devapi-userguide/en/
[3]:https://dev.mysql.com/downloads/shell/
[4]:http://www.southeastlinuxfest.org/


@ -1,124 +0,0 @@
translating----geekpi
A friendly alternative to the find tool in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
[fd][1] is a super fast, [Rust][2]-based alternative to the Unix/Linux `find` command. It does not mirror all of `find`'s powerful functionality; however, it does provide just enough features to cover 80% of the use cases you might run into. Features like a well thought-out and convenient syntax, colorized output, smart case, regular expressions, and parallel command execution make `fd` a more than capable successor.
### Installation
Head over to the [fd][1] GitHub page and check out the section on installation. It covers how to install the application on [macOS][3], [Debian/Ubuntu][4], [Red Hat][5], and [Arch Linux][6]. Once installed, you can get a complete overview of all available command-line options by running `fd -h` for concise help, or `fd --help` for more detailed help.
### Simple search
`fd` is designed to help you easily find files and folders in your operating system's filesystem. The simplest search you can perform is to run `fd` with a single argument, that argument being whatever it is that you're searching for. For example, let's assume that you want to find a Markdown document that has the word `services` as part of the filename:
```
$ fd services
downloads/services.md
```
If called with just a single argument, `fd` searches the current directory recursively for any files and/or directories that match your argument. The equivalent search using the built-in `find` command looks something like this:
```
$ find . -name '*services*'
downloads/services.md
```
As you can see, `fd` is much simpler and requires less typing. Getting more done with less typing is always a win in my book.
### Files and folders
You can restrict your search to files or directories by using the `-t` argument, followed by the letter that represents what you want to search for. For example, to find all files in the current directory that have `services` in the filename, you would use:
```
$ fd -tf services
downloads/services.md
```
And to find all directories in the current directory that have `services` in the filename:
```
$ fd -td services
applications/services
library/services
```
How about listing all documents with the `.md` extension in the current folder?
```
$ fd .md
administration/administration.md
development/elixir/elixir_install.md
readme.md
sidebar.md
linux.md
```
As you can see from the output, `fd` not only found and listed files from the current folder, but it also found files in subfolders. Pretty neat. You can even search for hidden files using the `-H` argument:
```
$ fd -H sessions .
.bash_sessions
```
### Specifying a directory
If you want to search a specific directory, the name of the directory can be given as a second argument to `fd`:
```
$ fd passwd /etc
/etc/default/passwd
/etc/pam.d/passwd
/etc/passwd
```
In this example, we're telling `fd` that we want to search for all instances of the word `passwd` in the `etc` directory.
### Global searches
What if you know part of the filename but not the folder? Let's say you downloaded a book on Linux network administration but you have no idea where it was saved. No problem:
```
$ fd Administration /
/Users/pmullins/Documents/Books/Linux/Mastering Linux Network Administration.epub
```
### Wrapping up
The `fd` utility is an excellent replacement for the `find` command, and I'm sure you'll find it just as useful as I do. To learn more about the command, simply explore the rather extensive man page.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/friendly-alternative-find
作者:[Patrick H. Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/pmullins
[1]:https://github.com/sharkdp/fd
[2]:https://www.rust-lang.org/en-US/
[3]:https://en.wikipedia.org/wiki/MacOS
[4]:https://www.ubuntu.com/community/debian
[5]:https://www.redhat.com/en
[6]:https://www.archlinux.org/


@ -1,3 +1,5 @@
translating---geekpi
BLUI: An easy way to create game UI
======


@ -1,3 +1,5 @@
translating----geekpi
How to Mount and Use an exFAT Drive on Ubuntu Linux
======
**Brief: This quick tutorial shows you how to enable exFAT file system support on Ubuntu and other Ubuntu-based Linux distributions. This way you wont see any error while mounting exFAT drives on your system.**


@ -0,0 +1,120 @@
Write fast apps with Pronghorn, a Java framework
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Collaboration%20for%20health%20innovation.png?itok=s4O5EX2w)
In 1973, [Carl Hewitt][1] had an idea inspired by quantum mechanics. He wanted to develop computing machines that were capable of parallel execution of tasks, communicating with each other seamlessly while containing their own local memory and processors.
Thus was born the [actor model][2], and with it a very simple concept: Everything is an actor. This allows for some great benefits: Separating business logic from other logic is made vastly easier. Security is easily gained because each core component of your application is separate and independent. Prototyping is accelerated due to the nature of actors and their interconnectivity.
### What is Pronghorn?
However, what ties it all together is the ability to pass messages between these actors concurrently. An actor responds based on an input message; it can then send back an acknowledgment, deliver content, and designate behaviors to be used for the next time a message gets received. For example, one actor is loading image files from disk while simultaneously streaming chunks to other actors for further processing; i.e., image analysis or conversion. Another actor then takes these as inputs and writes them back to disk or logs them to the terminal. Independently, these actors alone cant accomplish much—but together, they form an application.
Today there are many implementations of this actor model. At [Object Computing][3], weve been working on a highly scalable, performant, and completely open source Java framework called [Pronghorn][4], named after one of the worlds fastest land animals.
Pronghorn, which recently reached version 1.0, attempts to address a few of the shortcomings of [Akka][5] and [RxJava][6], two popular actor frameworks for Java and Scala.
As a result, we developed Pronghorn with a comprehensive list of features in mind:
1. We wanted to produce as little garbage as possible. Without the garbage collector kicking in regularly, Pronghorn can reach performance levels never seen before.
2. We wanted to make sure that Pronghorn has a minimal memory footprint and is mechanically sympathetic. Built from the ground up with performance in mind, it leverages CPU prefetch functions and caches for the fastest possible throughput. Using zero-copy direct access, it loads fields from schemas in nanoseconds and never stalls cores, while also being non-blocking and lock-free.
3. Pronghorn ensures that you write correct code securely. Through its APIs and contracts, and by using "[software fortresses][7]" and industry-leading encryption, Pronghorn lets you build applications that are secure and that fail safely.
4. Debugging and testing can be stressful and annoying, especially when you need to hit a deadline. Pronghorn easily integrates with common testing frameworks and simplifies refactoring and debugging through its automatically generated, live-updating telemetry graph; fuzz testing (in progress) based on existing message schemas; and warnings when certain actors are misbehaving or consuming too many resources. This helps you rapidly prototype and spend more time focusing on your business needs.
For more details, visit the [Pronghorn Features list][8].
### Why Pronghorn?
Writing concurrent and performant applications has never been easy, and we don't promise to solve the problems entirely. However, to give you an idea of the benefits of Pronghorn and the power of its API, we wrote a small HTTP REST server and benchmarked it against common industry standards such as [Node & Express][9] and [Tomcat][10] & [Spring Boot][11]:
![](https://opensource.com/sites/default/files/uploads/requests_per_second.png)
We encourage you to [run these numbers yourself][12], share your results, and add your own web server.
As you can see, Pronghorn does exceptionally well in this REST example. While being almost 10x faster than conventional solutions, Pronghorn could help cut server costs (such as EC2 or Azure) in half or more through its garbage-free, statically typed backend. HTTP requests can be parsed and responses generated while actors are working concurrently. The scheduling and threading are automatically handled by Pronghorn's powerful default scheduler.
As mentioned above, Pronghorn allows you to rapidly prototype and envision your project, generally by following three basic steps:
1. **Define your data flow graph**
This is a crucial first step. Pronghorn takes a data-first approach, processing large volumes of data rapidly. In your application, think about the type of data that should flow through the "pipes". For example, if you're building an image analysis tool, you will need actors to read, write, and analyze image files. The format of the data between actors also needs to be established; it could be schemas containing JPG MCUs or raw binary BMP files. Pick the format that works best for your application.
2. **Define the contracts between each stage**
Contracts allow you to easily define your messages using [FAST][13], a proven protocol used by the finance industry for stock trading. These contracts are used in the testing phase to ensure implementation aligns with your message field definitions. This is a contractual approach; it must be respected for actors to communicate with each other.
3. **Test-first development by using generative testing as the graph is implemented**
Schemas are code-generated for you as you develop your application. Test-driven development allows for correct and safe code, saving valuable time as you head toward release. As your program grows, the graph grows as well, describing every single interaction between actors and illustrating your message data flow on pipes between stages. Through its automatically generated telemetry, you can easily keep track of even the most complex applications, as shown below:
![](https://opensource.com/sites/default/files/uploads/tracking_apps.png)
### What does it look like?
You may be curious about what Pronghorn code looks like. Below is some sample code for generating the message schemas in our "[Hello World][14]" example.
To define a message, create a new XML file similar to this:
```
<?xml version="1.0" encoding="UTF-8"?>
<templates xmlns="http://www.fixprotocol.org/ns/fast/td/1.1">
    <template name="HelloWorldMessage" id="1">
        <string name="GreetingName" id="100" charset="unicode"/>
    </template>
</templates>
```
This schema will then be used by the stages described in the Hello World example. Populating a graph in your application using this schema is even easier:
```
private static void populateGraph(GraphManager gm) {
       Pipe<HelloWorldSchema> messagePipe =
HelloWorldSchema.instance.newPipe(10, 10_000);
       new GreeterStage(gm, "Jon Snow", messagePipe);
       new GuestStage(gm, messagePipe);
  }
```
This uses the stages created in the [Hello World tutorial][14].
We use a [Maven][15] archetype to provide you with everything you need to start building Pronghorn applications.
### Start using Pronghorn
We hope this article has offered a taste of how Pronghorn can help you write performant, efficient, and secure applications in Java as an alternative to Akka and RxJava. We'd love your feedback on how to make this an ideal platform for developers, managers, CFOs, and others.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/writing-applications-java-pronghorn
作者:[Tobi Schweiger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tobischw
[1]:https://en.wikipedia.org/wiki/Carl_Hewitt
[2]:https://en.wikipedia.org/wiki/Actor_model
[3]:https://objectcomputing.com/
[4]:https://oci-pronghorn.gitbook.io/pronghorn/chapter-0-what-is-pronghorn/home
[5]:https://akka.io/
[6]:https://github.com/ReactiveX/RxJava
[7]:https://www.amazon.com/Software-Fortresses-Modeling-Enterprise-Architectures/dp/0321166086
[8]:https://oci-pronghorn.gitbook.io/pronghorn/chapter-0-what-is-pronghorn/features
[9]:https://expressjs.com/
[10]:http://tomcat.apache.org/
[11]:https://spring.io/projects/spring-boot
[12]:https://github.com/oci-pronghorn/GreenLoader
[13]:https://en.wikipedia.org/wiki/FAST_protocol
[14]:https://oci-pronghorn.gitbook.io/pronghorn/chapter-1-getting-started-with-pronghorn/1.-hello-world-introduction/0.-getting-started
[15]:https://maven.apache.org/


@ -0,0 +1,89 @@
Getting started with Open edX to host your course
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesgen_rh_032x_0.png?itok=cApG9aB4)
Now in its [seventh major release][1], the [Open edX platform][2] is a free and open source course management system that is used [all over the world][3] to host Massive Open Online Courses (MOOCs) as well as smaller classes and training modules. To date, Open edX software has powered more than 8,000 original courses and 50 million course enrollments. You can install the platform yourself with on-premise equipment or by leveraging any of the industry-leading cloud infrastructure services providers, but it is also increasingly being made available in a Software-as-a-Service (SaaS) model from several of the project's growing list of [service providers][4].
The Open edX platform is used by many of the world's premier educational institutions as well as private sector companies, public sector institutions, NGOs, non-profits, and educational technology startups, and the project's global community of service providers continues to make the platform accessible to ever-smaller organizations. If you plan to create and offer educational content to a broad audience, you should consider using the Open edX platform.
### Installation
There are multiple ways to install the software, which might be an unwelcome surprise, at least initially. But you get the same application software with the same feature set regardless of how you go about [installing Open edX][5]. The default installation includes a fully functioning learning management system (LMS) for online learners plus a full-featured course management studio (CMS) that your instructor teams can use to author original course content. You can think of the CMS as a “[Wordpress][6]” of course content creation and management, and the LMS as a “[Magento][7]” of course marketing, distribution, and consumption.
Open edX application software is device-agnostic and fully responsive, and with modest effort, you can also publish native iOS and Android apps that seamlessly integrate with your instance's backend. The code repositories for the Open edX platform, the native mobile apps, and the installation scripts are all publicly available on [GitHub][8].
#### What to expect
The Open edX platform [GitHub repository][9] contains performant, production-ready code that is suitable for organizations of all sizes. Thousands of programmers from hundreds of institutions regularly contribute to the edX repositories, and the platform is a veritable case study on how to build and manage a complex enterprise application the right way. So even though you're certain to face a multitude of concerns about how to move the platform into production, you should not lose sleep over the general quality and robustness of the Open edX platform codebase itself.
With minimal training, your instructors will be able to create good online course content. But bear in mind that Open edX is extensible via its [XBlock][10] component architecture, so your instructors will have the potential to turn good course content into great course content with incremental effort on their parts and yours.
The platform works well in a single-server environment, and it is highly modular, making it nearly infinitely horizontally scalable. It is theme-able, localizable, and completely open source, providing limitless possibilities to tailor the appearance and functionality of the platform to your needs. The platform runs reliably as an on-premise installation on your own equipment.
#### Some assembly required
Bear in mind that a handful of the edX software modules are not included in the default installation and that these modules are often on organizations' requirements lists. Namely, the Analytics module, the e-commerce module, and the Notes/Annotations course feature are not part of the default platform installation, and each of these individually is a non-trivial installation. Additionally, you're entirely on your own with regard to data backup/restore and system administration in general. Fortunately, there's a growing body of community-sourced documentation and how-to articles, all searchable via Google and Bing, to help make your installation production-ready.
Setting up [OAuth][11] and [SSL/TLS][12], as well as getting the platform's [REST API][13] up and running, can be challenging depending on your skill level, even though these are all well-documented procedures. Additionally, some organizations require that MySQL and/or MongoDB databases be managed in an existing centralized environment, and if this is your situation, you'll also need to work through the process of hiving these services out of the default platform installation. The edX design team has done everything possible to simplify this for you, but it's still a non-trivial change that will likely take some time to implement.
Don't be discouraged: if you're facing resource and/or technical headwinds, Open edX community SaaS providers like [appsembler][14] and [eduNEXT][15] offer compelling alternatives to a do-it-yourself installation, particularly if you're just window shopping.
### Technology stack
Poking around in an Open edX platform installation is a real thrill, and architecturally speaking, the project is a masterpiece. The application modules are [Django][16] apps that leverage a plethora of the open source community's premier projects, including [Ubuntu][17], [MySQL][18], [MongoDB][19], [RabbitMQ][20], [Elasticsearch][21], [Hadoop][22], and others.
![edx-architecture.png][24]
The Open edX technology stack (CC BY, by edX)
Getting all of these components installed and configured is a feat in and of itself, but packaging everything in such a way that organizations of arbitrary size and complexity can tailor installations to their needs, without having to perform heart surgery on the codebase, would seem impossible. That is, until you see how neatly and intuitively the major platform configuration parameters have been organized and named. Mind you, there's a learning curve to the platform's organizational structure, but the upshot is that everything you learn is worth knowing, not just for this project but for large IT projects in general.
One word of caution: The platform's UI is in flux, with the aim of eventually standardizing on [React][25] and [Bootstrap][26]. Meanwhile, you'll find multiple approaches to implementing styling for the base theme, and this can get confusing.
### Adoption
The edX project has enjoyed rapid international adoption, due in no small measure to how well the software works. Not surprisingly, the project's success has attracted a large and growing list of talented participants who contribute to the project as programmers, project consultants, translators, technical writers, and bloggers. The annual [Open edX Conference][27], the [Official edX Google Group][28], and the [Open edX Service Providers List][4] are good starting points for learning more about this diverse and growing ecosystem. As a relative newcomer myself, I've found it comparatively easy to engage and get directly involved with the project in multiple facets.
Good luck with your journey, and feel free to reach out to me as a sounding board while you're conceptualizing your project.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/getting-started-open-edx
作者:[Lawrence Mc Daniel][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mcdaniel0073
[1]:https://openedx.atlassian.net/wiki/spaces/DOC/pages/11108700/Open+edX+Releases
[2]:https://open.edx.org/about-open-edx
[3]:https://www.edx.org/schools-partners
[4]:https://openedx.atlassian.net/wiki/spaces/COMM/pages/65667081/Open+edX+Service+Providers
[5]:https://openedx.atlassian.net/wiki/spaces/OpenOPS/pages/60227779/Open+edX+Installation+Options
[6]:https://wordpress.com/
[7]:https://magento.com/
[8]:https://github.com/edx
[9]:https://github.com/edx/edx-platform
[10]:https://open.edx.org/xblocks
[11]:https://oauth.net/
[12]:https://en.wikipedia.org/wiki/Transport_Layer_Security
[13]:https://en.wikipedia.org/wiki/Representational_state_transfer
[14]:https://www.appsembler.com/
[15]:https://www.edunext.co/
[16]:https://www.djangoproject.com/
[17]:https://www.ubuntu.com/
[18]:https://www.mysql.com/
[19]:https://www.mongodb.com/
[20]:https://www.rabbitmq.com/
[21]:https://www.elastic.co/
[22]:http://hadoop.apache.org/
[23]:/file/400696
[24]:https://opensource.com/sites/default/files/uploads/edx-architecture_0.png (edx-architecture.png)
[25]:https://reactjs.org/
[26]:https://getbootstrap.com/
[27]:https://con.openedx.org/
[28]:https://groups.google.com/forum/#!forum/openedx-ops


@ -0,0 +1,159 @@
How To Check Which Groups A User Belongs To On Linux
======
Adding a user to an existing group is a routine activity for a Linux admin, and a daily one for administrators working in large environments.
I perform this activity daily in my environment due to business requirements, so these are important commands that help you identify the existing groups in your environment.
These commands also help you identify which groups a user belongs to. All users are listed in `/etc/passwd` and all groups in `/etc/group`.
Whichever command we use fetches its information from these files. Each command also has unique features that help you get just the information you need.
### What Is /etc/passwd?
`/etc/passwd` is a text file that contains the information about each user that is necessary to log in to a Linux system. It maintains useful user information such as username, password, user ID, group ID, user ID info, home directory, and shell. The passwd file holds each user's details as a single line with the seven fields described above.
```
$ grep "daygeek" /etc/passwd
daygeek:x:1000:1000:daygeek,,,:/home/daygeek:/bin/bash
```
### What Is /etc/group?
`/etc/group` is a text file that defines which groups a user belongs to. We can add multiple users to a single group. It allows users to access other users' files and folders, since Linux permissions are organized into three classes: user, group, and others. It maintains useful group information such as group name, group password, group ID (GID), and member list, with each group on a separate line. The group file holds each group's details as a single line with the four fields described above.
This can be done using any of the following methods:
* `groups`: Prints the primary and supplementary groups for a user.
* `id`: Prints user and group information for the specified username.
* `lid`: Displays a user's groups or a group's users.
* `getent`: Gets entries from Name Service Switch libraries.
* `grep`: Stands for "global regular expression print", and prints lines matching a pattern.
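Before looking at each command in turn, here is a small sketch (not from the original article; the defaults are illustrative) that derives the same answer straight from the two files, using only `awk`:

```shell
#!/bin/sh
# Sketch: list a user's primary and supplementary groups by reading
# /etc/passwd and /etc/group directly. Defaults to "root" so it runs anywhere.
user="${1:-root}"

# Primary group: look up the user's GID in /etc/passwd, then map it to a name.
gid=$(awk -F: -v u="$user" '$1 == u {print $4}' /etc/passwd)
awk -F: -v g="$gid" '$3 == g {print $1 " (primary)"}' /etc/group

# Supplementary groups: any group whose member list (field 4) names the user.
awk -F: -v u="$user" '$4 ~ ("(^|,)" u "(,|$)") {print $1}' /etc/group
```

Running it with no argument prints root's groups; pass any username from `/etc/passwd` to inspect another account.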
### What Is groups Command?
The `groups` command prints the names of the primary group and any supplementary groups for each given username.
```
$ groups daygeek
daygeek : daygeek adm cdrom sudo dip plugdev lpadmin sambashare
```
If you would like to check the list of groups associated with the current user, just run the `groups` command alone, without any username.
```
$ groups
daygeek adm cdrom sudo dip plugdev lpadmin sambashare
```
### What Is id Command?
`id` stands for identity. It prints real and effective user and group IDs, either for the specified user or for the current user if no username is given.
```
$ id daygeek
uid=1000(daygeek) gid=1000(daygeek) groups=1000(daygeek),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),118(lpadmin),128(sambashare)
```
If you would like to check the list of groups associated with the current user, just run the `id` command alone, without any username.
```
$ id
uid=1000(daygeek) gid=1000(daygeek) groups=1000(daygeek),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),118(lpadmin),128(sambashare)
```
### What Is lid Command?
It displays a user's groups or a group's users: information about the groups containing a given user name, or the users contained in a given group name. This command requires root privileges to run.
```
$ sudo lid daygeek
adm(gid=4)
cdrom(gid=24)
sudo(gid=27)
dip(gid=30)
plugdev(gid=46)
lpadmin(gid=108)
daygeek(gid=1000)
sambashare(gid=124)
```
### What Is getent Command?
The getent command displays entries from databases supported by the Name Service Switch libraries, which are configured in /etc/nsswitch.conf.
```
$ getent group | grep daygeek
adm:x:4:syslog,daygeek
cdrom:x:24:daygeek
sudo:x:27:daygeek
dip:x:30:daygeek
plugdev:x:46:daygeek
lpadmin:x:118:daygeek
daygeek:x:1000:
sambashare:x:128:daygeek
```
If you would like to print only the associated group names, pipe the output of the above command into `awk`.
```
$ getent group | grep daygeek | awk -F: '{print $1}'
adm
cdrom
sudo
dip
plugdev
lpadmin
daygeek
sambashare
```
Run the command below to print only the user's primary group information.
```
$ getent group daygeek
daygeek:x:1000:
```
### What Is grep Command?
`grep` stands for "global regular expression print", and prints lines matching a pattern from a file.
```
$ grep "daygeek" /etc/group
adm:x:4:syslog,daygeek
cdrom:x:24:daygeek
sudo:x:27:daygeek
dip:x:30:daygeek
plugdev:x:46:daygeek
lpadmin:x:118:daygeek
daygeek:x:1000:
sambashare:x:128:daygeek
```
If you would like to print only the associated group names, pipe the output of the above command into `awk`.
```
$ grep "daygeek" /etc/group | awk -F: '{print $1}'
adm
cdrom
sudo
dip
plugdev
lpadmin
daygeek
sambashare
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-check-which-groups-a-user-belongs-to-on-linux/
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/


@ -0,0 +1,104 @@
How to disable iptables firewall temporarily
======
Learn how to temporarily disable the iptables firewall in Linux for troubleshooting purposes. Also learn how to save policies and how to restore them when you enable the firewall again.
![How to disable iptables firewall temporarily][1]
Sometimes you need to turn off the iptables firewall to do some connectivity troubleshooting, and then turn it back on. While doing so, you also want to save all your [firewall policies][2]. In this article, we will walk you through how to save firewall policies and how to disable/enable the iptables firewall. For more details about the iptables firewall and its policies, [read our article][3] on the subject.
### Save iptables policies
The first step in temporarily disabling the iptables firewall is to save the existing firewall rules/policies. The `iptables-save` command lists all your existing policies, which you can save to a file on your server.
```
root@kerneltalks # iptables-save
# Generated by iptables-save v1.4.21 on Tue Jun 19 09:54:36 2018
*nat
:PREROUTING ACCEPT [1:52]
:INPUT ACCEPT [1:52]
:OUTPUT ACCEPT [15:1140]
:POSTROUTING ACCEPT [15:1140]
:DOCKER - [0:0]
---- output truncated ----
root@kerneltalks # iptables-save > /root/firewall_rules.backup
```
So `iptables-save` is the command with which you can back up your iptables policies.
### Stop/disable iptables firewall
On older Linux systems you have the option of stopping the iptables service with `service iptables stop`, but on newer systems you just need to wipe out all the policies and allow all traffic through the firewall. This is effectively the same as stopping the firewall.
Use the commands below to do that.
```
root@kerneltalks # iptables -F
root@kerneltalks # iptables -X
root@kerneltalks # iptables -P INPUT ACCEPT
root@kerneltalks # iptables -P OUTPUT ACCEPT
root@kerneltalks # iptables -P FORWARD ACCEPT
```
Where:
* `-F`: Flush all policy chains
* `-X`: Delete user-defined chains
* `-P INPUT/OUTPUT/FORWARD`: Set the default policy of the specified chain to ACCEPT
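The save-then-flush sequence can be wrapped in a small script. The sketch below is an assumption, not part of the original article (the helper name `disable_firewall` is illustrative); it supports a dry-run mode so you can preview the commands without root privileges:

```shell
#!/bin/sh
# Sketch: save the current iptables rules, then open the firewall completely.
# Set DRYRUN=1 to print the commands instead of executing them (no root needed).
disable_firewall() {
    backup="${1:-/root/firewall_rules.backup}"

    run() {
        # Print the command in dry-run mode; execute it (needs root) otherwise.
        if [ "${DRYRUN:-0}" = "1" ]; then printf '%s\n' "$*"; else "$@"; fi
    }

    # Keep a restorable copy first; bring it back later with:
    #   iptables-restore < "$backup"
    if [ "${DRYRUN:-0}" = "1" ]; then
        printf 'iptables-save > %s\n' "$backup"
    else
        iptables-save > "$backup"
    fi

    run iptables -F                # flush all policy chains
    run iptables -X                # delete user-defined chains
    run iptables -P INPUT ACCEPT   # default-accept all remaining traffic
    run iptables -P OUTPUT ACCEPT
    run iptables -P FORWARD ACCEPT
}

# Preview the exact commands without touching the firewall:
DRYRUN=1 disable_firewall /tmp/fw.backup
```

Run it without `DRYRUN=1` (as root) to actually save the rules and open the firewall.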
Once done, check the current firewall policies. They should look like the output below, which means everything is accepted (effectively, your firewall is disabled/stopped).
```
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
```
### Restore firewall policies
Once you are done with troubleshooting and want to turn iptables back on with all its configuration, you first need to restore the policies from the backup we took in the first step.
```
root@kerneltalks # iptables-restore </root/firewall_rules.backup
```
### Start iptables firewall
Then start the iptables service with `service iptables start`, in case you stopped it in a previous step. If you haven't stopped the service, restoring the policies alone will do. Check whether all policies are back in the iptables firewall configuration:
```
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
-----output truncated-----
```
That's it! You have successfully disabled and re-enabled the firewall without losing your policy rules.
--------------------------------------------------------------------------------
via: https://kerneltalks.com/howto/how-to-disable-iptables-firewall-temporarily/
作者:[kerneltalks][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://kerneltalks.com
[1]:https://a2.kerneltalks.com/wp-content/uploads/2018/06/How-to-disable-iptables-firewall-temporarily.png
[2]:https://kerneltalks.com/networking/configuration-of-iptables-policies/
[3]:https://kerneltalks.com/networking/basics-of-iptables-linux-firewall/


@ -0,0 +1,253 @@
How to reset, revert, and return to previous states in Git
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)
One of the lesser understood (and appreciated) aspects of working with Git is how easy it is to get back to where you were before—that is, how easy it is to undo even major changes in a repository. In this article, we'll take a quick look at how to reset, revert, and completely return to previous states, all with the simplicity and elegance of individual Git commands.
### Reset
Let's start with the Git command `reset`. Practically, you can think of it as a "rollback"—it points your local environment back to a previous commit. By "local environment," we mean your local repository, staging area, and working directory.
Take a look at Figure 1. Here we have a representation of a series of commits in Git. A branch in Git is simply a named, movable pointer to a specific commit. In this case, our branch master is a pointer to the latest commit in the chain.
![Local Git environment with repository, staging area, and working directory][2]
Fig. 1: Local Git environment with repository, staging area, and working directory
If we look at what's in our master branch now, we can see the chain of commits made so far.
```
$ git log --oneline
b764644 File with three lines
7c709f0 File with two lines
9ef9173 File with one line
```
What happens if we want to roll back to a previous commit? Simple: we can just move the branch pointer. Git supplies the `reset` command to do this for us. For example, if we want to reset master to point to the commit two back from the current commit, we could use either of the following methods:
`$ git reset 9ef9173` (using an absolute commit SHA1 value 9ef9173)
or
`$ git reset current~2` (using a relative value -2 before the "current" tag)
Figure 2 shows the results of this operation. After this, if we execute a `git log` command on the current branch (master), we'll see just the one commit.
```
$ git log --oneline
9ef9173 File with one line
```
![After reset][4]
Fig. 2: After `reset`
The `git reset` command also includes options to update the other parts of your local environment with the contents of the commit where you end up. These options include: `hard` to reset the commit being pointed to in the repository, populate the working directory with the contents of the commit, and reset the staging area; `soft` to only reset the pointer in the repository; and `mixed` (the default) to reset the pointer and the staging area.
Using these options can be useful in targeted circumstances, such as `git reset --hard <commit sha1 | reference>`. This overwrites any local changes you haven't committed. In effect, it resets (clears out) the staging area and overwrites content in the working directory with the content from the commit you reset to. Before you use the `hard` option, be sure that's what you really want to do, since the command overwrites any uncommitted changes.
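A throwaway repository makes the effect of a hard reset easy to see. The sketch below (file names and commit messages are illustrative, not from the article) rebuilds a three-commit history and then rolls back two commits:

```shell
# Sketch: reproduce a three-commit history in a scratch repo, then hard-reset.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.email demo@example.com   # local identity so commits succeed
git config user.name demo

for i in one two three; do
    echo "$i" >> file1.txt
    git add file1.txt
    git commit -qm "File with $i line(s)"
done

git log --oneline            # three commits, newest first
git reset --hard HEAD~2      # pointer, staging area, AND working directory move
git log --oneline            # only the first commit remains
cat file1.txt                # the file is back to a single line
```

Swap `--hard` for `--soft` or `--mixed` in the same sandbox to compare how much of the local environment each mode touches.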
### Revert
The net effect of the `git revert` command is similar to reset, but its approach is different. Where the `reset` command moves the branch pointer back in the chain (typically) to "undo" changes, the `revert` command adds a new commit at the end of the chain to "cancel" changes. The effect is most easily seen by looking at Figure 1 again. If we add a line to a file in each commit in the chain, one way to get back to the version with only two lines is to reset to that commit, i.e., `git reset HEAD~1`.
Another way to end up with the two-line version is to add a new commit that has the third line removed—effectively canceling out that change. This can be done with a `git revert` command, such as:
```
$ git revert HEAD
```
Because this adds a new commit, Git will prompt for the commit message:
```
Revert "File with three lines"
This reverts commit b764644bad524b804577684bf74e7bca3117f554.
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
# Changes to be committed:
#       modified:   file1.txt
#
```
Figure 3 (below) shows the result after the `revert` operation is completed.
If we do a `git log` now, we'll see a new commit that reflects the contents before the previous commit.
```
$ git log --oneline
11b7712 Revert "File with three lines"
b764644 File with three lines
7c709f0 File with two lines
9ef9173 File with one line
```
Here are the current contents of the file in the working directory:
```
$ cat <filename>
Line 1
Line 2
```
#### Revert or reset?
Why would you choose to do a `revert` over a `reset` operation? If you have already pushed your chain of commits to the remote repository (where others may have pulled your code and started working with it), a revert is a nicer way to cancel out changes for them. This is because the Git workflow works well for picking up additional commits at the end of a branch, but it can be challenging if a set of commits is no longer seen in the chain when someone resets the branch pointer back.
This brings us to one of the fundamental rules when working with Git in this manner: Making these kinds of changes in your local repository to code you haven't pushed yet is fine. But avoid making changes that rewrite history if the commits have already been pushed to the remote repository and others may be working with them.
In short, if you roll back, undo, or rewrite the history of a commit chain that others are working with, your colleagues may have a lot more work when they try to merge in changes based on the original chain they pulled. If you must make changes against code that has already been pushed and is being used by others, consider communicating before you make the changes and give people the chance to merge their changes first. Then they can pull a fresh copy after the disruptive operation without needing to merge.
You may have noticed that the original chain of commits was still there after we did the reset. We moved the pointer and reset the code back to a previous commit, but it did not delete any commits. This means that, as long as we know the original commit we were pointing to, we can "restore" back to the previous point by simply resetting back to the original head of the branch:
```
git reset <sha1 of commit>
```
A similar thing happens in most other operations we do in Git when commits are replaced. New commits are created, and the appropriate pointer is moved to the new chain. But the old chain of commits still exists.
### Rebase
Now let's look at a branch rebase. Consider that we have two branches—master and feature—with the chain of commits shown in Figure 4 below. Master has the chain `C4->C2->C1->C0` and feature has the chain `C5->C3->C2->C1->C0`.
![Chain of commits for branches master and feature][6]
Fig. 4: Chain of commits for branches master and feature
If we look at the log of commits in the branches, they might look like the following. (The `C` designators for the commit messages are used to make this easier to understand.)
```
$ git log --oneline master
6a92e7a C4
259bf36 C2
f33ae68 C1
5043e79 C0
$ git log --oneline feature
79768b8 C5
000f9ae C3
259bf36 C2
f33ae68 C1
5043e79 C0
```
I tell people to think of a rebase as a "merge with history" in Git. Essentially what Git does is take each different commit in one branch and attempt to "replay" the differences onto the other branch.
So, we can rebase a feature onto master to pick up `C4` (e.g., insert it into feature's chain). Using the basic Git commands, it might look like this:
```
$ git checkout feature
$ git rebase master
First, rewinding head to replay your work on top of it...
Applying: C3
Applying: C5
```
Afterward, our chain of commits would look like Figure 5.
![Chain of commits after the rebase command][8]
Fig. 5: Chain of commits after the `rebase` command
Again, looking at the log of commits, we can see the changes.
```
$ git log --oneline master
6a92e7a C4
259bf36 C2
f33ae68 C1
5043e79 C0
$ git log --oneline feature
c4533a5 C5
64f2047 C3
6a92e7a C4
259bf36 C2
f33ae68 C1
5043e79 C0
```
Notice that we have `C3'` and `C5'`—new commits created as a result of making the changes from the originals "on top of" the existing chain in master. But also notice that the "original" `C3` and `C5` are still there—they just don't have a branch pointing to them anymore.
If we did this rebase, then decided we didn't like the results and wanted to undo it, it would be as simple as:
```
$ git reset 79768b8
```
With this simple change, our branch would now point back to the same set of commits as before the `rebase` operation—effectively undoing it (Figure 6).
![After undoing rebase][10]
Fig. 6: After undoing the `rebase` operation
What happens if you can't recall what commit a branch pointed to before an operation? Fortunately, Git again helps us out. For most operations that modify pointers in this way, Git remembers the original commit for you. In fact, it stores it in a special reference named `ORIG_HEAD` within the `.git` repository directory. That path is a file containing the most recent reference before it was modified. If we `cat` the file, we can see its contents.
```
$ cat .git/ORIG_HEAD
79768b891f47ce06f13456a7e222536ee47ad2fe
```
We could use the `reset` command, as before, to point back to the original chain. Then the log would show this:
```
$ git log --oneline feature
79768b8 C5
000f9ae C3
259bf36 C2
f33ae68 C1
5043e79 C0
```
Another place to get this information is in the reflog. The reflog is a play-by-play listing of switches or changes to references in your local repository. To see it, you can use the `git reflog` command:
```
$ git reflog
79768b8 HEAD@{0}: reset: moving to 79768b
c4533a5 HEAD@{1}: rebase finished: returning to refs/heads/feature
c4533a5 HEAD@{2}: rebase: C5
64f2047 HEAD@{3}: rebase: C3
6a92e7a HEAD@{4}: rebase: checkout master
79768b8 HEAD@{5}: checkout: moving from feature to feature
79768b8 HEAD@{6}: commit: C5
000f9ae HEAD@{7}: checkout: moving from master to feature
6a92e7a HEAD@{8}: commit: C4
259bf36 HEAD@{9}: checkout: moving from feature to master
000f9ae HEAD@{10}: commit: C3
259bf36 HEAD@{11}: checkout: moving from master to feature
259bf36 HEAD@{12}: commit: C2
f33ae68 HEAD@{13}: commit: C1
5043e79 HEAD@{14}: commit (initial): C0
```
You can then reset to any of the items in that list using the special relative naming format you see in the log:
```
$ git reset HEAD@{1}
```
Once you understand that Git keeps the original chain of commits around when operations "modify" the chain, making changes in Git becomes much less scary. This is one of Git's core strengths: being able to quickly and easily try things out and undo them if they don't work.
Brent Laster will present [Power Git: Rerere, Bisect, Subtrees, Filter Branch, Worktrees, Submodules, and More][11] at the 20th annual [OSCON][12] event, July 16-19 in Portland, Ore. For more tips and explanations about using Git at any level, check out Brent's book "[Professional Git][13]," available on Amazon.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/git-reset-revert-rebase-commands
作者:[Brent Laster][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bclaster
[1]:/file/401126
[2]:https://opensource.com/sites/default/files/uploads/gitcommands1_local-environment.png (Local Git environment with repository, staging area, and working directory)
[3]:/file/401131
[4]:https://opensource.com/sites/default/files/uploads/gitcommands2_reset.png (After reset)
[5]:/file/401141
[6]:https://opensource.com/sites/default/files/uploads/gitcommands4_commits-branches.png (Chain of commits for branches master and feature)
[7]:/file/401146
[8]:https://opensource.com/sites/default/files/uploads/gitcommands5_commits-rebase.png (Chain of commits after the rebase command)
[9]:/file/401151
[10]:https://opensource.com/sites/default/files/uploads/gitcommands6_rebase-undo.png (After undoing rebase)
[11]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/67142
[12]:https://conferences.oreilly.com/oscon/oscon-or
[13]:https://www.amazon.com/Professional-Git-Brent-Laster/dp/111928497X/ref=la_B01MTGIINQ_1_2?s=books&ie=UTF8&qid=1528826673&sr=1-2

View File

@ -0,0 +1,95 @@
Use this vi setup to keep and organize your notes
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
The idea of using vi to manage a wiki for your notes may seem unconventional, but when you're using vi in your daily work, it makes a lot of sense.
As a software developer, it's just easier to write my notes in the same tool I use to code. I want my notes to be only an editor command away, available wherever I am, and managed the same way I handle my code. That's why I created a vi-based setup for my personal knowledge base. In a nutshell: I use the vi plugin [Vimwiki][1] to manage my wiki locally on my laptop, I use Git to version it (and keep a central, updated version), and I use GitLab for online editing (for example, on my mobile device).
### Why it makes sense to use a wiki for note-keeping
I've tried many different tools to keep track of notes, write down fleeting thoughts, and structure tasks I shouldn't forget. These include offline notebooks (yes, that involves paper), special note-keeping software, and mind-mapping software.
All these solutions have positives, but none fit all of my needs. For example, [mind maps][2] are a great way to visualize what's in your mind (hence the name), but the tools I tried provided poor searching functionality. (The same thing is true for paper notes.) Also, it's often hard to read mind maps after time passes, so they don't work very well for long-term note keeping.
One day while setting up a [DokuWiki][3] for a collaboration project, I found that the wiki structure fits most of my requirements. With a wiki, you can create notes (like you would in any text editor) and create links between your notes. If a link points to a non-existent page (maybe because you wanted a piece of information to be on its own page but haven't set it up yet), the wiki will create that page for you. These features make a wiki a good fit for quickly writing things as they come to your mind, while still keeping your notes in a page structure that is easy to browse and search for keywords.
While this sounds promising, and setting up DokuWiki is not difficult, I found it a bit too much work to set up a whole wiki just for keeping track of my notes. After some research, I found Vimwiki, a vi plugin that does what I want. Since I use vi every day, keeping notes is very similar to editing code. Also, it's even easier to create a page in Vimwiki than DokuWiki: all you have to do is press Enter while your cursor hovers over a word. If there isn't already a page with that name, Vimwiki will create it for you.
To take my plan to use my everyday tools for note-keeping a step further, I'm not only using my favorite IDE to write notes but also my favorite code management tools, Git and GitLab, to distribute notes across my various machines and be able to access them online. I'm also using Markdown syntax in GitLab's online Markdown editor to write this article.
### Setting up Vimwiki
Installing Vimwiki is easy using your existing plugin manager: Just add `vimwiki/vimwiki` to your plugins. In my preferred plugin manager, Vundle, you just add the line `Plugin 'vimwiki/vimwiki'` in your `~/.vimrc` followed by a `:source ~/.vimrc|PluginInstall`.
Following is a piece of my `~/.vimrc` showing a bit of Vimwiki configuration. You can learn more about installing and using this tool on the [Vimwiki page][1].
```
let wiki_1 = {}
let wiki_1.path = '~/vimwiki_work_md/'
let wiki_1.syntax = 'markdown'
let wiki_1.ext = '.md'
let wiki_2 = {}
let wiki_2.path = '~/vimwiki_personal_md/'
let wiki_2.syntax = 'markdown'
let wiki_2.ext = '.md'
let g:vimwiki_list = [wiki_1, wiki_2]
let g:vimwiki_ext2syntax = {'.md': 'markdown', '.markdown': 'markdown', '.mdown': 'markdown'}
```
Another advantage of my approach, which you can see in the configuration, is that I can easily divide my personal and work-related notes without switching the note-keeping software. I want my personal notes accessible everywhere, but I don't want to sync my work-related notes to my private GitLab and computer. This was easier to set up in Vimwiki compared to the other software I tried.
The configuration tells Vimwiki there are two different wikis and that I want to use Markdown syntax in both (again, because I'm used to Markdown from my daily work). It also tells Vimwiki in which folders to store the wiki pages.
If you navigate to the folders where the wiki pages are stored, you will find your wiki's flat Markdown pages without any special Vimwiki context. That makes it easy to initialize a Git repository and sync your wiki to a central repository.
### Synchronizing your wiki to GitLab
The steps to check out a GitLab project to your local Vimwiki folder are nearly the same as you'd use for any GitHub repository. I just prefer to keep my notes in a private GitLab repository, so I keep a GitLab instance running for my personal projects.
GitLab has a wiki functionality that allows you to create wiki pages for your projects. Those wikis are Git repositories themselves. And they use Markdown syntax. You get where this is leading.
Just initialize the wiki you want to synchronize with the wiki of a project you created for your notes:
```
cd ~/vimwiki_personal_md/
git init
git remote add origin git@your.gitlab.com:your_user/vimwiki_personal_md.wiki
git add .
git commit -m "Initial commit"
git push -u origin master
```
These steps can be copied from the page where you land after creating a new project on GitLab. The only thing to change is the `.wiki` at the end of the repository URL (instead of `.git`), which tells it to clone the wiki repository instead of the project itself.
That's it! Now you can manage your notes with Git and edit them in GitLab's wiki user interface.
But maybe (like me) you don't want to manually create commits for every note you add to your notebook. To solve this problem, I use the Vim plugin [chazy/dirsettings][4]. I added a `.vimdir` file with the following content to `~/vimwiki_personal_md`:
```
:cd %:p:h
silent! !git pull > /dev/null
:e!
autocmd! BufWritePost * silent! !git add .;git commit -m "vim autocommit" > /dev/null; git push > /dev/null&
```
This pulls the latest version of my wiki every time I open a wiki file and publishes my changes after every `:w` command. Doing this should keep your local copy in sync with the central repo. If you have merge conflicts, you may need to resolve them (as usual).
For now, this is the way I interact with my knowledge base, and I'm quite happy with it. Please let me know what you think about this approach. And please share in the comments your favorite way to keep track of your notes.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/vimwiki-gitlab-notes
作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ntlx
[1]:http://vimwiki.github.io/
[2]:https://opensource.com/article/17/8/mind-maps-creative-dashboard
[3]:https://www.dokuwiki.org/dokuwiki
[4]:https://github.com/chazy/dirsettings

View File

@ -0,0 +1,153 @@
Anatomy of a perfect pull request
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_science.png?itok=WDKARWGV)
Writing clean code is just one of many factors you should care about when creating a pull request.
Large pull requests cause a big overhead during the code review and can facilitate bugs in the codebase.
That's why you need to care about the pull request itself. It should be short, have a clear title and description, and do only one thing.
### Why should you care?
* A good pull request will be reviewed quickly
* It reduces bug introduction into codebase
* It facilitates new developers onboarding
* It does not block other developers
* It speeds up the code review process and consequently, product development
### The size of the pull request
![](https://opensource.com/sites/default/files/uploads/devloper.png)
The first step to identifying problematic pull requests is to look for big diffs.
Several studies show that it is harder to find bugs when reviewing a lot of code.
In addition, large pull requests will block other developers who may be depending on the code.
#### How can we determine the perfect pull request size?
A [study of a Cisco Systems programming team][1] revealed that a review of 200-400 LOC over 60 to 90 minutes should yield 70-90% defect discovery.
With this number in mind, a good pull request should not have more than 250 lines of code changed.
![](https://opensource.com/sites/default/files/uploads/pull_request_size_view_time_0.png)
Image from [small business programming][2].
As shown in the chart above, pull requests with more than 250 lines of changes usually take more than one hour to review.
### Break down large pull requests into smaller ones
Feature breakdown is an art. The more you do it, the easier it gets.
What do I mean by feature breakdown?
Feature breakdown is understanding a big feature and breaking it into small pieces that make sense and that can be merged into the codebase piece by piece without breaking anything.
#### Learning by doing
Let's say that you need to create a subscribe feature on your app. It's just a form that accepts an email address and saves it.
Without knowing how your app works, I can already break it into eight pull requests:
* Create a model to save emails
* Create a route to receive requests
* Create a controller
* Create a service to save it in the database (business logic)
* Create a policy to handle access control
* Create a subscribe component (frontend)
* Create a button to call the subscribe component
* Add the subscribe button in the interface
As you can see, I broke this feature into many parts, most of which can be done simultaneously by different developers.
### Single responsibility principle
The single responsibility principle (SRP) is a computer programming principle that states that every [module][3] or [class][4] should have responsibility for a single part of the [functionality][5] provided by the [software][6], and that responsibility should be entirely [encapsulated][7] by the class.
Just like classes and modules, pull requests should do only one thing.
Following the SRP reduces the overhead caused by reviewing code that attempts to solve several problems.
Before submitting a PR for review, try applying the single responsibility principle. If the code does more than one thing, break it into other pull requests.
### Title and description matter
When creating a pull request, you should care about the title and the description.
Imagine that the code reviewer is joining your team today without knowing what is going on. He should be able to understand the changes.
![good_title_and_description.png][9]
What a good title and description look like
The image above shows [what a good title and description look like][10].
### The title of the pull request should be self-explanatory
The title should make clear what is being changed.
Here are some examples:
#### Make a useful description
* Describe what was changed in the pull request
* Explain why this PR exists
* Make it clear how it does what it sets out to do. For example, does it change a column in the database? How is this done? What happens to the old data?
* Use screenshots to demonstrate what has changed.
### Recap
#### Pull request size
The pull request must have a maximum of 250 lines of change.
#### Feature breakdown
Whenever possible, break pull requests into smaller ones.
#### Single Responsibility Principle
The pull request should do only one thing.
#### Title
Create a self-explanatory title that describes what the pull request does.
#### Description
Detail what was changed, why it was changed, and how it was changed.
_This article was originally posted at [Medium][11]. Reposted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/anatomy-perfect-pull-request
作者:[Hugo Dias][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/hugodias
[1]:https://smartbear.com/learn/code-review/best-practices-for-peer-code-review/
[2]:https://smallbusinessprogramming.com/optimal-pull-request-size/
[3]:https://en.wikipedia.org/wiki/Modular_programming
[4]:https://en.wikipedia.org/wiki/Class_%28computer_programming%29
[5]:https://en.wikipedia.org/wiki/Software_feature
[6]:https://en.wikipedia.org/wiki/Software
[7]:https://en.wikipedia.org/wiki/Encapsulation_(computer_programming)
[8]:/file/400671
[9]:https://opensource.com/sites/default/files/uploads/good_title_and_description.png (good_title_and_description.png)
[10]:https://github.com/rails/rails/pull/32865
[11]:https://medium.com/@hugooodias/the-anatomy-of-a-perfect-pull-request-567382bb6067

View File

@ -0,0 +1,85 @@
Stop merging your pull requests manually
======
![](https://julien.danjou.info/content/images/2018/06/github-branching.png)
If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt it.
Nevertheless, every day, there are thousands of developers using [GitHub][1] who are doing the same thing over and over again: they click on this button:
![Screen-Shot-2018-06-19-at-18.12.39][2]
This does not make any sense.
Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.
It does not make any sense because every development team in the world has a known list of prerequisites before they merge a pull request. Those requirements are almost always the same, and they're something along these lines:
* Is the test suite passing?
* Is the documentation up to date?
* Does this follow our code style guideline?
* Have N developers reviewed this?
As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while there were not enough developers who had reviewed the patch." Ring a bell?
In my team, we're like every team out there. We know what our criteria to merge some code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.
When those conditions are all set, I want the code to be merged.
Without clicking a single button.
That's exactly how [Mergify][3] started.
![github-branching-1][4]
[Mergify][3] is a service that pushes that merge button for you. You define rules in the `.mergify.yml` file of your repository, and when the rules are satisfied, Mergify merges the pull request.
No need to press any button.
Take a random pull request, like this one:
![Screen-Shot-2018-06-20-at-17.12.11][5]
This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should be already merged: but it's there, hanging, chilling, waiting for someone to push that merge button. Someday.
With [Mergify][3] enabled, you'd just have to put this `.mergify.yml` at the root of the repository:
```
rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1
```
With such a configuration, [Mergify][3] enforces the desired restrictions, i.e., Travis passes, and at least one project member reviewed the code. As soon as those conditions are met, the pull request is automatically merged.
We built [Mergify][3] as a **free service for open-source projects**. The [engine powering the service][6] is also open-source.
Now go [check it out][3] and stop letting those pull requests hang out one second more. Merge them!
If you have any questions, feel free to ask us or write a comment below! And stay tuned, because Mergify offers a few other features that I can't wait to talk about!
--------------------------------------------------------------------------------
via: https://julien.danjou.info/stop-merging-your-pull-request-manually/
作者:[Julien Danjou][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://julien.danjou.info/author/jd/
[1]:https://github.com
[2]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-19-at-18.12.39.png
[3]:https://mergify.io
[4]:https://julien.danjou.info/content/images/2018/06/github-branching-1.png
[5]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-20-at-17.12.11.png
[6]:https://github.com/mergifyio/mergify-engine

View File

@ -0,0 +1,134 @@
在开源项目中做出你的第一个贡献
============================================================
> 这是许多事情的第一步
![women programming](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G "women programming")
图片提供 : [WOCinTech Chat][16]. 图片修改 : Opensource.com. [CC BY-SA 4.0][17]
有一个普遍的误解,那就是对开源做出贡献是一件很难的事。你可能会想,“有时我甚至不能理解我自己的代码;那我怎么可能理解别人的?”
放轻松。直到去年,我都以为是这样。阅读和理解他人的代码,然后在其基础上编写你自己的代码,是一件令人气馁的任务;但如果有合适的资源,这并不像你想象的那么糟。
第一步要做的是选择一个项目。这个决定可能是一个菜鸟转变成一个老练的开源贡献者的关键一步。
许多对开源感兴趣的业余程序员都被建议从 [Git][18] 入手但这并不是最好的开始方式。Git 是由许多有着多年软件开发经验的超级极客维护的。它是寻找可以做贡献的开源项目的好地方,但对新手并不友好。大多数对 Git 做出贡献的开发者都有足够的经验,他们不需要参考各类资源或文档。在这篇文章里,我将提供一个对新手友好的特性的列表,并且给出一些建议,希望可以使你更轻松地对开源做出贡献。
### 理解产品
在开始贡献之前,你需要理解项目是怎么工作的。为了理解这一点,你需要自己来尝试。如果你发现这个产品很有趣并且有用,它就值得你来做贡献。
初学者常常选择参与贡献那些他们没有使用过的软件。他们会失望,并且最终放弃贡献。如果你没有用过这个软件,你不会理解它是怎么工作的。如果你不理解它是怎么工作的,你怎么能解决 bug 或添加新特性呢?
要记住:尝试它,才能改变它。
### 确认产品的状况
这个项目有多活跃?
如果你向一个暂停维护的项目提交一个<ruby>拉取请求<rt>pull request</rt></ruby>,你的请求可能永远不会被讨论或合并。找找那些活跃的项目,这样你的代码才能得到及时的反馈,你的贡献也就不会被浪费。
这里介绍了怎么确认一个项目是否还是活跃的:
* **贡献者数量:** 贡献者数量不断增加,表明开发者社区乐于接受新的贡献者。
* **<ruby>提交<rt>commit</rt></ruby>频率:** 查看最近的提交时间。如果是一周之内,甚至是一两个月内,这个项目应该是定期维护的。
* **维护者数量:** 维护者的数量越多,你越可能得到指导。
* **聊天室活动等级:** 一个繁忙的聊天室意味着你的问题可以更快得到回复。
### 新手资源
Coala 就是这样一个开源项目的例子。它有自己的教程和文档,让你可以了解它每一个类和方法的 API。这个网站还设计了一个吸引人的界面让你有阅读的兴趣。
**文档:** 所有水平的开发者都需要可靠的、维护良好的文档来理解项目的细节。找找那些在 [GitHub][19](或者项目托管的其他位置)上,或者在类似 [Read the Docs][20] 这样的独立站点上提供完善文档的项目,这样可以帮助你深入了解代码。
### [Coala 新手指南.png][2]
![Coala Newcomers' Guide screen](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala-newcomers_guide.png?itok=G7mfPbXN "Coala Newcomers' Guide screen")
**教程:** 教程会给新手解释如何在项目里添加特性,不过并不是每个项目都有。例如Coala 提供了[编写 bears 的教程][21]bears 是对执行代码分析的<ruby>代码检查<rt>linting</rt></ruby>工具的 Python 封装)。
### [Coala 界面.png][3]
![Coala UI](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala_ui.png?itok=LR02629W "Coala User Interface screenshot")
**添加了标签的<ruby>讨论点<rt>issue</rt></ruby>** 对刚刚想明白如何选择第一个项目的初学者来说,选择一个讨论点是一个更加困难的任务。标签被设为 “难度/低”、“难度/新手”、“利于初学者” 以及 “low-hanging fruit” 的讨论点,都表明它对新手友好。
### [Coala 讨论点标签.png][4]
![Coala labeled issues](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala_labeled_issues.png?itok=74qSjG_T "Coala labeled issues")
### 其他因素
### [ci_历史纪录.png][5]
![CI user pipeline log](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/ci_logs.png?itok=J3V8gbc7 "CI user pipeline log")
* **维护者对新的贡献者的态度:** 从我的经验来看,大部分开源贡献者都很乐于帮助他们项目里的新手。然而,当你问问题时,你也有可能遇到一些不太友好的人(甚至可能有点粗鲁)。不要因为这些人而失去信心,还有很多其他人愿意提供帮助。
* **审阅过程/结构:** 你的拉取请求会被你的同伴和有经验的开发者多次审阅和修改——这是你学习软件开发最主要的方式。一个具有严格审阅过程的项目,能让你在编写生产级代码的过程中成长为一名开发者。
* **一个稳健的<ruby>持续集成<rt>continuous integration</rt></ruby>管道:** 开源项目会向新手介绍持续集成和部署服务。一个稳健的 CI 管道将帮助你学习阅读和理解 CI 日志,也会带给你处理失败的测试用例和代码覆盖率问题的经验。
* **参加编程项目(例如 [Google Summer Of Code][1]** 参加这些项目证明了你乐于对一个项目的长期发展做出贡献。它们也会给新手提供获得真实世界开发经验并赚取报酬的机会。大多数参与这些项目的组织都欢迎新人加入。
### 7 个对新手友好的组织
* [coala (Python)][7]
* [oppia (Python, Django)][8]
* [DuckDuckGo (Perl, JavaScript)][9]
* [OpenGenus (JavaScript)][10]
* [Kinto (Python, JavaScript)][11]
* [FOSSASIA (Python, JavaScript)][12]
* [Kubernetes (Go)][13]
### 关于作者
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/img_20180309_001440858.jpg?itok=tG8yvrJF)][22] Palash Nigam - 我是一名印度的计算机科学专业本科生,十分乐于参与开源软件的开发,我在 GitHub 上花费了大部分的时间。我现在的兴趣包括 Web 后端开发、区块链,以及一切与 Python 相关的东西。[更多关于我][14]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/get-started-open-source-project
作者:[Palash Nigam][a]
译者:[lonaparte](https://github.com/lonaparte)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/palash25
[1]:https://en.wikipedia.org/wiki/Google_Summer_of_Code
[2]:https://opensource.com/file/391211
[3]:https://opensource.com/file/391216
[4]:https://opensource.com/file/391226
[5]:https://opensource.com/file/391221
[6]:https://opensource.com/article/18/4/get-started-open-source-project?rate=i_d2neWpbOIJIAEjQKFExhe0U_sC6SiQgkm3c7ck8IM
[7]:https://github.com/coala/coala
[8]:https://github.com/oppia/oppia
[9]:https://github.com/duckduckgo/
[10]:https://github.com/OpenGenus/
[11]:https://github.com/kinto
[12]:https://github.com/fossasia/
[13]:https://github.com/kubernetes
[14]:https://opensource.com/users/palash25
[15]:https://opensource.com/user/212436/feed
[16]:https://www.flickr.com/photos/wocintechchat/25171528213/
[17]:https://creativecommons.org/licenses/by/4.0/
[18]:https://git-scm.com/
[19]:https://github.com/
[20]:https://readthedocs.org/
[21]:http://api.coala.io/en/latest/Developers/Writing_Linter_Bears.html
[22]:https://opensource.com/users/palash25
[23]:https://opensource.com/users/palash25
[24]:https://opensource.com/users/palash25
[25]:https://opensource.com/article/18/4/get-started-open-source-project#comments
[26]:https://opensource.com/tags/web-development

View File

@ -0,0 +1,107 @@
如何在 Linux 和 Windows 之间共享文件?
=====
![](https://images.idgesg.net/images/article/2018/04/cats-eating-100755724-large.jpg)
现代很多人都工作在混合型的网络中Linux 和 Windows 系统都在其中扮演着重要的角色。在两者之间共享文件有时非常关键,而使用正确的工具做到这一点其实很容易。只需很少的功夫,你就可以将文件从 Windows 复制到 Linux或从 Linux 复制到 Windows。在这篇文章中我们将讨论如何配置你的 Linux 和 Windows 系统,让你可以轻松地把文件从一个操作系统传输到另一个。
### 在 Linux 和 Windows 之间复制文件
在 Windows 和 Linux 之间移动文件的第一步,是下载并安装诸如 PuTTY 的 pscp 之类的工具。你可以从 [putty.org][1] 获得它,并轻松地将其安装在 Windows 系统上。PuTTY 带有一个终端仿真器putty以及 **pscp** 这样的工具,用于在 Linux 和 Windows 系统之间安全地复制文件。当你进入 PuTTY 站点时,你可以选择安装全部工具,或者只安装你想要的工具,也可以直接下载单个 .exe 文件。
你还需要在你的 Linux 系统上安装并运行 SSH 服务器,它负责响应来自客户端Windows 端)的连接请求。如果你还没有安装 SSH 服务器,那么可以在 Debian 系统(包括 Ubuntu 等)上执行以下步骤:
```
sudo apt update
sudo apt install openssh-server
sudo service ssh start
```
对于 Red Hat 及其相关的 Linux 系统,使用类似的命令:
```
sudo yum install openssh-server
sudo systemctl start sshd
```
注意,如果你正在运行防火墙(例如 ufw你可能需要打开 22 端口以允许连接。
使用 **pscp** 命令,你可以将文件从 Windows 移到 Linux反之亦然。它的语法就是简单的“从哪里复制到哪里”。
#### 从 Windows 到 Linux
在下面显示的命令中,我们将 Windows 系统上用户账户中的文件复制到 Linux 系统下的 /tmp 目录。
```
C:\Program Files\PuTTY>pscp \Users\shs\copy_me.txt shs@192.168.0.18:/tmp
shs@192.168.0.18's password:
copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
```
#### 从 Linux 到 Windows
将文件从 Linux 转移到 Windows 也同样简单。只需把参数的顺序反过来即可。
```
C:\Program Files\PuTTY>pscp shs@192.168.0.18:/tmp/copy_me.txt \Users\shs
shs@192.168.0.18's password:
copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
```
如果满足以下两点,这个过程还可以变得更加顺畅轻松1 pscp 位于 Windows 的搜索路径中2 你的 Linux 系统已记录在 Windows 的 hosts 文件中。
#### Windows 搜索路径
如果你使用 PuTTY 安装程序安装 PuTTY 工具,你可能会发现 **C:\Program files\PuTTY** 已经位于 Windows 的搜索路径中。你可以在 Windows 命令提示符下键入 **echo %path%** 来检查是否如此(在搜索栏中键入 “cmd” 即可打开命令提示符)。如果是这样,你就不必关心自己在文件系统中相对于 pscp 可执行文件的位置,直接进入包含待传输文件的文件夹来操作可能会更容易。
```
C:\Users\shs>pscp copy_me.txt shs@192.168.0.18:/tmp
shs@192.168.0.18's password:
copy_me.txt | 0 kB | 0.1 kB/s | ETA: 00:00:00 | 100%
```
#### 更新你的 Windows hosts 文件
这是另一个小技巧。使用管理员权限,你可以将 Linux 系统添加到 Windows 的 hosts 文件C:\Windows\System32\drivers\etc\hosts中然后使用其主机名代替 IP 地址。请记住,如果你的 Linux 系统的 IP 地址是动态分配的,这个办法就不一定一直有效。
```
C:\Users\shs>pscp copy_me.txt shs@stinkbug:/tmp
shs@192.168.0.18's password:
hosts | 0 kB | 0.8 kB/s | ETA: 00:00:00 | 100%
```
请注意Windows 的 hosts 文件与 Linux 系统上的 /etc/hosts 文件格式相同IP 地址、空格、主机名。注释以井号(#)开头。
```
# Linux systems
192.168.0.18 stinkbug
```
#### 讨厌的行结尾符
请记住Windows 上文本文件中的行以回车符和换行符结束。pscp 工具不会删除回车符以使文件看起来像 Linux 文本文件,它只是原样复制文件。你可以考虑安装 **tofrodos** 包,它使你能够在 Linux 系统上使用 **fromdos****todos** 命令来转换在平台之间移动的文件。
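如果不想安装额外的软件包,也可以用标准的 `tr` 和 `awk` 命令完成同样的转换。以下是一个示意(其中的文件名仅作演示):

```
# 构造一个带有 Windows 行尾CR+LF的示例文件
printf 'hello\r\nworld\r\n' > win.txt

# 删除回车符CR得到 Unix 风格的文本,效果类似 fromdos
tr -d '\r' < win.txt > unix.txt

# 反向转换:为每一行补回回车符,效果类似 todos
awk '{ printf "%s\r\n", $0 }' unix.txt > win_again.txt
```

这两条命令在任何 POSIX 系统上都可以使用,无需额外安装软件。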
### 在 Windows 和 Linux 之间共享文件夹
共享文件夹是完全不同的操作。你是将 Windows 文件夹挂载到你的 Linux 系统上,或将 Linux 文件夹挂载到 Windows 系统上,使两个系统可以使用同一组文件,而不是将文件从一个系统复制到另一个系统。最好的工具之一就是 Samba它模拟 Windows 协议,并运行在 Linux 系统上。
一旦安装了 Samba你将能够将 Linux 文件夹挂载到 Windows 上或将 Windows 文件夹挂载到 Linux 上。当然,这与本文前面描述的复制文件有很大的不同。相反,这两个系统中的每一个都可以同时访问相同的文件。
关于选择在 Linux 和 Windows 系统之间共享文件的正确工具的更多提示可以在[这里][2]找到。
在 [Facebook][3] 和 [LinkedIn][4] 上加入网络世界社区,对最重要的话题发表评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3269189/linux/sharing-files-between-linux-and-windows.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.putty.org
[2]:https://www.infoworld.com/article/2617683/linux/linux-moving-files-between-unix-and-windows-systems.html
[3]:https://www.facebook.com/NetworkWorld/
[4]:https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,108 @@
密码学及公钥基础设施入门
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/locks_keys_bridge_paris.png?itok=Bp0dsEc9)
安全通信正迅速成为当今互联网的规范。从 2018 年 7 月起Google Chrome 将对**所有**使用 HTTP 传输(而不是 HTTPS 传输)的站点[显示“不安全”警告][1]。虽然密码学已经逐渐广为人知,但其本身并没有变得更容易理解。[Let's Encrypt][3] 设计并实现了一套令人惊叹的解决方案,可以提供免费的安全证书和周期性续签;但如果不了解底层概念和缺陷,你也不过是加入了类似“[<ruby>船货崇拜<rt>cargo cult</rt></ruby>][4]”的技术崇拜的程序员大军。
### 安全通信的特性
密码学最直观明显的目标是<ruby>保密性<rt>confidentiality</rt></ruby><ruby>消息<rt>message</rt></ruby>传输过程中不会被窥探内容。为了保密性,我们对消息进行加密:对于给定消息,我们结合一个<ruby>密钥<rt>key</rt></ruby>生成一个无意义的乱码,只有通过相同的密钥逆转加密过程(即解密过程)才能将其转换为可读的消息。假设我们有两个朋友 [Alice 和 Bob][5],以及他们的<ruby>八卦<rt>nosy</rt></ruby>邻居 Eve。Alice 加密类似“Eve 很讨厌”的消息,将其发送给 Bob期间不用担心 Eve 会窥探到这条消息的内容。
对于真正的安全通信,保密性是不够的。假如 Eve 收集了足够多 Alice 和 Bob 之间的消息,发现单词“Eve”被加密为“Xyzzy”。除此之外Eve 还知道 Alice 和 Bob 正在准备一个派对Alice 会将访客名单发送给 Bob。如果 Eve 拦截了消息并将“Xyzzy”加到访客列表的末尾那么她已经成功地破坏了这个派对。因此Alice 和 Bob 需要他们之间的通信可以提供<ruby>完整性<rt>integrity</rt></ruby>:消息应该不会被篡改。
而且我们还有一个问题有待解决。假如 Eve 观察到 Bob 打开了标记为“来自 Alice”的信封信封中包含一条来自 Alice 的消息“再买一加仑冰淇淋”。Eve 看到 Bob 外出,回家时带着冰淇淋,这样虽然 Eve 并不知道消息的完整内容但她对消息有了大致的了解。Bob 将上述消息丢弃,但 Eve 找出了它并在下一周中的每一天都向 Bob 的邮箱中投递一封标记为“来自 Alice”的信封内容拷贝自之前 Bob 丢弃的那封信。这样到了派对的时候冰淇淋严重超量派对当晚结束后Bob 分发剩余的冰淇淋Eve 带着免费的冰淇淋回到家。消息是加密的,完整性也没问题,但 Bob 被误导了,没有认出发信人的真实身份。<ruby>身份认证<rt>Authentication</rt></ruby>这个特性用于保证你正在通信的人的确是其声称的那样。
信息安全还有[其它特性][6],但保密性、完整性和身份验证是你必须了解的三大特性。
### 加密和加密算法
加密都包含哪些部分呢?首先,需要一条消息,我们称之为<ruby>明文<rt>plaintext</rt></ruby>。接着,需要对明文做一些格式上的初始化,以便用于后续的加密过程(例如,假如我们使用<ruby>分组加密算法<rt>block cipher</rt></ruby>,需要在明文尾部填充使其达到特定长度)。下一步,需要一个保密的比特序列,我们称之为<ruby>密钥<rt>key</rt></ruby>。然后,使用该密钥,通过一种加密算法将明文转换为<ruby>密文<rt>ciphertext</rt></ruby>。密文看上去像是随机噪声,只有通过相同的加密算法和相同的密钥(在后面提到的非对称加密算法情况下,是另一个数学上相关的密钥)才能恢复为明文。
LCTT 译注cipher 一般被翻译为密码,但其具体表达的意思是加密算法,这里采用加密算法的翻译)
加密算法使用密钥加密明文。考虑到希望能够解密密文,我们用到的加密算法也必须是<ruby>可逆的<rt>reversible</rt></ruby>。作为简单示例,我们可以使用 [XOR][7]。该算子是可逆的而且逆算子就是其本身P ^ K = C; C ^ K = P故可同时用于加密和解密。该算子最简单的应用是<ruby>一次性密码本<rt>one-time pad</rt></ruby>,但这一般而言并不[可行][9]。不过,可以将 XOR 与一个基于单个密钥生成<ruby>任意随机数据流<rt>arbitrary stream of random data</rt></ruby>的函数结合起来。现代加密算法 AES 和 ChaCha20 就是这么设计的。
我们把加密和解密使用同一个密钥的加密算法称为<ruby>对称加密算法<rt>symmetric cipher</rt></ruby>。对称加密算法分为<ruby>流加密算法<rt>stream ciphers</rt></ruby>和分组加密算法两类。流加密算法依次对明文中的每个比特或字节进行加密。例如,我们上面提到的 XOR 加密算法就是一个流加密算法。流加密算法适用于明文长度未知的情形,例如数据从管道或 socket 传入。[RC4][10] 是最为人知的流加密算法但在多种不同的攻击面前比较脆弱以至于最新版本1.3)的 TLS“HTTPS”中的“S”已经不再支持该加密算法。[人们正努力][11]创建新的流加密算法,候选算法 [ChaCha20][12] 已经被 TLS 支持。
分组加密算法对固定长度的分组,使用固定长度的密钥加密。在分组加密算法领域,排行第一的是 [<ruby>先进加密标准<rt>Advanced Encryption Standard, AES</rt></ruby>][13],使用的分组长度为 128 比特。分组包含的数据并不多,因而分组加密算法包含一个[工作模式][14],用于描述如何对任意长度的明文执行分组加密。最简单的工作模式是 [<ruby>电子密码本<rt>Electronic Code Book, ECB</rt></ruby>][15],将明文按分组大小划分成多个分组(在必要情况下,填充最后一个分组),使用密钥独立的加密各个分组。
![](https://opensource.com/sites/default/files/uploads/ecb_encryption.png)
这里我们留意到一个问题:如果相同的分组在明文中出现多次(例如互联网流量中的 "GET / HTTP/1.1" 词组),由于我们使用相同的密钥加密分组,我们会得到相同的加密结果。我们的安全通信中会出现一种<ruby>模式规律<rt>pattern</rt></ruby>,容易受到攻击。
因此还有很多高级的工作模式,例如 [<ruby>密码分组链接<rt>Cipher Block Chaining, CBC</rt></ruby>][16],其中每个分组的明文在加密前会与前一个分组的密文进行 XOR 操作,而第一个分组的明文与一个随机数构成的初始化向量进行 XOR 操作。还有其它一些工作模式,在安全性和执行速度方面各有优缺点。甚至还有 Counter (CTR) 这种工作模式,可以将分组加密算法转换为流加密算法。
![](https://opensource.com/sites/default/files/uploads/cbc_encryption.png)
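上面的对称加密流程可以用 OpenSSL 命令行直观地体验。下面是一个最小示意(假设系统已安装 `openssl`,密钥与 IV 均为演示用随机值):

```
# 生成随机密钥256 比特)和初始化向量128 比特),均以十六进制表示
key=$(openssl rand -hex 32)
iv=$(openssl rand -hex 16)

printf 'GET / HTTP/1.1' > plaintext.txt

# 使用 AES-256-CBC 工作模式加密,密文看上去像随机噪声
openssl enc -aes-256-cbc -K "$key" -iv "$iv" -in plaintext.txt -out ciphertext.bin

# 只有使用相同的密钥和 IV 才能恢复明文
openssl enc -d -aes-256-cbc -K "$key" -iv "$iv" -in ciphertext.bin -out decrypted.txt

cmp plaintext.txt decrypted.txt && echo '解密结果与明文一致'
```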
除了对称加密算法,还有<ruby>非对称加密算法<rt>asymmetric ciphers</rt></ruby>,也被称为<ruby>公钥密码学<rt>public-key cryptography</rt></ruby>。这类加密算法使用两个密钥:一个<ruby>公钥<rt>public key</rt></ruby>,一个<ruby>私钥<rt>private key</rt></ruby>。公钥和私钥在数学上有一定关联,但可以区分二者。经过公钥加密的密文只能通过私钥解密,经过私钥加密的密文可以通过公钥解密。公钥可以大范围分发出去,但私钥必须对外不可见。如果你希望和一个给定的人通信,你可以使用对方的公钥加密消息,这样只有他们的私钥可以解密出消息。在非对称加密算法领域,目前 [RSA][17] 最具有影响力。
非对称加密算法最主要的缺陷是,它们是<ruby>计算密集型<rt>computationally expensive</rt></ruby>的。那么使用对称加密算法可以让身份验证更快吗?如果你只与一个人共享密钥,答案是肯定的。但这种方式很快就会失效。假如一群人希望使用对称加密算法进行两两通信,如果对每对成员通信都采用单独的密钥,一个 20 人的群体将有 190 对成员通信,即每个成员要维护 19 个密钥并确认其安全性。如果使用非对称加密算法,每个成员仅需确保自己的私钥安全并维护一个公钥列表即可。
非对称加密算法也有加密[数据长度][18]限制。类似于分组加密算法,你需要将长消息进行划分。但实际应用中,非对称加密算法通常用于建立<ruby>机密<rt>confidential</rt></ruby><ruby>已认证<rt>authenticated</rt></ruby><ruby>通道<rt>channel</rt></ruby>利用该通道交换对称加密算法的共享密钥。考虑到速度优势对称加密算法用于后续的通信。TLS 就是严格按照这种方式运行的。
### 基础
安全通信的核心在于随机数。随机数用于生成密钥并为<ruby>确定性过程<rt>deterministic processes</rt></ruby>提供不可预测性。如果我们使用的密钥是可预测的,那我们从一开始就可能受到攻击。计算机被设计成按固定规则操作,因此生成随机数是比较困难的。计算机可以收集鼠标移动或<ruby>键盘计时<rt>keyboard timings</rt></ruby>这类随机数据。但收集随机性(也叫<ruby>信息熵<rt>entropy</rt></ruby>)需要花费不少时间,而且涉及额外处理以确保<ruby>均匀分布<rt>uniform distribution</rt></ruby>。甚至可以使用专用硬件,例如[<ruby>熔岩灯<rt>lava lamps</rt></ruby>墙][19]等。一般而言,一旦有了一个真正的随机数值,我们可以将其用作<ruby>种子<rt>seed</rt></ruby>,使用<ruby>密码安全的伪随机数生成器<rt>cryptographically secure pseudorandom number generator</rt></ruby>生成随机数。使用相同的种子,同一个随机数生成器生成的随机数序列保持不变,但重要的是随机数序列是无规律的。在 Linux 内核中,[/dev/random 和 /dev/urandom][21] 工作方式如下:从多个来源收集信息熵,进行<ruby>无偏处理<rt>remove biases</rt></ruby>,生成种子,然后生成随机数,该随机数可用于 RSA 密钥生成等。
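在 Linux 上可以直接体验这两种随机数来源(示意,假设系统提供 `/dev/urandom``openssl`

```
# 从内核随机数设备读取 32 字节,并转换为十六进制
key1=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')

# 或者使用 OpenSSL 内置的密码安全伪随机数生成器
key2=$(openssl rand -hex 32)

echo "$key1"
echo "$key2"
```

两个命令各自给出 64 个十六进制字符(即 256 比特),可以直接用作对称加密密钥。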
### 其它密码学组件
我们已经实现了保密性,但还没有考虑完整性和身份验证。对于后两者,我们需要使用一些额外的技术。
首先是<ruby>密码散列函数<rt>cryptographic hash function</rt></ruby>,该函数接受任意长度的输入并给出固定长度的输出(一般称为<ruby>摘要<rt>digest</rt></ruby>)。如果我们找到两条消息,其摘要相同,我们称之为<ruby>碰撞<rt>collision</rt></ruby>,对应的散列函数就不适合用于密码学。这里需要强调一下“找到”:考虑到消息的条数是无限的而摘要的长度是固定的,那么总是会存在碰撞;但如果无需海量的计算资源,我们总是能找到发生碰撞的消息对,那就令人比较担心了。更严重的情况是,对于每一个给定的消息,都能找到与之碰撞的另一条消息。
另外,哈希函数必须是<ruby>单向的<rt>one-way</rt></ruby>:给定一个摘要,反向计算对应的消息在计算上不可行。相应的,这类[条件][22]被称为<ruby>碰撞阻力<rt>collision resistance</rt></ruby><ruby>第二原象抗性<rt>second preimage resistance</rt></ruby><ruby>原象抗性<rt>preimage resistance</rt></ruby>。如果满足这些条件,摘要可以用作消息的指纹。[理论上][23]不存在具有相同指纹的两个人,而且你无法使用指纹反向找到其对应的人。
如果我们同时发送消息及其摘要,接收者可以使用相同的哈希函数独立计算摘要。如果两个摘要相同,可以认为消息没有被篡改。考虑到 [SHA-1][25] 已经变得[有些过时][26],目前最流行的密码散列函数是 [SHA-256][24]。
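散列函数的这些性质可以在终端里直观感受(示意,假设系统提供 `sha256sum`

```
# 对两条只差一个单词的消息分别计算 SHA-256 摘要
d1=$(printf 'Attack at dawn' | sha256sum | cut -d' ' -f1)
d2=$(printf 'Attack at dusk' | sha256sum | cut -d' ' -f1)

echo "$d1"
echo "$d2"
# 两个摘要的长度都固定为 64 个十六进制字符256 比特),但内容截然不同
```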
散列函数看起来不错,但如果有人可以同时篡改消息及其摘要,那么消息发送仍然是不安全的。我们需要将哈希与加密算法结合起来。在对称加密算法领域,我们有<ruby>消息认证码<rt>message authentication codes, MACs</rt></ruby>技术。MACs 有多种形式,但<ruby>哈希消息认证码<rt>hash message authentication codes, HMAC</rt></ruby> 这类是基于哈希的。[HMAC][27] 使用哈希函数 H 处理密钥 K、消息 M公式为 H(K + H(K + M)),其中 "+" 代表<ruby>连接<rt>concatenation</rt></ruby>。公式的独特之处并不在本文讨论范围内,大致来说与保护 HMAC 自身的完整性有关。发送加密消息的同时也发送 MAC。Eve 可以任意篡改消息,但一旦 Bob 独立计算 MAC 并与接收到的 MAC 做比较,就会发现消息已经被篡改。
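下面用 `openssl dgst``-hmac` 选项演示 MAC 的校验思路(一个简化示意;密钥与消息均为演示用假设,实际密钥应随机生成):

```
key='s3cret-demo-key'
msg='Party guest list: Alice, Bob'

# 发送方计算 HMAC-SHA256随消息一起发送
mac=$(printf '%s' "$msg" | openssl dgst -sha256 -hmac "$key" -r | cut -d' ' -f1)

# 接收方对收到的消息独立计算 MAC 并比较;消息一旦被篡改MAC 就对不上
mac2=$(printf '%s' "$msg, Eve" | openssl dgst -sha256 -hmac "$key" -r | cut -d' ' -f1)
[ "$mac" != "$mac2" ] && echo 'MAC 不匹配,消息已被篡改'
```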
在非对称加密算法领域,我们有<ruby>数字签名<rt>digital signatures</rt></ruby>技术。如果使用 RSA使用公钥加密的内容只能通过私钥解密反过来也是如此。这种机制可用于创建一种签名如果只有我持有私钥并用其加密文档那么只有我的公钥可以解密该文档大家就可以默认这份文档是我写的这就是一种身份验证。事实上我们无需加密整个文档只需生成文档的摘要对这个指纹加密即可。对摘要签名比对整个文档签名要快得多而且可以解决非对称加密存在的消息长度限制问题。接收者解密出摘要信息独立计算消息的摘要并进行比对可以确保消息的完整性。对于不同的非对称加密算法数字签名的方法也各不相同但核心都是使用公钥来检验已有签名。
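“对摘要签名、用公钥验证”的完整流程可以用 OpenSSL 走一遍(示意,密钥与文件名均为演示用假设):

```
# 生成 2048 比特 RSA 私钥,并导出对应的公钥
openssl genrsa -out alice.pem 2048 2>/dev/null
openssl rsa -in alice.pem -pubout -out alice_pub.pem 2>/dev/null

printf 'Buy more ice cream' > msg.txt

# 用私钥签名openssl 会先计算消息的 SHA-256 摘要,再对摘要签名
openssl dgst -sha256 -sign alice.pem -out msg.sig msg.txt

# 任何持有公钥的人都可以验证签名,验证通过时输出 "Verified OK"
openssl dgst -sha256 -verify alice_pub.pem -signature msg.sig msg.txt
```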
### 汇总
现在,我们已经有了全部的主体组件,可以用其实现一个我们期待的、具有全部三个特性的[<ruby>体系<rt>system</rt></ruby>][28]。Alice 选取一个保密的对称加密密钥并使用 Bob 的公钥进行加密。接着她对得到的密文进行哈希并使用其私钥对摘要进行签名。Bob 接收到密文和签名,一方面独立计算密文的摘要,另一方面使用 Alice 的公钥解密签名中的摘要如果两个摘要相同他可以确信对称加密密钥没有被篡改且通过了身份验证。Bob 使用私钥解密密文得到对称加密密钥,接着使用该密钥及 HMAC 与 Alice 进行保密通信,这样每一条消息的完整性都得到保障。但该体系没有办法抵御消息重放攻击(我们在 Eve 造成的冰淇淋灾难中见过这种攻击)。要解决重放攻击,我们需要使用某种类型的“<ruby>握手<rt>handshake</rt></ruby>”建立随机、短期的<ruby>会话标识符<rt>session identifier</rt></ruby>
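其中“用非对称加密交换对称密钥”这一核心步骤,可以用 OpenSSL 粗略地模拟(一个简化示意,省略了签名与 HMAC文件名均为演示用假设

```
# Bob 生成 RSA 密钥对,并把公钥分发给 Alice
openssl genrsa -out bob.pem 2048 2>/dev/null
openssl rsa -in bob.pem -pubout -out bob_pub.pem 2>/dev/null

# Alice选取保密的对称密钥用 Bob 的公钥加密后发送
openssl rand -hex 32 > session.key
openssl pkeyutl -encrypt -pubin -inkey bob_pub.pem -in session.key -out session.key.enc

# Bob用私钥解密得到同一个对称密钥后续通信即可使用对称加密
openssl pkeyutl -decrypt -inkey bob.pem -in session.key.enc -out session.key.dec

cmp session.key session.key.dec && echo '双方现在共享同一个对称密钥'
```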
密码学的世界博大精深,我希望这篇文章能让你对密码学的核心目标及其组件有一个大致的了解。这些概念为你打下坚实的基础,让你可以继续深入学习。
感谢 Hubert KarioFlorian Weimer 和 Mike Bursell 在本文写作过程中提供的帮助。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/cryptography-pki
作者:[Alex Wood][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/awood
[1]:https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html
[2]:https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/
[3]:https://letsencrypt.org/
[4]:https://en.wikipedia.org/wiki/Cargo_cult_programming
[5]:https://en.wikipedia.org/wiki/Alice_and_Bob
[6]:https://en.wikipedia.org/wiki/Information_security#Availability
[7]:https://en.wikipedia.org/wiki/XOR_cipher
[8]:https://en.wikipedia.org/wiki/Involution_(mathematics)#Computer_science
[9]:https://en.wikipedia.org/wiki/One-time_pad#Problems
[10]:https://en.wikipedia.org/wiki/RC4
[11]:https://en.wikipedia.org/wiki/ESTREAM
[12]:https://en.wikipedia.org/wiki/Salsa20
[13]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[14]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
[15]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:ECB_encryption.svg
[16]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:CBC_encryption.svg
[17]:https://en.wikipedia.org/wiki/RSA_(cryptosystem)
[18]:https://security.stackexchange.com/questions/33434/rsa-maximum-bytes-to-encrypt-comparison-to-aes-in-terms-of-security
[19]:https://www.youtube.com/watch?v=1cUUfMeOijg
[20]:https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
[21]:https://www.2uo.de/myths-about-urandom/
[22]:https://crypto.stackexchange.com/a/1174
[23]:https://www.telegraph.co.uk/science/2016/03/14/why-your-fingerprints-may-not-be-unique/
[24]:https://en.wikipedia.org/wiki/SHA-2
[25]:https://en.wikipedia.org/wiki/SHA-1
[26]:https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
[27]:https://en.wikipedia.org/wiki/HMAC
[28]:https://en.wikipedia.org/wiki/Hybrid_cryptosystem

如何在 Linux 中使用 history 命令
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_)
随着我在终端中花费的时间越来越多,我感觉自己在不断地寻找能让日常任务更加高效的新命令。GNU 的 `history` 命令就是一个真正改变我日常工作的命令。
GNU `history` 命令保存了从该终端会话运行的所有其他命令的列表,然后允许你重放或者重用这些命令,而不用重新输入它们。如果你是一个资深玩家,你自然知道 `history` 的力量;但对于我们这些半吊子或新手系统管理员来说,`history` 能立竿见影地提高生产力。
### History 101
要查看 `history`,请在 Linux 中打开终端程序,然后输入:
```
$ history
```
这是我得到的响应:
```
1  clear
2  ls -al
3  sudo dnf update -y
4  history
```
`history` 命令显示自开始会话后输入的命令列表。 `history` 有趣的地方是你可以使用以下命令重放任意一个命令:
```
$ !3
```
提示符中的 `!3` 告诉 shell 重新运行历史列表中的第 3 个命令。我还可以只输入某条历史命令的开头部分,来重新运行它:
```
linuser@my_linux_box: !sudo dnf
```
`history` 将搜索与你提供的模式相匹配的最后一个命令并运行它。
### 搜索历史
你还可以输入 `!!` 重新运行上一条命令。而且,通过与 `grep` 配合,你可以搜索与文本模式相匹配的命令;或者通过与 `tail` 配合,你可以找到最后执行的几条命令。例如:
```
$ history | grep dnf
3  sudo dnf update -y
5  history | grep dnf
$ history | tail -n 3
4  history
5  history | grep dnf
6  history | tail -n 3
```
另一种实现这个功能的方法是输入 `Ctrl-R` 来调用你的命令历史记录的递归搜索。输入后,提示变为:
```
(reverse-i-search)`':
```
现在你可以开始输入一个命令,并且会显示匹配的命令,按回车键执行。
### 更改已执行的命令
`history` 还允许你使用不同的语法重新运行命令。例如,如果我想改变我以前的命令 `history | grep dnf``history | grep ssh`,我可以在提示符下执行以下命令:
```
$ ^dnf^ssh^
```
`history` 将重新运行该命令,但用 `ssh` 替换 `dnf`,并执行它。
### 删除历史
有时你想要删除一些或全部的历史记录。如果要删除特定命令,请输入 `history -d <行号>`。要清空历史记录,请执行 `history -c`
命令历史保存在一个你可以修改的文件中。bash shell 用户可以在自己的家目录下找到 `.bash_history` 文件。
### 下一步
你可以使用 `history` 做许多其他事情:
* 将历史缓冲区设置为一定数量
* 记录历史中每行的日期和时间
* 防止某些命令被记录在历史记录中
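以上几项都可以通过 bash 的环境变量来控制。下面是一份可以放入 `~/.bashrc` 的示例设置(示意;具体数值和忽略列表均为演示用假设,变量含义以 bash 手册为准):

```
export HISTSIZE=10000                 # 内存中保留的历史条数
export HISTFILESIZE=20000             # ~/.bash_history 文件中保留的条数
export HISTTIMEFORMAT='%F %T '        # 在 history 输出中为每条命令加上日期和时间
export HISTCONTROL=ignoreboth         # 忽略重复命令以及以空格开头的命令
export HISTIGNORE='ls:history:pwd'    # 这些命令不会被记录
```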
有关 `history` 命令的更多信息和其他有趣的事情,请参考 [GNU Bash 手册][1]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/history-command
作者:[Steve Morris][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/smorris12
[1]:https://www.gnu.org/software/bash/manual/

不像 MySQL 的 MySQLMySQL 文档存储介绍
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg)
MySQL 可以提供 NoSQL JSON <ruby>文档存储<rt>Document Store</rt></ruby>了,这样开发者保存数据前无需<ruby>规范化<rt>normalize</rt></ruby>数据、创建数据库,也无需在开发之前就制定好数据样式。从 MySQL 5.7 版本和 MySQL 8.0 版本开始,开发者可以在表的一列中存储 JSON 文档。借助新引入的 X DevAPI你可以从代码中移除令人不爽的结构化查询字符串改为使用支持现代编程设计的 API 调用。
系统学习过结构化查询语言SQL<ruby>关系理论<rt>relational theory</rt></ruby>和其它关系数据库底层理论的开发者并不多,但他们仍然需要一个安全可靠的数据存储。如果数据库管理人员不足,事情很快就会变得一团糟。
[MySQL 文档存储][1] 允许开发者跳过底层数据结构创建、数据规范化和其它使用传统数据库时需要做的工作,直接存储数据。只需创建一个 JSON <ruby>文档集合<rt>document collection</rt></ruby>,接着就可以使用了。
### JSON 数据类型
所有这一切都基于几年前 MySQL 5.7 引入的 JSON 数据类型。该数据类型允许在表的一行中使用一个大约可存储 1GB 数据的列。数据必须是有效的 JSON否则服务器会报错但开发者可以自由使用这些空间。
### X DevAPI
旧的 MySQL 协议已经历经差不多四分之一个世纪,已经显现出疲态,因此开发出了名为 [X DevAPI][2] 的新协议。新协议引入了高级会话概念,允许代码从单台服务器扩展到多台,并使用符合<ruby>通用主机编程语言样式<rt>common host-language programming patterns</rt></ruby>的非阻塞异步 I/O。其关注点是在遵循现代实践和编码风格的同时使用 CRUDcreate、replace、update、delete样式。换句话说你不再需要在你精美、淳朴的代码中嵌入丑陋的 SQL 语句字符串。
### 代码示例
一个新的 shell 支持这种新协议,即所谓的 [MySQL Shell][3]。该 shell 可用于设置<ruby>高可用集群<rt>high-availability clusters</rt></ruby>、检查服务器<ruby>升级就绪状态<rt>upgrade readiness</rt></ruby>以及与 MySQL 服务器交互。支持的交互方式有以下三种JavaScript、Python 和 SQL。
下面的代码示例基于 JavaScript 方式使用 MySQL Shell可以从 `JS>` 提示符看出。
下面,我们将使用用户 `dstokes` 、密码 `password` 登录本地系统上的 `demo` 库。`db` 是一个指针,指向 demo 库。
```
$ mysqlsh dstokes:password@localhost/demo
JS> db.createCollection("example")
JS> db.example.add(
      {
        Name: "Dave",
        State:  "Texas",
        foo : "bar"
      }
     )
JS>
```
在上面的示例中,我们登录服务器,连接到 `demo` 库,创建了一个名为 `example` 的集合,最后插入一条记录;整个过程无需创建表,也无需使用 SQL。只要你能想象的到你可以使用甚至滥用这些数据。这不是一种代码对象与关系语句之间的映射器因为并没有将代码映射为 SQL新协议直接与服务器层打交道。
### Node.js 支持
新 shell 看起来挺不错,你可以用其完成很多工作;但你可能更希望使用你选用的编程语言。下面的例子使用 `world_x` 示例数据库,搜索 `_id` 字段匹配 “CAN” 的记录。我们指定数据库中的特定集合,使用特定参数调用 `find` 命令。同样地,操作也不涉及 SQL。
```
var mysqlx = require('@mysql/xdevapi');
mysqlx.getSession({             //Auth to server
        host: 'localhost',
        port: '33060',
        dbUser: 'root',
        dbPassword: 'password'
}).then(function (session) {    // use world_x.country.info
     var schema = session.getSchema('world_x');
     var collection = schema.getCollection('countryinfo');
collection                      // Get row for 'CAN'
  .find("$._id == 'CAN'")
  .limit(1)
  .execute(doc => console.log(doc))
  .then(() => console.log("\n\nAll done"));
  session.close();
})
```
下面例子使用 PHP搜索 `_id` 字段匹配 "USA" 的记录:
```
<?PHP
// Connection parameters
  $user = 'root';
  $passwd = 'S3cret#';
  $host = 'localhost';
  $port = '33060';
  $connection_uri = 'mysqlx://'.$user.':'.$passwd.'@'.$host.':'.$port;
  echo $connection_uri . "\n";
// Connect as a Node Session
  $nodeSession = mysql_xdevapi\getNodeSession($connection_uri);
// "USE world_x" schema
  $schema = $nodeSession->getSchema("world_x");
// Specify collection to use
  $collection = $schema->getCollection("countryinfo");
// SELECT * FROM world_x WHERE _id = "USA"
  $result = $collection->find('_id = "USA"')->execute();
// Fetch/Display data
  $data = $result->fetchAll();
  var_dump($data);
?>
```
可以看出,在上面两个使用不同编程语言的例子中,`find` 操作符的用法基本一致。这种一致性对跨语言编程的开发者有很大帮助,对试图降低新语言学习成本的开发者也不无裨益。
支持的语言还包括 C、Java、Python 和 JavaScript 等,未来还会支持更多语言。
### 从两种方式受益
如果我告诉你,使用 NoSQL 方式录入的数据也可以用 SQL 方式访问,你会不会感到惊讶?换句话说,新引入的 NoSQL 方式同样可以访问旧式关系型表中的数据。现在,你可以以多种方式使用 MySQL 服务器:作为 SQL 服务器、作为 NoSQL 服务器,或者同时作为两者。
Dave Stokes 将于 6 月 8-10 日在北卡罗来纳州 Charlotte 市举行的 [Southeast LinuxFest][4] 大会上做“不用 SQL 的 MySQL我的天哪”主题演讲。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/mysql-document-store
作者:[Dave Stokes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/davidmstokes
[1]:https://www.mysql.com/products/enterprise/document_store.html
[2]:https://dev.mysql.com/doc/x-devapi-userguide/en/
[3]:https://dev.mysql.com/downloads/shell/
[4]:http://www.southeastlinuxfest.org/