Merge pull request #6 from LCTT/master

UPDATE
This commit is contained in:
Morisun029 2019-10-22 20:39:16 +08:00 committed by GitHub
commit 5bd749a0d8
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
78 changed files with 7695 additions and 3369 deletions

View File

@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11467-1.html)
[#]: subject: (Guide to Install VMware Tools on Linux)
[#]: via: (https://itsfoss.com/install-vmware-tools-linux)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
在 Linux 上安装 VMware 工具
======
> VMware 工具通过允许你共享剪贴板和文件夹以及其他东西来提升你的虚拟机体验。了解如何在 Ubuntu 和其它 Linux 发行版上安装 VMware 工具。
![如何在 Linux 上安装 VMware 工具][4]
在先前的教程中,你学习了[在 Ubuntu 上安装 VMware 工作站][1]。你还可以通过安装 VMware 工具进一步提升你的虚拟机功能。
如果你已经在 VMware 上安装了一个访客机系统,你可能已经注意到了 [VMware 工具][2]这个要求 —— 尽管你可能并不清楚它到底是什么。
在本文中,我们将要强调 VMware 工具的重要性、所提供的特性,以及在 Ubuntu 和其它 Linux 发行版上安装 VMware 工具的方法。
### VMware 工具:概览及特性
![在 Ubuntu 上安装 VMware 工具][3]
出于显而易见的理由,虚拟机(你的访客机系统)并不能做到与宿主机完全一致的表现,其性能和操作会受到特定的限制,这就是引入 VMware 工具的原因。
VMware 工具在以高效的方式提升虚拟机性能的同时,也可以帮助管理访客机系统。
#### VMware 工具到底负责什么?
你大致知道它可以做什么,但让我们探讨一下细节:
* 同步访客机系统与宿主机系统间的时间以简化操作
* 提供从宿主机系统向访客机系统传递消息的能力。比如说,你可以复制文字到剪贴板,并将它轻松粘贴到你的访客机系统
* 在访客机系统上启用声音
* 提升访客机视频分辨率
* 修正错误的网络速度数据
* 减少不合适的色深
在访客机系统上安装 VMware 工具会给它带来显著改变,但是它到底包含了什么特性才解锁或提升了这些功能呢?让我们来看看……
#### VMware 工具:核心特性细节
![用 VMware 工具在宿主机系统与访客机系统间共享剪切板][5]
如果你不想知道它包含了什么来启用这些功能,可以跳过这部分。但是为了满足好奇的读者,让我们简短地讨论一下:
**VMware 设备驱动:** 它具体取决于操作系统。大多数主流操作系统都默认包含了设备驱动,因此你不必另外安装它。这主要涉及到内存控制驱动、鼠标驱动、音频驱动、网卡驱动、VGA 驱动以及其它。
**VMware 用户进程:** 这是这里真正有意思的地方。通过它你获得了在访客机和宿主机间复制粘贴和拖拽的能力。基本上,你可以从宿主机复制粘贴文本到虚拟机,反之亦然。
你同样也可以拖拽文件。此外,在你未安装 SVGA 驱动时它会启用鼠标指针的释放/锁定。
**VMware 工具生命周期管理:** 嗯,我们会在下面看看如何安装 VMware 工具,但是这个特性帮你在虚拟机中轻松安装/升级 VMware 工具。
**共享文件夹:** 除了这些VMware 工具同样允许你在访客机与宿主机系统间共享文件夹。
![使用 VMware 工具在访客机与宿主机系统间共享文件][6]
当然,它的效果同样取决于访客机系统。例如,在 Windows 访客机上,你可以通过 Unity 模式运行虚拟机上的程序,并从宿主机系统上操作它。
### 如何在 Ubuntu 和其它 Linux 发行版上安装 VMware 工具
**注意:** 对于 Linux 访客机系统,大多数发行版已经预装了 “Open VM 工具”,多数情况下无需再额外安装 VMware 工具。
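如果你的发行版没有预装它,通常可以通过包管理器安装。下面以基于 Ubuntu/Debian 的访客机为例(示意命令,假设访客机可以联网):
```
sudo apt update
sudo apt install open-vm-tools
# 如果访客机运行图形桌面,可再安装桌面增强组件(剪贴板、拖拽等)
sudo apt install open-vm-tools-desktop
```
安装完成后重启访客机,即可获得与 VMware 工具类似的大部分功能。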
大部分时候,当你安装了访客机系统时,如果操作系统支持 [Easy Install][7] 的话你会收到软件更新或弹窗告诉你要安装 VMware 工具。
Windows 和 Ubuntu 都支持 Easy Install。因此如果你使用 Windows 作为你的宿主机或尝试在 Ubuntu 上安装 VMware 工具,你应该会看到一个和弹窗消息差不多的选项来轻松安装 VMware 工具。这是它应该看起来的样子:
![安装 VMware 工具的弹窗][8]
这是搞定它最简便的办法。因此当你配置虚拟机时确保你有一个通畅的网络连接。
如果你没收到任何弹窗或者轻松安装 VMware 工具的选项,就需要手动安装它。以下是如何去做:
1. 运行 VMware Workstation Player。
2. 从菜单导航至 “Virtual Machine -> Install VMware tools”。如果你已经安装了它并想修复安装你会看到 “Re-install VMware tools” 这一选项出现。
3. 一旦你点击了,你就会看到一个虚拟 CD/DVD 挂载在访客机系统上。
4. 打开该 CD/DVD复制那个 tar.gz 文件到任何你选择的位置并解压,这里我们选择“桌面”作为解压目的地。
![][9]
5. 在解压后,运行终端并通过输入以下命令导航至里面的文件夹:
```
cd Desktop/VMwareTools-10.3.2-9925305/vmware-tools-distrib
```
你需要检查文件夹与路径名,这取决于版本与解压目的地,名字可能会改变。
![][10]
用你的存储位置(如“下载”)替换“桌面”,如果你安装的也是 10.3.2 版本,其它的保持一样即可。
6. 现在仅需输入以下命令开始安装:
```
sudo ./vmware-install.pl -d
```
![][11]
你会被询问密码以获得安装权限,输入密码然后应当一切都搞定了。
到此为止,你搞定了。这一系列步骤应当适用于大部分基于 Ubuntu 的访客机系统。如果你想要在 Ubuntu 服务器上或其它系统安装 VMware 工具,步骤也应该类似。
### 总结
在 Ubuntu Linux 上安装 VMware 工具应该挺简单。除了简单办法,我们也详述了手动安装的方法。如果你仍需帮助或者对安装有任何建议,在评论区评论让我们知道。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-vmware-tools-linux
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[2]: https://kb.vmware.com/s/article/340
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-downloading.jpg?fit=800%2C531&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/install-vmware-tools-linux.png?resize=800%2C450&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-features.gif?resize=800%2C500&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-shared-folder.jpg?fit=800%2C660&ssl=1
[7]: https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/15.0/com.vmware.player.linux.using.doc/GUID-3F6B9D0E-6CFC-4627-B80B-9A68A5960F60.html
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools.jpg?fit=800%2C481&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-extraction.jpg?fit=800%2C564&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-folder.jpg?fit=800%2C487&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-installation-ubuntu.jpg?fit=800%2C492&ssl=1

View File

@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11461-1.html)
[#]: subject: (Blockchain 2.0 Introduction To Hyperledger Fabric [Part 10])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
区块链 2.0Hyperledger Fabric 介绍(十)
======
![Hyperledger Fabric][1]
### Hyperledger Fabric
[Hyperledger 项目][2] 是一个伞形组织,包括许多正在开发的不同模块和系统。在这些子项目中,最受欢迎的是 “Hyperledger Fabric”。这篇博文将探讨那些一旦区块链系统开始进入主流应用、就会使 Fabric 在不久的将来变得几乎不可或缺的功能。最后,我们还将快速了解开发人员和爱好者们需要了解的有关 Hyperledger Fabric 技术的知识。
### 起源
按照 Hyperledger 项目的常规方式Fabric 由其核心成员之一 IBM “捐赠”给该组织,而 IBM 以前是它的主要开发者。由 IBM 共享的这个技术平台在 Hyperledger 项目中进行了联合开发,来自 100 多个成员公司和机构为之做出了贡献。
目前Fabric 的 LTS 版本是 v1.4,该版本已经发展了很长一段时间,并且被视为企业管理业务数据的解决方案。Hyperledger 项目的核心愿景也必然会渗透到 Fabric 中。Hyperledger Fabric 系统继承了所有企业级的可扩展功能,这些功能已深深地刻入到 Hyperledger 组织旗下所有的项目当中。
### Hyperledger Fabric 的亮点
Hyperledger Fabric 提供了多种功能和标准,这些功能和标准围绕着支持快速开发和模块化体系结构的使命而构建。此外,与竞争对手(主要是瑞波和[以太坊][3]相比Fabric 明确用于封闭的[许可区块链][4]。它们的核心目标是开发一套工具,这些工具将帮助区块链开发人员创建定制的解决方案,而不是创建独立的生态系统或产品。
Hyperledger Fabric 的一些亮点如下:
#### 许可区块链系统
这是 Hyperledger Fabric 与其他平台(如以太坊和瑞波)差异很大的一个地方。默认情况下Fabric 是一种旨在实现私有许可区块链的工具。此类区块链不能被所有人访问,并且其中致力于达成共识或验证交易的节点将由中央机构进行选择。这对于某些应用(例如银行和保险)可能很重要,在这些应用中,交易必须由中央机构而不是参与者来验证。
#### 机密和受控的信息流
Fabric 内置了权限系统,该权限系统会视情况限制特定组或某些个人可见的信息流。与公有区块链不同(在公有区块链中,任何运行节点的人都可以复制并有选择地访问存储在区块链中的数据Fabric 系统的管理员可以选择谁能访问共享的信息,以及访问的方式。与现有竞争产品相比,它还有一个能以更高的安全标准对存储的数据进行加密的子系统。
#### 即插即用架构
Hyperledger Fabric 具有即插即用类型的体系结构。可以选择性地实施系统的各个组件,而开发人员看不到用处的系统组件可以被弃用。Fabric 采取高度模块化和可定制的方式进行开发,而不是其竞争对手所采用的“一种方法适应所有需求”的方式。对于希望快速构建精益系统的公司和组织而言,这尤其有吸引力。这与 Fabric 和其它 Hyperledger 组件的互操作性相结合,意味着开发人员和设计人员现在可以使用各种标准化工具,而不必从其他来源提取代码并随后进行集成。它还提供了一种相当可靠的方式来构建健壮的模块化系统。
#### 智能合约和链码
运行在区块链上的分布式应用程序称为[智能合约][5]。虽然智能合约这个术语或多或少与以太坊平台相关联,但<ruby>链码<rt>chaincode</rt></ruby>是 Hyperledger 阵营中为其赋予的名称。链码应用程序除了拥有 DApp 中有的所有优点之外,使 Hyperledger 与众不同的是,该应用程序的代码可以用多种高级编程语言编写。它本身支持 [Go][6] 和 JavaScript并且在与适当的编译器模块集成后还支持许多其它编程语言。尽管这一事实在此时可能并不代表什么但这意味着如果可以将现有人才用于正在进行的涉及区块链的项目从长远来看这有可能为公司节省数十亿美元的人员培训和管理费用。开发人员可以使用自己喜欢的语言进行编码从而在 Hyperledger Fabric 上开始构建应用程序,而无需学习或培训平台特定的语言和语法。这提供了 Hyperledger Fabric 当前竞争对手无法提供的灵活性。
### 总结
* Hyperledger Fabric 是一个后端驱动程序平台,主要面向需要区块链或其它分布式账本技术的集成项目。因此,除了次要的脚本功能外,它不提供任何面向用户的服务。(你可以认为它更像是一种脚本语言。)
* Hyperledger Fabric 支持针对特定用例构建侧链。如果开发人员希望将一组用户或参与者隔离到应用程序的特定部分或功能,则可以通过侧链来实现。侧链是从主链衍生出来的区块链,但在其初始块之后形成不同的链。产生新链的块将不受新链进一步变化的影响,即使将新信息添加到原始链中,新链也将保持不变。此功能将有助于扩展正在开发的平台,并引入用户特定的和案例特定的处理功能。
* 前面的功能还意味着并非所有用户都会像通常对公有链所期望的那样拥有区块链中所有数据的“精确”副本。参与节点将具有仅与之相关的数据副本。例如,假设有一个类似于印度的 PayTM 的应用程序,该应用程序具有钱包功能以及电子商务功能。但是,并非所有的钱包用户都使用 PayTM 在线购物。在这种情况下,只有活跃的购物者将在 PayTM 电子商务网站上拥有相应的交易链,而钱包用户将仅拥有存储钱包交易的链的副本。这种灵活的数据存储和检索体系结构在扩展时非常重要,因为大量的单链区块链已经显示出会增加处理交易的前置时间。这样可以保持链的精简和分类。
我们将在以后的文章中详细介绍 Hyperledger Project 下的其他模块。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/
作者:[sk][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Hyperledger-Fabric-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
[5]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[6]: https://www.ostechnix.com/install-go-language-linux/

View File

@ -0,0 +1,151 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11478-1.html)
[#]: subject: (What is a Java constructor?)
[#]: via: (https://opensource.com/article/19/6/what-java-constructor)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
什么是 Java 构造器?
======
> 构造器是编程的强大组件。使用它们来释放 Java 的全部潜力。
![](https://img.linux.net.cn/data/attachment/album/201910/18/230523hdx7sy804xdtxybb.jpg)
在开源、跨平台编程领域Java 是无可争议的重量级语言。尽管有许多[伟大的跨平台][2][框架][3],但很少有像 [Java][4] 那样统一和直接的。
当然Java 也是一种非常复杂的语言具有自己的微妙之处和惯例。Java 中与<ruby>构造器<rt>constructor</rt></ruby>有关的最常见问题之一是:它们是什么,它们的作用是什么?
简而言之:构造器是在 Java 中创建新<ruby>对象<rt>object</rt></ruby>时执行的操作。当 Java 应用程序创建一个你编写的类的实例时,它将检查构造器。如果(该类)存在构造器,则 Java 在创建实例时将运行构造器中的代码。这几句话中包含了大量的技术术语,但是当你看到它的实际应用时就会更加清楚,所以请确保你已经[安装了 Java][5] 并准备好进行演示。
### 没有使用构造器的开发日常
如果你正在编写 Java 代码那么你已经在使用构造器了即使你可能不知道它。Java 中的所有类都有一个构造器因为即使你没有创建构造器Java 也会在编译代码时为你生成一个。但是,为了进行演示,请忽略 Java 提供的隐藏构造器(因为默认构造器不添加任何额外的功能),并观察没有显式构造器的情况。
假设你正在编写一个简单的 Java 掷骰子应用程序,因为你想为游戏生成一个伪随机数。
首先,你可以创建骰子类来表示一个骰子。你玩了很久[《龙与地下城》][6],所以你决定创建一个 20 面的骰子。在这个示例代码中,变量 `dice` 是整数 20表示可能的最大掷骰数一个 20 面骰子的掷骰数不能超过 20。变量 `roll` 是最终的随机数的占位符,`rand` 用作随机数种子。
```
import java.util.Random;

public class DiceRoller {
    private int dice = 20;
    private int roll;
    private Random rand = new Random();
```
接下来,在 `DiceRoller` 类中创建一个函数,以执行计算机模拟掷骰子所必须采取的步骤:从 `rand` 中获取一个整数并将其分配给 `roll` 变量,考虑到 Java 从 0 开始计数但 20 面的骰子没有 0 值的情况,`roll` 再加 1然后打印结果。
```
public void Roller() {
    roll = rand.nextInt(dice);
    roll += 1;
    System.out.println(roll);
}
```
最后,产生 `DiceRoller` 类的实例并调用其关键函数 `Roller`
```
    // main loop
    public static void main(String[] args) {
        System.out.printf("You rolled a ");
        DiceRoller App = new DiceRoller();
        App.Roller();
    }
}
```
只要你安装了 Java 开发环境(如 [OpenJDK][10]),你就可以在终端上运行你的应用程序:
```
$ java dice.java
You rolled a 12
```
在本例中,没有显式构造器。这是一个非常有效和合法的 Java 应用程序,但是它有一点局限性。例如,如果你把游戏《龙与地下城》放在一边,晚上去玩一些《快艇骰子》,你将需要六面骰子。在这个简单的例子中,更改代码不会有太多的麻烦,但是在复杂的代码中这不是一个现实的选择。解决这个问题的一种方法是使用构造器。
### 构造器的作用
这个示例项目中的 `DiceRoller` 类表示一个虚拟骰子工厂:当它被调用时,它创建一个虚拟骰子,然后进行“滚动”。然而,通过编写一个自定义构造器,你可以让掷骰子的应用程序询问你希望模拟哪种类型的骰子。
大部分代码都是一样的,除了构造器接受一个表示面数的数字参数。这个数字还不存在,但稍后将创建它。
```
import java.util.Random;

public class DiceRoller {
    private int dice;
    private int roll;
    private Random rand = new Random();

    // constructor
    public DiceRoller(int sides) {
        dice = sides;
    }
```
模拟滚动的函数保持不变:
```
public void Roller() {
    roll = rand.nextInt(dice);
    roll += 1;
    System.out.println(roll);
}
```
代码的主要部分负责处理运行应用程序时提供的参数。如果这是一个复杂的应用程序,你需要仔细解析参数并检查意外结果,但对于这个例子,唯一的预防措施是将参数字符串转换成整数类型。
```
public static void main(String[] args) {
    System.out.printf("You rolled a ");
    DiceRoller App = new DiceRoller(Integer.parseInt(args[0]));
    App.Roller();
}
```
启动这个应用程序,并提供你希望骰子具有的面数:
```
$ java dice.java 20
You rolled a 10
$ java dice.java 6
You rolled a 2
$ java dice.java 100
You rolled a 44
```
构造器已接受你的输入,因此在创建类实例时,会将 `sides` 变量设置为用户指定的任何数字。
构造器是编程中功能强大的组件。练习使用它们,以释放 Java 的全部潜力。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/what-java-constructor
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/17/4/pyqt-versus-wxpython
[4]: https://opensource.com/resources/java
[5]: https://openjdk.java.net/install/index.html
[6]: https://opensource.com/article/19/5/free-rpg-day
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: https://openjdk.java.net/
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer

View File

@ -0,0 +1,257 @@
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11480-1.html)
[#]: subject: (How to Install and Configure PostgreSQL on Ubuntu)
[#]: via: (https://itsfoss.com/install-postgresql-ubuntu/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
如何在 Ubuntu 上安装和配置 PostgreSQL
======
> 本教程中,你将学习如何在 Ubuntu Linux 上安装和使用开源数据库 PostgreSQL。
[PostgreSQL][1](又名 Postgres是一个功能强大的自由开源的关系型数据库管理系统[RDBMS][2]),其在可靠性、稳定性、性能方面获得了业内极高的声誉。它旨在处理各种规模的任务。它是跨平台的,而且是 [macOS Server][3] 的默认数据库。
如果你喜欢简单易用的 SQL 数据库管理器,那么 PostgreSQL 将是一个正确的选择。PostgreSQL 在兼容标准 SQL 的同时提供了许多附加特性,还可以被用户大量扩展,用户可以添加数据类型、函数并执行更多的操作。
之前我曾论述过 [在 Ubuntu 上安装 MySQL][4]。在本文中,我将向你展示如何安装和配置 PostgreSQL以便你随时可以使用它来满足你的任何需求。
![][5]
### 在 Ubuntu 上安装 PostgreSQL
PostgreSQL 可以从 Ubuntu 主存储库中获取。然而,和许多其它开发工具一样,它可能不是最新版本。
首先在终端中使用 [apt 命令][7] 检查 [Ubuntu 存储库][6] 中可用的 PostgreSQL 版本:
```
apt show postgresql
```
在我的 Ubuntu 18.04 中,它显示 PostgreSQL 的可用版本是 1010+190 表示版本 10而 PostgreSQL 版本 11 已经发布。
```
Package: postgresql
Version: 10+190
Priority: optional
Section: database
Source: postgresql-common (190)
Origin: Ubuntu
```
根据这些信息,你可以自主决定是安装 Ubuntu 提供的版本,还是获取 PostgreSQL 的最新发行版。
我将向你介绍这两种方法:
#### 方法一:通过 Ubuntu 存储库安装 PostgreSQL
在终端中,使用以下命令安装 PostgreSQL
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
根据提示输入你的密码,依据你的网速情况,程序将在几秒到几分钟内安装完成。说到这一点,你可以随时了解 [Ubuntu 中的网络带宽监测工具][8]。
> 什么是 postgresql-contrib?
> postgresql-contrib 或者说 contrib 包,包含一些不属于 PostgreSQL 核心包的实用工具和功能。在大多数情况下,最好将 contrib 包与 PostgreSQL 核心一起安装。
#### 方法二:在 Ubuntu 中安装最新版本的 PostgreSQL 11
要安装 PostgreSQL 11你需要在 `sources.list` 中添加官方 PostgreSQL 存储库和证书,然后从那里安装它。
不用担心,这并不复杂,只需按照以下步骤操作。
首先添加 GPG 密钥:
```
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
```
现在,使用以下命令添加存储库。如果你使用的是 Linux Mint则必须手动替换你的 Mint 所基于的 Ubuntu 版本号:
```
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
```
现在一切就绪。使用以下命令安装 PostgreSQL
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
> PostgreSQL GUI 应用程序
> 你也可以安装用于管理 PostgreSQL 数据库的 GUI 应用程序pgAdmin
> `sudo apt install pgadmin4`
### PostgreSQL 配置
你可以通过执行以下命令来检查 PostgreSQL 是否正在运行:
```
service postgresql status
```
通过 `service` 命令,你可以启动、关闭或重启 `postgresql`。输入 `service postgresql` 并按回车将列出所有选项。
默认情况下PostgreSQL 会创建一个拥有所有权限的特殊用户 `postgres`。要实际使用 PostgreSQL你必须先登录该账户
```
sudo su postgres
```
你的提示符会更改为类似于以下的内容:
```
postgres@ubuntu-VirtualBox:/home/ubuntu$
```
现在,使用 `psql` 来启动 PostgreSQL Shell
```
psql
```
你应该会看到如下提示符:
```
postgres=#
```
你可以输入 `\q` 以退出,输入 `\?` 获取帮助。
要查看现有的所有数据库,输入如下命令:
```
\l
```
输出内容类似于下图所示(按 `q` 键退出该视图):
![PostgreSQL Tables][10]
使用 `\du` 命令,你可以查看 PostgreSQL 用户:
![PostgreSQLUsers][11]
你可以使用以下命令更改任何用户(包括 `postgres`)的密码:
```
ALTER USER postgres WITH PASSWORD 'my_password';
```
**注意:**将 `postgres` 替换为你要更改的用户名,`my_password` 替换为所需要的密码。另外,不要忘记每条命令后面的 `;`(分号)。
建议你另外创建一个用户(不建议使用默认的 `postgres` 用户)。为此,请使用以下命令:
```
CREATE USER my_user WITH PASSWORD 'my_password';
```
运行 `\du`,你将看到该用户。但是,`my_user` 用户此时没有任何属性。来让我们给它添加超级用户权限:
```
ALTER USER my_user WITH SUPERUSER;
```
你可以使用以下命令删除用户:
```
DROP USER my_user;
```
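创建好用户后,你通常还需要一个归该用户所有的数据库。除了在 psql 中执行 SQL 之外,也可以退出后用 PostgreSQL 自带的命令行工具来完成(示意命令,`my_db` 是假设的数据库名):
```
sudo -u postgres createdb -O my_user my_db
```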
要使用其他用户登录,使用 `\q` 命令退出,然后使用以下命令登录:
```
psql -U my_user
```
你可以使用 `-d` 参数直接连接数据库:
```
psql -U my_user -d my_db
```
你可以使用其他已存在的用户调用 PostgreSQL。例如我使用 `ubuntu` 用户。要登录,从终端执行以下命令:
```
psql -U ubuntu -d postgres
```
**注意:**你必须指定一个数据库(默认情况下,它将尝试将你连接到与登录的用户名相同的数据库)。
如果遇到如下错误:
```
psql: FATAL: Peer authentication failed for user "my_user"
```
确保以正确的用户身份登录,并使用管理员权限编辑 `/etc/postgresql/11/main/pg_hba.conf`
```
sudo vim /etc/postgresql/11/main/pg_hba.conf
```
**注意:**用你的版本替换 `11`(例如 `10`)。
对如下所示的一行进行替换:
```
local all postgres peer
```
替换为:
```
local all postgres md5
```
然后重启 PostgreSQL
```
sudo service postgresql restart
```
使用 PostgreSQL 与使用其他 SQL 类型的数据库相同。由于本文旨在帮助你进行初步的设置,因此不涉及具体的命令。不过,这里有个[非常有用的命令速查表Gist][12] 可供参考!另外,手册(`man psql`)和[文档][13] 也非常有用。
### 总结
希望本文能指导你完成在 Ubuntu 系统上安装和准备 PostgreSQL 的过程。如果你不熟悉 SQL你应该阅读[基本的 SQL 命令][15]。
如果你有任何问题或疑惑,请随时在评论部分提出。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-postgresql-ubuntu/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://www.postgresql.org/
[2]: https://www.codecademy.com/articles/what-is-rdbms-sql
[3]: https://www.apple.com/in/macos/server/
[4]: https://itsfoss.com/install-mysql-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-postgresql-ubuntu.png?resize=800%2C450&ssl=1
[6]: https://itsfoss.com/ubuntu-repositories/
[7]: https://itsfoss.com/apt-command-guide/
[8]: https://itsfoss.com/network-speed-monitor-linux/
[9]: https://itsfoss.com/fix-gvfsd-smb-high-cpu-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_tables.png?fit=800%2C303&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_users.png?fit=800%2C244&ssl=1
[12]: https://gist.github.com/Kartones/dd3ff5ec5ea238d4c546
[13]: https://www.postgresql.org/docs/manuals/
[14]: https://itsfoss.com/sync-any-folder-with-dropbox/
[15]: https://itsfoss.com/basic-sql-commands/

View File

@ -0,0 +1,273 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11468-1.html)
[#]: subject: (Mutation testing is the evolution of TDD)
[#]: via: (https://opensource.com/article/19/8/mutation-testing-evolution-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
变异测试是测试驱动开发TDD的演变
======
> 测试驱动开发技术是根据大自然的运作规律创建的,变异测试自然成为 DevOps 演变的下一步。
![Ants and a leaf making the word "open"][1]
在《[故障是无懈可击的开发运维中的一个特点][2]》一文中,我讨论了故障在通过征求反馈来交付优质产品的过程中所起到的重要作用。敏捷 DevOps 团队就是用故障来指导他们并推动开发进程的。<ruby>[测试驱动开发][3]<rt>Test-driven development</rt></ruby>TDD是任何敏捷 DevOps 团队评估产品交付的[必要条件][4]。以故障为中心的 TDD 方法仅在与可量化的测试配合使用时才有效。
TDD 方法仿照大自然是如何运作的以及自然界在进化博弈中是如何产生赢家和输家为模型而建立的。
### 自然选择
![查尔斯·达尔文][5]
1859 年,<ruby>[查尔斯·达尔文][6]<rt>Charles Darwin</rt></ruby>在他的《<ruby>[物种起源][7]<rt>On the Origin of Species</rt></ruby>》一书中提出了进化论学说。达尔文的论点是,自然变异是由生物个体的自发突变和环境压力共同造成的。环境压力淘汰了适应性较差的生物体,而有利于其他适应性强的生物的发展。每个生物体的染色体都会发生变异,而这些自发的变异会传递给下一代(后代)。然后,新出现的变异会接受自然选择的检验 —— 也就是由多变的环境条件所造成的、当下存在的环境压力。
这张简图说明了调整适应环境条件的过程。
![环境压力对鱼类的影响][8]
*图 1. 不同的环境压力导致自然选择下的不同结果。图片截图来源于[理查德·道金斯的一个视频][9]。*
该图显示了一群生活在自己栖息地的鱼。栖息地各不相同(海底或河床底部的砾石颜色有深有浅),每条鱼长得也各不相同(鱼身图案和颜色也有深有浅)。
这张图还显示了两种情况(即环境压力的两种变化):
1. 捕食者在场
2. 捕食者不在场
在第一种情况下,在砾石颜色衬托下容易凸显出来的鱼被捕食者捕获的风险更高。当砾石颜色较深时,浅色鱼的数量会更少一些。反之亦然,当砾石颜色较浅时,深色鱼的数量会更少。
在第二种情况下,鱼完全放松下来进行交配。在没有捕食者、而有交配仪式的情况下,可以预料到相反的结果:在砾石背景下显眼的鱼会有更大的机会被选来交配,并将其特性传递给后代。
### 选择标准
变异性在进行选择时,绝不是任意的、反复无常的、异想天开的或随机的。选择过程中的决定性因素通常是可以度量的。该决定性因素通常称为测试或目标。
一个简单的数学例子可以说明这一决策过程。(在该示例中,这种选择不是由自然选择决定的,而是由人为选择决定。)假设有人要求你构建一个小函数,该函数将接受一个正数,然后计算该数的平方根。你将怎么做?
敏捷 DevOps 团队的方法是快速验证失败。谦虚一点,先承认自己并不真的知道如何开发该函数。这时,你所知道的就是如何描述你想做的事情。从技术上讲,你已准备好进行单元测试。
“<ruby>单元测试<rt>unit test</rt></ruby>”描述了你的具体期望结果是什么。它可以简单地表述为“给定数字 16我希望平方根函数返回数字 4”。你可能知道 16 的平方根是 4但是你不知道一些较大数字例如 533的平方根。
但至少,你已经制定了选择标准,即你的测试或你的期望值。
### 进行故障测试
[.NET Core][10] 平台可以演示该测试。.NET 通常使用 xUnit.net 作为单元测试框架。(要跟随进行这个代码示例,请先安装 .NET Core 和 xUnit.net。
打开命令行并创建一个文件夹,在该文件夹中实现平方根解决方案。例如,输入:
```
mkdir square_root
```
再输入:
```
cd square_root
```
为单元测试创建一个单独的文件夹:
```
mkdir unit_tests
```
进入 `unit_tests` 文件夹下(`cd unit_tests`),初始化 xUnit 框架:
```
dotnet new xunit
```
现在,转到 `square_root` 下,创建 `app` 文件夹:
```
mkdir app
cd app
```
如果有必要的话,为你的代码创建一个脚手架:
```
dotnet new classlib
```
现在打开你最喜欢的编辑器开始编码!
在你的代码编辑器中,导航到 `unit_tests` 文件夹,打开 `UnitTest1.cs`
`UnitTest1.cs` 中自动生成的代码替换为:
```
using System;
using Xunit;
using app;

namespace unit_tests {
    public class UnitTest1 {
        Calculator calculator = new Calculator();

        [Fact]
        public void GivenPositiveNumberCalculateSquareRoot() {
            var expected = 4;
            var actual = calculator.CalculateSquareRoot(16);
            Assert.Equal(expected, actual);
        }
    }
}
```
该单元测试描述了变量的**期望值**应该为 4。下一行描述了**实际值**,建议通过将输入值发送到称为 `calculator` 的组件来计算**实际值**。对该组件的描述是:它通过接收数值来处理 `CalculateSquareRoot` 消息。该组件尚未开发,但这并不重要,我们在此只是描述期望值。
最后,描述了触发消息发送时发生的情况。此时,判断**期望值**是否等于**实际值**。如果是,则测试通过,目标达成。如果**期望值**不等于**实际值**,则测试失败。
接下来,要实现称为 `calculator` 的组件,请在 `app` 文件夹中创建一个新文件,并将其命名为 `Calculator.cs`。要实现计算平方根的函数,请在此新文件中添加以下代码:
```
namespace app {
    public class Calculator {
        public double CalculateSquareRoot(double number) {
            double bestGuess = number;
            return bestGuess;
        }
    }
}
```
在测试之前,你需要通知单元测试如何找到该新组件(`Calculator`)。导航至 `unit_tests` 文件夹,打开 `unit_tests.csproj` 文件。在 `<ItemGroup>` 代码块中添加以下代码:
```
<ProjectReference Include="../app/app.csproj" />
```
保存 `unit_tests.csproj` 文件。现在,你可以运行第一个测试了。
切换到命令行,进入 `unit_tests` 文件夹。运行以下命令:
```
dotnet test
```
运行单元测试,会输出以下内容:
![单元测试失败后xUnit的输出结果][12]
*图 2. 单元测试失败后 xUnit 的输出结果*
正如你所看到的,单元测试失败了。期望将数字 16 发送到 `calculator` 组件后会输出数字 4但实际输出`Actual`)的是 16。
恭喜你!创建了第一个故障。单元测试为你提供了强有力的反馈机制,敦促你修复故障。
### 修复故障
要修复故障,你必须要改进 `bestGuess`。当下,`bestGuess` 仅获取函数接收的数字并返回。这不够好。
但是,如何找到一种计算平方根值的方法呢? 我有一个主意 —— 看一下大自然母亲是如何解决问题的。
### 效仿大自然的迭代
在第一次(也是唯一的)尝试中要得出正确值是非常难的(几乎不可能)。你必须允许自己进行多次尝试猜测,以增加解决问题的机会。允许多次尝试的一种方法是进行迭代。
要迭代,就要将 `bestGuess` 值存储在 `previousGuess` 变量中,转换 `bestGuess` 的值,然后比较两个值之间的差。如果差为 0则说明问题已解决。否则继续迭代。
这是生成任何正数的平方根的函数体:
```
double bestGuess = number;
double previousGuess;

do {
    previousGuess = bestGuess;
    bestGuess = (previousGuess + (number / previousGuess)) / 2;
} while ((bestGuess - previousGuess) != 0);

return bestGuess;
```
该循环(迭代)将 `bestGuess` 值集中到设想的解决方案。现在,你精心设计的单元测试通过了!
![单元测试通过了][13]
*图 3. 单元测试通过了。*
### 迭代解决了问题
正如大自然母亲解决问题的方法,在本练习中,迭代解决了问题。增量方法与逐步改进相结合是获得满意解决方案的有效方法。该示例中的决定性因素是具有可衡量的目标和测试。一旦有了这些,就可以继续迭代直到达到目标。
### 关键点!
好的,这是一个有趣的试验,但是更有趣的发现来自于使用这种新创建的解决方案。到目前为止,`bestGuess` 从开始一直把函数接收到的数字作为输入参数。如果更改 `bestGuess` 的初始值会怎样?
为了测试这一点,你可以测试几种情况。首先,观察迭代尝试计算 25 的平方根时的逐步求精过程:
![25 平方根的迭代编码][14]
*图 4. 通过迭代来计算 25 的平方根。*
以 25 作为 `bestGuess` 的初始值,该函数需要八次迭代才能计算出 25 的平方根。但是,如果在设定 `bestGuess` 初始值时犯下荒谬的错误,那会怎么样?第二次尝试100 万会是 25 的平方根吗?在这种明显错误的情况下会发生什么?你写的函数能够处理这种低级错误吗?
直接来吧。回到测试中来,这次以一百万开始:
![逐步求精法][15]
*图 5. 在计算 25 的平方根时,运用逐步求精法,以 100 万作为 bestGuess 的初始值。*
哇!以一个荒谬的数字开始,迭代次数仅增加了约两倍(从 8 次到 23 次),增长幅度没有你直觉中预期的那么大。
### 故事的寓意
啊哈!当你意识到迭代不仅能够保证解决问题,而且与你对解决方案的初始猜测是好是坏无关时,这就是一个顿悟时刻。不论你最初理解得多么不正确,迭代过程以及可衡量的测试/目标,都可以使你走上正确的道路并得到解决方案。
图 4 和图 5 显示了陡峭而戏剧性的燃尽过程:从一个非常错误的开始,迭代很快就收敛到了绝对正确的解决方案。
简而言之,这种神奇的方法就是敏捷 DevOps 的本质。
### 回到一些更深层次的观察
敏捷 DevOps 的实践源于人们对所生活的世界的认知。我们生活的世界存在不确定性、不完整性以及充满太多的困惑。从科学/哲学的角度来看,这些特征得到了<ruby>[海森堡的不确定性原理][16]<rt>Heisenberg's Uncertainty Principle</rt></ruby>(涵盖不确定性部分),<ruby>[维特根斯坦的逻辑论哲学][17]<rt>Wittgenstein's Tractatus Logico-Philosophicus</rt></ruby>(歧义性部分),<ruby>[哥德尔的不完全性定理][18]<rt>Gödel's incompleteness theorems</rt></ruby>(不完全性方面)以及<ruby>[热力学第二定律][19]<rt>Second Law of Thermodynamics</rt></ruby>(无情的熵引起的混乱)的充分证明和支持。
简而言之,无论你多么努力,在尝试解决任何问题时都无法获得完整的信息。因此,放下傲慢的姿态,采取更为谦虚的方法来解决问题对我们会更有帮助。谦卑会为你带来巨大的回报,这个回报不仅是你期望的一个解决方案,还会有它的副产品。
### 总结
大自然在不停地运作,这是一个持续不断的过程。大自然没有总体规划。一切都是对先前发生的事情的回应。反馈循环是非常紧密的,明显的进步/倒退都是逐步实现的。这在大自然中随处可见,任何事物都在以一种或多种形式逐步完善。
敏捷 DevOps 是工程模型逐渐成熟的一个非常有趣的结果。DevOps 基于这样的认识,即你所拥有的信息总是不完整的,因此你最好谨慎进行。获得可衡量的测试(例如,假设、可测量的期望结果),进行简单的尝试,大多数情况下可能失败,然后收集反馈,修复故障并继续测试。除了同意每个步骤都必须要有可衡量的假设/测试之外,没有其他方法。
在本系列的下一篇文章中,我将仔细研究变异测试是如何提供及时反馈来推动实现结果的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520X292_openanttrail-2.png?itok=xhD3WmUd (Ants and a leaf making the word "open")
[2]: https://opensource.com/article/19/7/failure-feature-blameless-devops
[3]: https://en.wikipedia.org/wiki/Test-driven_development
[4]: https://www.merriam-webster.com/dictionary/conditio%20sine%20qua%20non
[5]: https://opensource.com/sites/default/files/uploads/darwin.png (Charles Darwin)
[6]: https://en.wikipedia.org/wiki/Charles_Darwin
[7]: https://en.wikipedia.org/wiki/On_the_Origin_of_Species
[8]: https://opensource.com/sites/default/files/uploads/environmentalconditions2.png (Environmental pressures on fish)
[9]: https://www.youtube.com/watch?v=MgK5Rf7qFaU
[10]: https://dotnet.microsoft.com/
[11]: https://xunit.net/
[12]: https://opensource.com/sites/default/files/uploads/xunit-output.png (xUnit output after the unit test run fails)
[13]: https://opensource.com/sites/default/files/uploads/unit-test-success.png (Unit test successful)
[14]: https://opensource.com/sites/default/files/uploads/iterating-square-root.png (Code iterating for the square root of 25)
[15]: https://opensource.com/sites/default/files/uploads/bestguess.png (Stepwise refinement)
[16]: https://en.wikipedia.org/wiki/Uncertainty_principle
[17]: https://en.wikipedia.org/wiki/Tractatus_Logico-Philosophicus
[18]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
[19]: https://en.wikipedia.org/wiki/Second_law_of_thermodynamics

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11464-1.html)
[#]: subject: (The lifecycle of Linux kernel testing)
[#]: via: (https://opensource.com/article/19/8/linux-kernel-testing)
[#]: author: (Major Hayden https://opensource.com/users/mhayden)
@ -12,33 +12,33 @@ Linux 内核测试的生命周期
> 内核持续集成CKI项目旨在防止错误进入 Linux 内核。
![](https://img.linux.net.cn/data/attachment/album/201910/16/101933nexzccpea9sjxcq9.jpg)
在 [Linux 内核的持续集成测试][2] 一文中,我介绍了 <ruby>[内核持续集成][3]<rt>Continuous Kernel Integration</rt></ruby>CKI项目及其使命改变内核开发人员和维护人员的工作方式。本文深入探讨了该项目的某些技术方面以及所有部分如何组合在一起
### 从一次更改开始
内核中每一项令人兴奋的功能、改进和错误都始于开发人员提出的更改。这些更改出现在各个内核存储库的大量邮件列表中。一些存储库关注内核中的某些子系统,例如存储或网络,而其它存储库关注内核的更多方面。 当开发人员向内核提出更改或补丁集时或者维护者在存储库本身中进行更改时CKI 项目就会付诸行动。
CKI 项目维护的触发器用于监视这些补丁集并采取措施。诸如 [Patchwork][4] 之类的软件项目通过将多个补丁贡献整合为单个补丁系列,使此过程变得更加容易。补丁系列作为一个整体历经 CKI 系统,并可以针对该系列发布单个报告。
其他触发器可以监视存储库中的更改。当内核维护人员合并补丁集、还原补丁或创建新标签时,就会触发。测试这些关键的更改可确保开发人员始终具有坚实的基线,可以用作编写新补丁的基础。
所有这些更改都进入 GitLab CI 管道,并历经多个阶段和多个系统。
### 准备构建
首先要准备好要编译的源代码。这需要克隆存储库、打上开发人员建议的补丁集,并生成内核配置文件。这些配置文件具有成千上万个用于打开或关闭功能的选项,并且配置文件在不同的系统体系结构之间差异非常大。 例如,一个相当标准的 x86\_64 系统在其配置文件中可能有很多可用选项,但是 s390x 系统IBM zSeries 大型机)的选项可能要少得多。在该大型机上,某些选项可能有意义,但在消费类笔记本电脑上没有任何作用。
内核进一步转换为源代码工件。该工件包含整个存储库(已打上补丁)以及编译所需的所有内核配置文件。 上游内核会打包成压缩包,而 Red Hat 的内核生成下一步所用的源代码 RPM 包。
### 成堆的编译
编译内核会将源代码转换为计算机可以启动和使用的代码。配置文件描述了要构建的内容,内核中的脚本描述了如何构建它,系统上的工具(例如 GCC 和 glibc完成构建。此过程需要一段时间才能完成但是 CKI 项目需要针对四种体系结构快速完成aarch6464 位 ARM、ppc64lePOWER、s390xIBM zSeries和 x86\_64。重要的是我们必须快速编译内核以便使工作任务不会积压而开发人员可以及时收到反馈。
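作为参考,在一台 x86 机器上为 arm64 交叉编译内核的典型命令大致如下(这只是一个通用示意,并非 CKI 系统的实际脚本;假设已安装 aarch64-linux-gnu 交叉工具链):
```
# 生成 arm64 的默认配置
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
# 用所有可用的 CPU 并行编译
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)"
```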
添加更多的 CPU 可以大大提高速度但是每个系统都有其局限性。CKI 项目在 OpenShift 的部署环境中的容器内编译内核;尽管 OpenShift 可以实现高伸缩性,但在部署环境中的可用 CPU 仍然是数量有限的。CKI 团队分配 20 个虚拟 CPU 来编译每个内核。涉及到四个体系结构,这就涨到 80 个 CPU
另一个速度的提高来自 [ccache][5] 工具。内核开发进展迅速但是即使在多个发布版本之间内核的大部分仍保持不变。ccache 工具进行编译期间会在磁盘上缓存已构建的对象整个内核的一小部分。稍后再进行另一个内核编译时ccache 会查找以前看到的内核的未更改部分。ccache 会从磁盘中提取缓存的对象并重新使用它。这样可以加快编译速度并降低总体 CPU 使用率。现在,耗时 20 分钟编译的内核在不到几分钟的时间内就完成了。
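如果想在自己的 C 项目或内核构建中体验 ccache 的效果,下面是一个通用示意(以 Debian/Ubuntu 打包的 ccache 为例,并非 CKI 的实际配置):
```
# 让 gcc/cc 的调用先经过 ccache利用发行版预置的符号链接目录
export PATH="/usr/lib/ccache:$PATH"
make -j"$(nproc)"                  # 第一次构建,填充缓存
make clean && make -j"$(nproc)"    # 第二次构建,大量命中缓存,明显变快
ccache -s                          # 查看缓存命中统计
```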
### 测试时间
@ -46,7 +46,7 @@ CKI 项目维护用于监视这些补丁集并采取措施的触发器。诸如
大型测试框架,例如 [Linux Test Project][6]LTP包含了大量测试这些测试在内核中寻找麻烦的回归问题。其中一些回归问题可能会回滚关键的安全修复程序并且进行测试以确保这些改进仍保留在内核中。
测试完成后,关键的一步仍然是:报告。内核开发人员和维护人员需要一份简明的报告,准确地告诉他们哪些有效、哪些无效以及如何获取更多信息。每个 CKI 报告都包含所用源代码、编译参数和测试输出的详细信息。该信息可帮助开发人员知道从哪里开始寻找解决问题的方法。此外,它还可以帮助维护人员在漏洞进入内核存储库之前知道何时需要保留补丁集以进行其他查看。
### 总结
@ -59,7 +59,7 @@ via: https://opensource.com/article/19/8/linux-kernel-testing
作者:[Major Hayden][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,184 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11477-1.html)
[#]: subject: "How to Install Linux on Intel NUC"
[#]: via: "https://itsfoss.com/install-linux-on-intel-nuc/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
在 Intel NUC 上安装 Linux
======
![](https://img.linux.net.cn/data/attachment/album/201910/18/221221pw3hbbi3bbbbprr4.jpg)
上周,我买了一台 [Intel NUC][1]。虽然它是如此之小,但它与成熟的桌面型电脑几乎没有区别。实际上,大部分的[基于 Linux 的微型 PC][2] 都是基于 Intel NUC 构建的。
我买了第 8 代 Core i3 处理器的“<ruby>准系统<rt>barebone</rt></ruby>” NUC。准系统意味着该设备没有 RAM、没有硬盘显然也没有操作系统。我添加了一个 [Crucial 的 8 GB 内存条][3](大约 33 美元)和一个 [240 GB 的西数的固态硬盘][4](大约 45 美元)。
现在,我已经有了一台不到 400 美元的电脑。因为我已经有了一个电脑屏幕和键鼠套装,所以我没有把它们计算在内。
![在我的办公桌上放着一个崭新的英特尔 NUC NUC8i3BEH后面有树莓派 4][5]
我买这个 Intel NUC 的主要原因就是我想在实体机上测试各种各样的 Linux 发行版。我已经有一个[树莓派 4][6] 设备作为入门级的桌面系统,但它是一个 [ARM][7] 设备,因此只有少数 Linux 发行版可用于树莓派上。LCTT 译注:新发布的 Ubuntu 19.10 支持树莓派 4B。
*这篇文章里的亚马逊链接是(原文的)受益链接。请参阅我们的[受益政策][8]。*
### 在 NUC 上安装 Linux
现在我准备安装 Ubuntu 18.04 长期支持版,因为我手头就有这个系统的安装文件。你也可以按照这个教程安装其他的发行版,在关键的分区步骤之前,前面的步骤都大致相同。
#### 第一步:创建一个 USB 启动盘
你可以在 Ubuntu 官网下载它的安装文件,然后使用另一台电脑[创建一个 USB 启动盘][9]。你可以使用像 [Rufus][10] 和 [Etcher][11] 这样的软件;在 Ubuntu 上,你可以使用默认的启动盘创建工具。
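如果你习惯命令行,也可以在 Linux 上用 `dd` 写入启动盘。下面是一个示意命令ISO 文件名和 `/dev/sdX` 都是假设值,请先用 `lsblk` 确认 USB 盘的设备名,写错设备会损毁数据):
```
sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```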
#### 第二步:确认启动顺序是正确的
将你的 USB 启动盘插入电脑并开机。一旦看到 “Intel NUC” 字样出现在屏幕上,快速按下 `F2` 键进入 BIOS 设置。
![Intel NUC 的 BIOS 设置][12]
在这里,只是确认一下你的第一启动项是你的 USB 设备。如果不是,切换启动顺序。
如果你修改了一些选项,按 `F10` 键保存退出,否则直接按下 `ESC` 键退出 BIOS 设置。
#### 第三步:正确分区,安装 Linux
现在当机器重启的时候,你就可以看到熟悉的 Grub 界面,可以让你试用或者安装 Ubuntu。现在我们选择安装它。
开始的几个安装步骤非常简单,选择键盘的布局,是否连接网络还有一些其他简单的设置。
![在安装 Ubuntu Linux 时选择键盘布局][14]
你可能会使用常规安装,默认情况下会安装一些有用的应用程序。
![][15]
接下来的是要注意的部分。你有两种选择:
* “<ruby>擦除磁盘并安装 Ubuntu<rt>Erase disk and install Ubuntu</rt></ruby>”:最简单的选项,它将在整个磁盘上安装 Ubuntu。如果你只想在 Intel NUC 上使用一个操作系统请选择此选项Ubuntu 将负责剩余的工作。
* “<ruby>其他选项<rt>Something else</rt></ruby>”:这是一个控制所有选择的高级选项。就我而言,我想在同一 SSD 上安装多个 Linux 发行版。因此,我选择了此高级选项。
![][16]
**如果你选择了“<ruby>擦除磁盘并安装 Ubuntu<rt>Erase disk and install Ubuntu</rt></ruby>”,点击“<ruby>继续<rt>Continue</rt></ruby>”,直接跳到第四步。**
如果你选择了高级选项,请按照下面剩下的部分进行操作。
选择固态硬盘,然后点击“<ruby>新建分区表<rt>New Partition Table</rt></ruby>”。
![][17]
它会给你显示一个警告。直接点击“<ruby>继续<rt>Continue</rt></ruby>”。
![][18]
现在你就可以看到你 SSD 磁盘里的空闲空间。我的计划是创建一个供 EFI 引导程序使用的 EFI 系统分区、一个根(`/`)分区和一个主目录(`/home`)分区。这里我并没有创建[交换分区][19]Ubuntu 会根据自己的需要来处理交换空间,我也可以以后通过[创建新的交换文件][32]来扩展它。
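文中提到的交换文件可以在系统装好后随时创建。下面是一个常见做法的示意2G 大小是假设值,可按需调整):
```
sudo fallocate -l 2G /swapfile      # 预分配 2G 文件
sudo chmod 600 /swapfile            # 限制权限
sudo mkswap /swapfile               # 格式化为交换空间
sudo swapon /swapfile               # 立即启用
# 追加到 /etc/fstab开机自动启用
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```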
我将在磁盘上保留近 200 GB 的可用空间,以便可以在此处安装其他 Linux 发行版。你可以将其全部用于主目录分区。保留单独的根分区和主目录分区可以在你需要重新安装系统时帮你保存里面的数据。
选择可用空间,然后单击加号以添加分区。
![][20]
一般来说100MB 足够 EFI 的使用,但是某些发行版可能需要更多空间,因此我要使用 500MB 的 EFI 分区。
![][21]
接下来,我将使用 20GB 的根分区。如果你只使用一个发行版,则可以随意地将其增加到 40GB。
根目录(`/`)是系统文件存放的地方。你的程序缓存和你安装的程序将会有一些文件放在这个目录下边。我建议你可以阅读一下 [Linux 文件系统层次结构][22]来了解更多相关内容。
填入分区的大小,选择 Ext4 文件系统,选择 `/` 作为挂载点。
![][24]
接下来是创建一个主目录分区。我再说一下,如果你仅仅想使用一个 Linux 发行版,那就把剩余的空间都用完吧。为主目录分区选择一个合适的大小。
主目录是你个人的文件,比如文档、图片、音乐、下载和一些其他的文件存储的地方。
![][25]
既然你创建好了 EFI 分区、根分区、主目录分区,那你就可以点击“<ruby>现在安装<rt>Install Now</rt></ruby>”按钮安装系统了。
![][26]
它将会提示你新的改变将会被写入到磁盘,点击“<ruby>继续<rt>Continue</rt></ruby>”。
![][27]
#### 第四步:安装 Ubuntu
事情到这里就非常简单明了了。现在选择你的时区,或者以后再选择也可以。
![][28]
接下来,输入你的用户名、主机名以及密码。
![][29]
接下来就是大约 7 到 8 分钟的安装过程,期间会播放介绍幻灯片。
![][30]
一旦安装完成,你就可以重新启动了。
![][31]
当你重启的时候,你必须要移除你的 USB 设备,否则你将会再次进入安装系统的界面。
这就是在 Intel NUC 设备上安装 Linux 所需要做的一切。坦白说,你可以在其他任何系统上使用相同的过程。
### Intel NUC 和 Linux 在一起:如何使用它?
我非常喜欢 Intel NUC。它不占用太多的桌面空间而且有足够的能力取代传统的桌面型电脑。你可以将它的内存升级到 32GB也可以安装两块 SSD 硬盘。总之,它提供了相当的配置和升级空间。
如果你想购买一个桌面型的电脑,我非常推荐你购买使用 [Intel NUC][1] 迷你主机。如果你不想自己安装系统,那么你可以购买一个[基于 Linux 的已经安装好的系统迷你主机][2]。
你是否已经有了一个 Intel NUC有一些什么相关的经验你有什么相关的意见与我们分享吗可以在下面评论。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-linux-on-intel-nuc/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW "Intel NUC"
[2]: https://itsfoss.com/linux-based-mini-pc/
[3]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 "8GB RAM from Crucial"
[4]: https://www.amazon.com/Western-Digital-240GB-Internal-WDS240G1G0B/dp/B01M9B2VB7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M9B2VB7 "240 GB Western Digital SSD"
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/intel-nuc.jpg?resize=800%2C600&ssl=1
[6]: https://itsfoss.com/raspberry-pi-4/
[7]: https://en.wikipedia.org/wiki/ARM_architecture
[8]: https://itsfoss.com/affiliate-policy/
[9]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/
[10]: https://rufus.ie/
[11]: https://www.balena.io/etcher/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/boot-screen-nuc.jpg?ssl=1
[13]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-1_tutorial.jpg?ssl=1
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-2_tutorial.jpg?ssl=1
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-3_tutorial.jpg?ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-4_tutorial.jpg?ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-5_tutorial.jpg?ssl=1
[19]: https://itsfoss.com/swap-size/
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-6_tutorial.jpg?ssl=1
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-7_tutorial.jpg?ssl=1
[22]: https://linuxhandbook.com/linux-directory-structure/
[23]: https://itsfoss.com/share-folders-local-network-ubuntu-windows/
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-8_tutorial.jpg?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-9_tutorial.jpg?ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-10_tutorial.jpg?ssl=1
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-11_tutorial.jpg?ssl=1
[28]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-12_tutorial.jpg?ssl=1
[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-13_tutorial.jpg?ssl=1
[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-14_tutorial.jpg?ssl=1
[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-15_tutorial.jpg?ssl=1
[32]: https://itsfoss.com/create-swap-file-linux/

View File

@ -0,0 +1,214 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11487-1.html)
[#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
[#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Manjaro 18.1KDE安装图解
======
在 Manjaro 18.0Illyria发布一年之际该团队发布了他们的下一个重要版本即 Manjaro 18.1,代号为 “Juhraya”。该团队还发布了一份官方声明称 Juhraya 包含了许多改进和错误修复。
### Manjaro 18.1 中的新功能
以下列出了 Manjaro 18.1 中的一些新功能和增强功能:
* 可以在 LibreOffice 或 Free Office 之间选择
* Xfce 版的新 Matcha 主题
* 在 KDE 版本中重新设计了消息传递系统
* 使用 bhau 工具支持 Snap 和 FlatPak 软件包
### 最小系统需求
* 1 GB RAM
* 1 GHz 处理器
* 大约 30 GB 硬盘空间
* 互联网连接
* 启动介质USB/DVD
### 安装 Manjaro 18.1KDE 版)的分步指南
要在系统中开始安装 Manjaro 18.1KDE 版),请遵循以下步骤:
#### 步骤 1) 下载 Manjaro 18.1 ISO
在安装之前,你需要从位于 [这里][1] 的官方下载页面下载 Manjaro 18.1 的最新副本。由于我们这里介绍的是 KDE 版本,因此我们选择 KDE 版本。但是对于所有桌面环境(包括 Xfce、KDE 和 Gnome 版本),安装过程都是相同的。
#### 步骤 2) 创建 USB 启动盘
从 Manjaro 下载页面成功下载 ISO 文件后,就可以创建 USB 磁盘了。将下载的 ISO 文件复制到 USB 磁盘中,然后创建可引导磁盘。确保将你的引导设置更改为使用 USB 引导,并重新启动系统。
#### 步骤 3) Manjaro Live 版安装环境
系统重新启动时,它将自动检测到 USB 驱动器,并开始启动进入 Manjaro Live 版安装屏幕。
![Boot-Manjaro-18-1-kde-installation][3]
接下来,使用箭头键选择 “<ruby>启动Manjaro x86\_64 kde<rt>Boot: Manjaro x86\_64 kde</rt></ruby>”,然后按回车键以启动 Manjaro 安装程序。
#### 步骤 4) 选择启动安装程序
接下来,将启动 Manjaro 安装程序如果你已连接到互联网Manjaro 将自动检测你的位置和时区。单击 “<ruby>启动安装程序<rt>Launch Installer</rt></ruby>”,开始在系统中安装 Manjaro 18.1 KDE 版本。
![Choose-Launch-Installaer-Manjaro18-1-kde][4]
#### 步骤 5) 选择语言
接下来,安装程序将带你选择你的首选语言。
![Choose-Language-Manjaro18-1-Kde-Installation][5]
选择你想要的语言,然后单击“<ruby>下一步<rt>Next</rt></ruby>”。
#### 步骤 6) 选择时区和区域
在下一个屏幕中,选择所需的时区和区域,然后单击“<ruby>下一步<rt>Next</rt></ruby>”继续。
![Select-Location-During-Manjaro18-1-KDE-Installation][6]
#### 步骤 7) 选择键盘布局
在下一个屏幕中,选择你喜欢的键盘布局,然后单击“<ruby>下一步<rt>Next</rt></ruby>”继续。
![Select-Keyboard-Layout-Manjaro18-1-kde-installation][7]
#### 步骤 8) 选择分区类型
这是安装过程中非常关键的一步。 它将允许你选择分区方式:
* 擦除磁盘
* 手动分区
* 并存安装
* 替换分区
如果在 VM虚拟机中安装 Manjaro 18.1,则将看不到最后两个选项。
如果你不熟悉 Manjaro Linux那么我建议你使用第一个选项<ruby>擦除磁盘<rt>Erase Disk</rt></ruby>),它将为你自动创建所需的分区。如果要创建自定义分区,则选择第二个选项“<ruby>手动分区<rt>Manual Partitioning</rt></ruby>”,顾名思义,它将允许我们创建自己的自定义分区。
在本教程中,我将通过选择“<ruby>手动分区<rt>Manual Partitioning</rt></ruby>”选项来创建自定义分区:
![Manual-Partition-Manjaro18-1-KDE][8]
选择第二个选项,然后单击“<ruby>下一步<rt>Next</rt></ruby>”继续。
如我们所见,我有 40 GB 硬盘,因此我将在其上创建以下分区:
* `/boot`2 GBext4
* `/`10 GBext4
* `/home`22 GBext4
* `/opt`4 GBext4
* <ruby>交换分区<rt>Swap</rt></ruby>2 GB
当我们在上方窗口中单击“<ruby>下一步<rt>Next</rt></ruby>”时,将显示以下屏幕,选择“<ruby>新建分区表<rt>new partition table</rt></ruby>”:
![Create-Partition-Table-Manjaro18-1-Installation][9]
点击“<ruby>确定<rt>OK</rt></ruby>”。
现在选择可用空间,然后单击“<ruby>创建<rt>create</rt></ruby>”以将第一个分区设置为大小为 2 GB 的 `/boot`
![boot-partition-manjaro-18-1-installation][10]
单击“<ruby>确定<rt>OK</rt></ruby>”以继续操作,在下一个窗口中再次选择可用空间,然后单击“<ruby>创建<rt>create</rt></ruby>”以将第二个分区设置为 `/`,大小为 10 GB
![slash-root-partition-manjaro18-1-installation][11]
同样,将下一个分区创建为大小为 22 GB 的 `/home`
![home-partition-manjaro18-1-installation][12]
到目前为止,我们已经创建了三个分区作为主分区,现在创建下一个分区作为扩展分区:
![Extended-Partition-Manjaro18-1-installation][13]
单击“<ruby>确定<rt>OK</rt></ruby>”以继续。
创建大小分别为 4 GB 和 2 GB 的 `/opt` 和交换分区作为逻辑分区。
![opt-partition-manjaro-18-1-installation][14]
![swap-partition-manjaro18-1-installation][15]
完成所有分区的创建后,单击“<ruby>下一步<rt>Next</rt></ruby>”:
![choose-next-after-partition-creation][16]
#### 步骤 9) 提供用户信息
在下一个屏幕中,你需要提供用户信息,包括你的姓名、用户名、密码、计算机名等:
![User-creation-details-manjaro18-1-installation][17]
提供所有信息后,单击“<ruby>下一步<rt>Next</rt></ruby>”继续安装。
在下一个屏幕中,系统将提示你选择办公套件,因此请做出适合你的选择:
![Office-Suite-Selection-Manjaro18-1][18]
单击“<ruby>下一步<rt>Next</rt></ruby>”以继续。
#### 步骤 10) 摘要信息
在完成实际安装之前,安装程序将向你显示你选择的所有详细信息,包括语言、时区、键盘布局和分区信息等。单击“<ruby>安装<rt>Install</rt></ruby>”以继续进行安装过程。
![Summary-manjaro18-1-installation][19]
#### 步骤 11) 进行安装
现在,实际的安装过程开始,一旦完成,请重新启动系统以登录到 Manjaro 18.1 KDE 版:
![Manjaro18-1-Installation-Progress][20]
![Restart-Manjaro-18-1-after-installation][21]
#### 步骤 12) 安装成功后登录
重新启动后,我们将看到以下登录屏幕,使用我们在安装过程中创建的用户凭据登录:
![Login-screen-after-manjaro-18-1-installation][22]
点击“<ruby>登录<rt>Login</rt></ruby>”。
![KDE-Desktop-Screen-Manjaro-18-1][23]
就是这样!你已经在系统中成功安装了 Manjaro 18.1 KDE 版,并探索了所有令人兴奋的功能。请在下面的评论部分中发表你的反馈和建议。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://manjaro.org/download/official/kde/
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Boot-Manjaro-18-1-kde-installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Launch-Installaer-Manjaro18-1-kde.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Language-Manjaro18-1-Kde-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Location-During-Manjaro18-1-KDE-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Keyboard-Layout-Manjaro18-1-kde-installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manual-Partition-Manjaro18-1-KDE.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Create-Partition-Table-Manjaro18-1-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-manjaro-18-1-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-manjaro18-1-installation.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-manjaro18-1-installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Extended-Partition-Manjaro18-1-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/opt-partition-manjaro-18-1-installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/swap-partition-manjaro18-1-installation.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/choose-next-after-partition-creation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/User-creation-details-manjaro18-1-installation.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Office-Suite-Selection-Manjaro18-1.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Summary-manjaro18-1-installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manjaro18-1-Installation-Progress.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Restart-Manjaro-18-1-after-installation.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-after-manjaro-18-1-installation.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/KDE-Desktop-Screen-Manjaro-18-1.jpg

View File

@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11483-1.html)
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
变异测试:如何利用故障?
======
> 使用事先设计好的故障以确保你的代码达到预期的结果,并遵循 .NET xUnit.net 测试框架来进行测试。
![](https://img.linux.net.cn/data/attachment/album/201910/20/200030ipm13zmi08mv8z34.jpg)
在《[变异测试是 TDD 的演变][2]》一文中,我谈到了迭代的力量。在可度量的测试中,迭代能够保证找到问题的解决方案。在那篇文章中,我们讨论了迭代法如何帮助找到计算给定数字平方根的代码实现。
我还演示了最有效的方法是找到可衡量的目标或测试,然后以最佳猜测值开始迭代。正如所预期的,第一次测试通常会失败。因此,必须根据可衡量的目标或测试对失败的代码进行完善。根据运行结果,对测试值进行验证或进一步加以完善。
在此模型中,学习获得解决方案的唯一方法是反复失败。这听起来有悖常理,但它确实有效。
按照这种分析,本文探讨了在构建包含某些依赖项的解决方案时使用 DevOps 的最佳方法。第一步是编写一个预期结果失败的用例。
### 依赖性问题是你不能依赖它们
正如<ruby>迈克尔·尼加德<rt>Michael Nygard</rt></ruby>在《[没有终结状态的架构][3]》中机智地表示的那样依赖问题是一个很大的话题最好留到另一篇文章中讨论。在这里你将会看到依赖项给项目带来的一些潜在问题以及如何利用测试驱动开发TDD来避免这些陷阱。
首先,找到现实生活中的一个挑战,然后看看如何使用 TDD 解决它。
### 谁把猫放出来?
![一只猫站在屋顶][4]
在敏捷开发环境中,通过定义期望结果开始构建解决方案会很有帮助。通常,在 <ruby>[用户故事][5]<rt>user story</rt></ruby> 中描述期望结果:
> 我想使用我的家庭自动化系统HAS来控制猫何时可以出门因为我想保证它在夜间的安全。
现在你已经有了一个用户故事,你需要通过提供一些功能要求(即指定验收标准)来对其进行详细说明。从伪代码中描述的最简单的场景开始:
> 场景 1在夜间关闭猫门
>
> * 用时钟监测到了晚上的时间
> * 时钟通知 HAS 系统
> * HAS 关闭支持物联网IoT的猫门
### 分解系统
开始构建之前你需要将正在构建的系统HAS分解为各个依赖项。你必须要做的第一件事是识别任何依赖项如果幸运的话你的系统没有依赖项这会更容易但是这样的系统可以说不是非常有用
从上面的简单场景中,你可以看到所需的业务成果(自动控制猫门)取决于对夜间情况监测。这种依赖性取决于时钟。但是时钟是无法区分白天和夜晚的。需要你来提供这种逻辑。
正在构建的系统中的另一个依赖项是能够自动访问猫门并启用或关闭它。该依赖项很可能取决于具有 IoT 功能的猫门提供的 API。
### 依赖管理面临快速失败
为了满足依赖项,我们将构建确定当前时间是白天还是晚上的逻辑。本着 TDD 的精神,我们将从一个小小的失败开始。
有关如何设置此练习所需的开发环境和脚手架的详细说明,请参阅我的[上一篇文章][2]。我们将重用相同的 .NET 环境和 [xUnit.net][6] 框架。
接下来创建一个名为 HAS家庭自动化系统的新项目并创建一个名为 `UnitTest1.cs` 的文件。在该文件中,编写第一个失败的单元测试。在此单元测试中,描述你的期望结果。例如,当系统运行时,如果时间是晚上 7 点,负责确定是白天还是夜晚的组件将返回值 `Nighttime`。
这是描述期望值的单元测试:
```
using System;
using Xunit;
using app;  // 引入被测组件所在的命名空间

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight();
            Assert.Equal(expected, actual);
        }
    }
}
```
至此,你可能已经熟悉了单元测试的结构。快速复习一下:在此示例中,通过给单元测试一个描述性名称 `Given7pmReturnNighttime` 来描述期望结果。然后,在单元测试的主体中,创建一个名为 `expected` 的变量,并为该变量指定期望值(在该示例中,值为 `Nighttime`)。然后,为实际值指定一个变量 `actual`(在组件或服务处理一天中的时间之后可用)。
最后,通过断言期望值和实际值是否相等来检查是否满足期望结果:`Assert.Equal(expected, actual)`。
你还可以在上面的列表中看到名为 `dayOrNightUtility` 的组件或服务。该模块能够接收 `GetDayOrNight` 消息,并且返回 `string` 类型的值。
同样,本着 TDD 的精神,描述的组件或服务还尚未构建(仅为了后面说明在此进行描述)。构建这些是由所描述的期望结果来驱动的。
`app` 文件夹中创建一个新文件,并将其命名为 `DayOrNightUtility.cs`。将以下 C 代码添加到该文件中并保存:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Undetermined";
            return dayOrNight;
        }
    }
}
```
现在转到命令行,将目录更改为 `unittests` 文件夹,然后运行 `dotnet test`,会得到如下输出:
```
[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```
恭喜,你已经完成了第一个失败的单元测试。单元测试的期望结果是 `DayOrNightUtility` 方法返回字符串 `Nighttime`,但它返回的却是 `Undetermined`。
### 修复失败的单元测试
修复失败的测试的一种快速而粗略的方法是将值 `Undetermined` 替换为值 `Nighttime` 并保存更改:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Nighttime";
            return dayOrNight;
        }
    }
}
```
现在运行,成功了。
```
Starting test execution, please wait...
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```
但是,对值进行硬编码基本上是在作弊,最好为 `DayOrNightUtility` 方法赋予一些智能。修改 `GetDayOrNight` 方法以包括一些时间计算逻辑:
```
public string GetDayOrNight() {
    string dayOrNight = "Daylight";
    DateTime time = new DateTime();
    if (time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```
该方法现在从系统获取时间,并检查其 `Hour` 值是否小于上午 7 点。如果小于,则处理逻辑将 `dayOrNight` 字符串值从 `Daylight` 转换为 `Nighttime`。现在,单元测试通过了。
### 测试驱动解决方案的开始
现在,我们已经开始了基本的单元测试,并为我们的时间依赖项提供了可行的解决方案。后面还有更多的测试案例需要执行。
在下一篇文章中,我将演示如何对白天时间进行测试以及如何在整个过程中利用故障。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://linux.cn/article-11468-1.html
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/
[7]: http://www.google.com/search?q=new+msdn.microsoft.com

View File

@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11485-1.html)
[#]: subject: (Essential Accessories for Intel NUC Mini PC)
[#]: via: (https://itsfoss.com/intel-nuc-essential-accessories/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
英特尔 NUC 迷你 PC 的基本配件
======
![](https://img.linux.net.cn/data/attachment/album/201910/20/224650me0qoiqjeiysqqph.jpg)
几周前,我买了一台 [英特尔 NUC 迷你 PC][1]。我[在上面安装了 Linux][2],我非常喜欢它。这个小巧的无风扇机器取代了台式机那庞大的 CPU。
英特尔 NUC 通常采用准系统形式,这意味着它没有任何内存、硬盘,也显然没有操作系统。许多[基于 Linux 的微型 PC][3] 定制化英特尔 NUC 并添加磁盘、RAM 和操作系统将它出售给终端用户。
不用说,它不像大多数其他台式机那样带有键盘、鼠标或屏幕。
[英特尔 NUC][4] 是一款出色的设备,如果你要购买台式机,我强烈建议你购买它。如果你正在考虑购买英特尔 NUC你需要买一些配件以便开始使用它。
### 基本的英特尔 NUC 配件
![][5]
*文章中的 Amazon 链接是(原文的)受益链接。请阅读我们的[受益政策][6]。*
#### 外围设备:显示器、键盘和鼠标
这很容易想到。你需要有屏幕、键盘和鼠标才能使用计算机。你需要一台有 HDMI 连接的显示器和一个 USB 或无线键盘鼠标。如果你已经有了这些东西,那你可以继续。
如果你正在寻求建议,我建议购买 LG IPS LED 显示器。我有两台 22 英寸的型号,我对它提供的清晰视觉效果感到满意。
这些显示器有一个简单的固定支架。如果要使显示器可以上下移动并纵向旋转,请尝试使用 [HP EliteDisplay 显示器][7]。
![HP EliteDisplay Monitor][8]
我在多屏设置中同时连接了三台显示器。一台显示器连接到指定的 HDMI 端口。两台显示器通过 [Club 3D 的 Thunderbolt 转 HDMI 分配器][9]连接到 Thunderbolt 端口。
你也可以选择超宽显示器。我对此没有亲身经历。
#### 内存
英特尔 NUC 有两个内存插槽,最多可支持 32GB 内存。由于我的是 i3 核心处理器,因此我选择了 [Crucial 的 8GB DDR4 内存][11],价格约为 $33。
![][12]
8 GB 内存在大多数情况下都没问题,但是如果你的是 i7 核心处理器,那么可以选择 [16GB 内存][13],价格约为 $67。你可以加两条以获得最大 32GB。选择全在于你。
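安装好内存并装好系统后,你可以在 Linux 中用以下命令确认容量是否被正确识别(示意命令):
```
free -h                                   # 查看总内存和可用内存
sudo dmidecode -t memory | grep -i size   # 查看每根内存条的容量(需要 root 权限)
```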
#### 硬盘(重要)
英特尔 NUC 同时支持 2.5 英寸驱动器和 M.2 SSD因此你可以同时使用两者以获得更多存储空间。
2.5 英寸插槽可同时容纳 SSD 和 HDD。我强烈建议选择 SSD因为它比 HDD 快得多。[480GB 2.5 英寸][14]的价格是 $60。我认为这是一个合理的价格。
![][15]
2.5 英寸驱动器的标准 SATA 接口速度为 6 Gb/秒。取决于你是否选择 NVMe SSDM.2 插槽可能会更快。NVMe非易失性内存主机控制器接口规范SSD 的速度比普通 SSD也称为 SATA SSD快 4 倍,但是它们可能也比 SATA M.2 SSD 贵一些。
当购买 M.2 SSD 时,请检查产品图片。无论是 NVMe 还是 SATA SSD都应在磁盘本身的图片中提到。你可以考虑使用[经济的三星 EVO NVMe M.2 SSD][16]。
![Make sure that your are buying the faster NVMe M2 SSD][17]
M.2 插槽和 2.5 英寸插槽中的 SATA SSD 具有相同的速度。这就是为什么,如果你不想选择昂贵的 NVMe SSD建议你选择 2.5 英寸 SATA SSD并保留 M.2 插槽供以后升级。
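装好系统后,如果想确认 Linux 识别到的各块磁盘走的是 SATA 还是 NVMe 通道,可以用 `lsblk` 查看(示意命令):
```
lsblk -d -o NAME,SIZE,TRAN,MODEL   # TRAN 列会显示 sata 或 nvme
```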
#### 交流电源线
当我拿到 NUC 时,我惊讶地发现,尽管它附带了电源适配器,但并没有插头。
正如一些读者指出的那样,你可能有完整的电源线。这取决于你的地理区域和供应商。因此,请检查产品说明和用户评论,以验证其是否具有完整的电源线。
![][10]
#### 其他配套配件
你需要使用 HDMI 线缆连接显示器。如果你要购买新显示器,通常应会有一根线缆。
如果要使用 M.2 插槽,那么可能需要螺丝刀。英特尔 NUC 是一款出色的设备,你只需用手旋转四个脚即可拧开底部面板。你必须打开设备才能放置内存和磁盘。
![Intel NUC with Security Cable | Image Credit Intel][18]
NUC 上还有防盗锁孔,可与防盗绳一起使用。在商业环境中,建议使用防盗绳保护计算机安全。花几美元购买一条[防盗绳][19],便可能为你省下数百美元的损失。
### 你使用什么配件?
这些就是我在使用和建议使用的英特尔 NUC 配件。你呢?如果你有一台 NUC你会使用哪些配件并推荐给其他 NUC 用户?
--------------------------------------------------------------------------------
via: https://itsfoss.com/intel-nuc-essential-accessories/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (barebone Intel NUC mini PC)
[2]: https://linux.cn/article-11477-1.html
[3]: https://itsfoss.com/linux-based-mini-pc/
[4]: https://www.intel.in/content/www/in/en/products/boards-kits/nuc.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-accessories.png?ssl=1
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://www.amazon.com/HP-EliteDisplay-21-5-Inch-1FH45AA-ABA/dp/B075L4VKQF?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B075L4VKQF (HP EliteDisplay monitors)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/hp-elitedisplay-monitor.png?ssl=1
[9]: https://www.amazon.com/Club3D-CSV-1546-USB-C-Multi-Monitor-Splitter/dp/B06Y2FX13G?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B06Y2FX13G (thunderbolt to HDMI splitter from Club 3D)
[10]: https://img.linux.net.cn/data/attachment/album/201910/20/224718eebvzvvvm0b6f3ow.jpg
[11]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB DDR4 RAM from Crucial)
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/crucial-ram.jpg?ssl=1
[13]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B019FRBHZ0?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B019FRBHZ0 (16 GB RAM)
[14]: https://www.amazon.com/Green-480GB-Internal-SSD-WDS480G2G0A/dp/B01M3POPK3?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M3POPK3 (480 GB 2.5)
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/wd-green-ssd.png?ssl=1
[16]: https://www.amazon.com/Samsung-970-EVO-500GB-MZ-V7E500BW/dp/B07BN4NJ2J?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BN4NJ2J (Samsung EVO is a cost effective NVMe M.2 SSD)
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/samsung-evo-nvme.jpg?ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-security-cable.jpg?ssl=1
[19]: https://www.amazon.com/Kensington-Combination-Laptops-Devices-K64673AM/dp/B005J7Y99W?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B005J7Y99W (few dollars in the security cable)

View File

@ -0,0 +1,221 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11456-1.html)
[#]: subject: (7 Bash history shortcuts you will actually use)
[#]: via: (https://opensource.com/article/19/10/bash-history-shortcuts)
[#]: author: (Ian Miell https://opensource.com/users/ianmiell)
7 个实用的操作 Bash 历史记录的快捷方式
======
> 这些必不可少的 Bash 快捷键可在命令行上节省时间。
![Command line prompt][1]
大多数介绍 Bash 历史记录的指南都详尽地列出了全部可用的快捷方式。这样做的问题是,你会对每个快捷方式都浅尝辄止,在尝试了那么多的快捷方式后搞得目不暇接,而一开始工作,它们就全被丢在脑后了,只记得刚开始使用 Bash 时学到的 [!! 技巧][2],其余的大部分从未真正进入记忆当中。
本文概述了我每天实际使用的快捷方式。它基于我的书《[Bash 学习,艰难之旅][3]》中的某些内容(你可以阅读其中的[样章][4]以了解更多信息)。
当人们看到我使用这些快捷方式时,他们经常问我:“你做了什么!?”学习它们只需付出很少的精力或智力,但是要真正地掌握它们,我建议每周用一天学一个,然后下次再继续学习一个。值得花时间让它们落在你的指尖下,因为从长远来看,节省的时间将很重要。
### 1、最后一个参数`!$`
如果你仅想从本文中学习一种快捷方式,那就是这个。它会将最后一个命令的最后一个参数替换到你的命令行中。
看看这种情况:
```
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
```
啊哈,我在命令中写了错误的文件名 “wrongfile”我应该用正确的文件名 “rightfile” 代替。
你可以重新键入上一个命令,并用 “rightfile” 完全替换 “wrongfile”。但是你也可以键入
```
$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place
```
这个命令也可以奏效。
在 Bash 中还有其他方法可以通过快捷方式实现相同的目的,但是重用上一个命令的最后一个参数的这种技巧是我最常使用的。
### 2、第 n 个参数:`!:2`
是不是干过像这样的事情:
```
$ tar -cvf afolder afolder.tar
tar: failed to open
```
像许多其他人一样,我也经常搞错 `tar`(和 `ln`)的参数顺序。
![xkcd comic][5]
当你搞混了参数,你可以这样:
```
$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder
```
这样就不会出丑了。
上一个命令的各个参数的索引是从零开始的,并且可以用 `!:` 之后跟上该索引数字代表各个参数。
显然,你也可以使用它来重用上一个命令中的特定参数,而不是所有参数。
### 3、全部参数`!:1-$`
假设我运行了类似这样的命令:
```
$ grep '(ping|pong)' afile
```
参数是正确的。然而,我想在文件中匹配 “ping” 或 “pong”但我使用的是 `grep` 而不是 `egrep`
我开始输入 `egrep`,但是我不想重新输入其他参数。因此,我可以使用 `!:1-$` 快捷方式来调取上一个命令的所有参数,从第二个(记住它们的索引从零开始,因此是 `1`)到最后一个(由 `$` 表示)。
```
$ egrep !:1-$
egrep '(ping|pong)' afile
ping
```
你不用必须用 `1-$` 选择全部参数;你也可以选择一个子集,例如 `1-2``3-9` (如果上一个命令中有那么多参数的话)。
### 4、倒数第 n 行的最后一个参数:`!-2:$`
当我输错之后马上就知道该如何更正我的命令时,上面的快捷键非常有用,但是我经常在原来的命令之后运行别的命令,这意味着上一个命令不再是我所要引用的命令。
例如,还是用之前的 `mv` 例子,如果我通过 `ls` 检查文件夹的内容来纠正我的错误:
```
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile
```
我就不能再使用 `!$` 快捷方式了。
在这些情况下,我可以在 `!` 之后插入 `-n:`(其中 `n` 是要在历史记录中回溯的命令条数),以从较旧的命令取得最后的参数:
```
$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place
```
同样,一旦你学会了它,你可能会惊讶于你需要使用它的频率。
### 5、进入文件夹`!$:h`
从表面上看,这个看起来不太有用,但我每天要用它几十次。
想象一下,我运行的命令如下所示:
```
$ tar -cvf system.tar /etc/system
 tar: /etc/system: Cannot stat: No such file or directory
 tar: Error exit delayed from previous errors.
```
我可能要做的第一件事是转到 `/etc` 文件夹,查看其中的内容并找出我做错了什么。
我可以通过以下方法来做到这一点:
```
$ cd !$:h
cd /etc
```
这是说:“获取上一个命令的最后一个参数(`/etc/system`),并删除其最后的文件名部分,仅保留 `/etc`。”
### 6、当前行`!#:1`
多年以来,在我最终找到并学会之前,我有时候想知道是否可以在当前行引用一个参数。我多希望我能早早学会这个快捷方式。我经常使用它制作备份文件:
```
$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak
```
但当我学会之后,它很快就被下面的快捷方式替代了……
### 7、搜索并替换`!!:gs`
这将搜索所引用的命令,并将前两个 `/` 之间的字符替换为后两个 `/` 之间的字符。
假设我想告诉别人我的 `s` 键不起作用,而是输出了 `f`
```
$ echo my f key doef not work
my f key doef not work
```
然后我意识到这里出现的 `f` 键都是错的。要将所有 `f` 替换为 `s`,我可以输入:
```
$ !!:gs/f /s /
echo my s key does not work
my s key does not work
```
它不只对单个字符起作用。我也可以替换单词或句子:
```
$ !!:gs/does/did/
echo my s key did not work
my s key did not work
```
### 测试一下
为了向你展示如何组合这些快捷方式,你知道这些命令片段将输出什么吗?
```
$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:$:h
$ touch !$!-3:$ !! !$.txt
$ cat !:1-$
```
### 总结
对于日常的命令行用户Bash 可以作为快捷方式的优雅来源。虽然有成千上万的技巧要学习,但这些是我经常使用的最喜欢的技巧。
如果你想更深入地了解 Bash 可以教给你的全部知识,请买本我的书,《[Bash 学习,艰难之旅][3]》,或查看我的在线课程《[精通 Bash shell][7]》。
* * *
本文最初发布在 Ian 的博客 [Zwischenzugs.com][8] 上,并经允许重复发布。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/bash-history-shortcuts
作者:[Ian Miell][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ianmiell
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://opensource.com/article/18/5/bash-tricks
[3]: https://leanpub.com/learnbashthehardway
[4]: https://leanpub.com/learnbashthehardway/read_sample
[5]: https://opensource.com/sites/default/files/uploads/tar_2x.png (xkcd comic)
[6]: https://xkcd.com/1168/
[7]: https://www.educative.io/courses/master-the-bash-shell
[8]: https://zwischenzugs.com/2019/08/25/seven-god-like-bash-history-shortcuts-you-will-actually-use/

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11460-1.html)
[#]: subject: (All That You Can Do with Google Analytics, and More)
[#]: via: (https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytics-and-more/)
[#]: author: (Ashwin Sathian https://opensourceforu.com/author/ashwin-sathian/)
@ -10,212 +10,222 @@
Google Analytics 的一些用法介绍
======
![][1]
Google Analytics GA这个最流行的用户活动追踪工具我们或多或少都听说过甚至使用过,但它的用途并不仅仅限于对页面访问的追踪。作为一个既实用又流行的工具,它已经受到了广泛的欢迎,因此我们将要在下文中介绍如何在各种 Angular 和 React 单页应用中使用 Google Analytics。
这篇文章源自这样一个问题:如何对单页应用中的页面访问进行跟踪?
通常来说,有很多的方法可以解决这个问题,在这里我们只讨论其中的一种方法。下面我会使用 Angular 来写出对应的实现,如果你使用的是 React相关的用法和概念也不会有太大的差别。接下来就开始吧。
### 准备好应用程序
首先需要有一个<ruby>追踪 ID<rt>tracking ID</rt></ruby>。在开始编写业务代码之前,要先准备好一个追踪 ID通过这个唯一的标识Google Analytics 才能识别出某个点击或者某个页面访问是来自于哪一个应用。
按照以下的步骤:
1. 访问 <https://analytics.google.com>
2. 提交指定信息以完成注册,并确保可用于 Web 应用,因为我们要开发的正是一个 Web 应用;
3. 同意相关的条款,生成一个追踪 ID
4. 保存好追踪 ID。
追踪 ID 拿到了,就可以开始编写业务代码了。
### 添加 analytics.js 脚本
Google 已经帮我们做好了接入之前需要做的所有事情,接下来就是我们的工作了。不过我们要做的也很简单,只需要把下面这段脚本添加到应用的 `index.html` 里,就可以了:
```
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
</script>
```
现在我们来看一下 Google Analytics 是如何在应用程序中初始化的。
### 创建追踪器
首先创建一个应用程序的追踪器。在 `app.component.ts` 中执行以下两个步骤:
1. 声明一个名为 `ga`,类型为 `any` 的全局变量(在 TypeScript 中需要指定变量类型);
2. 将下面一行代码加入到 `ngOnInit()` 中。
```
ga('create', <你的追踪 ID>, 'auto');
```
这样就已经成功地在应用程序中初始化了一个 Google Analytics 的追踪器了。由于追踪器的初始化是在 `OnInit()` 函数中执行的,因此每当应用程序启动,追踪器就会启动。
### 在单页应用中记录页面访问情况
我们需要实现的是记录<ruby>访问路由<rt>route-visits</rt></ruby>
如何记录用户在一个应用中不同部分的访问,这是一个难点。从功能上来看,单页应用中的路由对应了传统多页面应用中各个页面之间的跳转,因此我们需要记录访问路由。要做到这一点尽管不算简单,但仍然是可以实现的。在 `app.component.ts``ngOnInit()` 函数中添加以下的代码片段:
```
import { Router, NavigationEnd } from '@angular/router';
...
constructor(public router: Router) {}
...
this.router.events.subscribe(
  event => {
    if (event instanceof NavigationEnd) {
      ga('set', 'page', event.urlAfterRedirects);
      ga('send', { hitType: 'pageview', hitCallback: () => { this.pageViewSent = true; } });
    }
  }
);
```
神奇的是,只需要这么几行代码,就实现了 Angular 应用中记录页面访问情况的功能。
这段代码实际上做了以下几件事情:
1. 从 Angular Router 中导入了 `Router`、`NavigationEnd`
2. 通过构造函数中将 `Router` 添加到组件中;
3. 然后订阅 `router` 事件,也就是由 Angular Router 发出的所有事件;
4. 只要产生了一个 `NavigationEnd` 事件实例,就将路由和目标作为一个页面访问进行记录。
这样,只要使用到了页面路由,就会向 Google Analytics 发送一条页面访问记录,在 Google Analytics 的在线控制台中可以看到这些记录。
类似地,我们可以用相同的方式来记录除了页面访问之外的活动,例如某个界面的查看次数或者时长等等。只要像上面的代码那样使用 `hitCallback()` 就可以在有需要收集的数据的时候让应用程序作出反应,这里我们做的事情仅仅是把一个变量的值设为 `true`,但实际上 `hitCallback()` 中可以执行任何代码。
### 追踪用户交互活动
除了页面访问记录之外Google Analytics 还经常被用于追踪用户的交互活动,例如某个按钮的点击情况。“提交按钮被用户点击了多少次?”,“产品手册会被经常查阅吗?”这些都是 Web 应用程序的产品评审会议上的常见问题。这一节我们将会介绍如何实现这些数据的统计。
#### 按钮点击
设想这样一种场景,需要统计到应用程序中某个按钮或链接被点击的次数,这是一个和注册之类的关键动作关系最密切的数据指标。下面我们来举一个例子:
假设应用程序中有一个“感兴趣”按钮,用于显示即将推出的活动,你希望通过统计这个按钮被点击的次数来推测有多少用户对此感兴趣。那么我们可以使用以下的代码来实现这个功能:
```
...
params = {
  eventCategory: 'Button',
  eventAction: 'Click',
  eventLabel: 'Show interest',
  eventValue: 1
};

showInterest() {
  ga('send', 'event', this.params);
}
...
```
现在看下这段代码实际上做了什么。正如前面说到,当我们向 Google Analytics 发送数据的时候Google Analytics 就会记录下来。因此我们可以向 `send()` 方法传递不同的参数,以区分不同的事件,例如两个不同按钮的点击记录。
1、首先我们定义了一个 `params` 对象,这个对象包含了以下几个字段:
1. `eventCategory` 交互发生的对象这里对应的是按钮button
2. `eventAction` 发生的交互的类型这里对应的是点击click
3. `eventLabel` 交互动作的标识,这里对应的是这个按钮的内容,也就是“感兴趣”
4. `eventValue` 与每个发生的事件实例相关联的值
由于这个例子中是要统计点击了“感兴趣”按钮的用户数,因此我们把 `eventValue` 的值定为 1。
2对象构造完成之后,下一步就是将 `params` 对象作为请求负载发送到 Google Analytics而这一步是通过事件绑定将 `showInterest()` 绑定在按钮上实现的。这也是使用 Google Analytics 追踪中最常用的发送事件方法。
至此Google Analytics 就可以通过记录按钮的点击次数来统计感兴趣的用户数了。
#### 追踪社交活动
Google Analytics 还可以通过应用程序追踪用户在社交媒体上的互动。其中一种场景就是在应用中放置类似 Facebook 的点赞按钮,下面我们来看看如何使用 Google Analytics 来追踪这一行为。
```
...
fbLikeParams = {
  socialNetwork: 'Facebook',
  socialAction: 'Like',
  socialTarget: 'https://facebook.com/mypage'
};
...
fbLike() {
  ga('send', 'social', this.fbLikeParams);
}
```
如果你觉得这段代码似曾相识,那是因为它确实跟上面统计“感兴趣”按钮点击次数的代码非常相似。下面我们继续看其中每一步的内容:
1构造发送的数据负载,其中包括以下字段:
1. `socialNetwork` 交互发生的社交媒体,例如 Facebook、Twitter 等等
2. `socialAction` 发生的交互类型,例如点赞、发表推文、分享等等
3. `socialTarget` 交互的目标 URL一般是社交媒体账号的主页
2、下一步是增加一个函数来发送整个交互记录。和统计按钮点击数量时相比这里使用 `send()` 的方式有所不同。另外,我们还需要把这个函数绑定到已有的点赞按钮上。
在追踪用户交互方面Google Analytics 还可以做更多的事情,其中最重要的一种是针对异常的追踪,这让我们可以通过 Google Analytics 来追踪应用程序中出现的错误和异常。在本文中我们就不赘述这一点了,但我们鼓励读者自行探索。
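这里顺带给出一个基于 analytics.js 异常追踪接口的简单示意(其中 `riskyOperation()` 只是一个假设的函数;`exDescription` 和 `exFatal` 是 analytics.js 异常追踪的标准字段):

```
try {
  // 一个可能抛出异常的操作(假设的函数,仅作示意)
  riskyOperation();
} catch (err) {
  // 将异常描述发送给 Google Analytics,exFatal 标记该异常是否致命
  ga('send', 'exception', {
    exDescription: err.message,
    exFatal: false
  });
}
```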
### 用户识别
#### 隐私是一项权利,而不是奢侈品
Google Analytics 除了可以记录很多用户的操作和交互活动之外,这一节还将介绍一个不太常见的功能,就是可以控制是否对用户的身份进行追踪。
#### Cookies
Google Analytics 追踪用户活动的方式是基于 Cookies 的,因此我们可以自定义 Cookies 的名称以及一些其它的内容,请看下面这段代码:
```
trackingID = 'UA-139883813-1';
cookieParams = {
  cookieName: 'myGACookie',
  cookieDomain: window.location.hostname,
  cookieExpires: 604800
};
...
ngOnInit() {
  ga('create', this.trackingID, this.cookieParams);
  ...
}
```
在上面这段代码中,我们设置了 Google Analytics Cookies 的名称、域以及过期时间,这就让我们能够将不同网站或 Web 应用的 Cookies 区分开来。因此我们需要为我们自己的应用程序的 Google Analytics 追踪器的 Cookies 设置一个自定义的标识,而不是一个自动生成的随机标识。
#### IP 匿名
在某些场景下我们可能不需要知道应用程序的流量来自哪里。例如对于一个按钮点击的追踪器我们只需要关心按钮的点击量而不需要关心点击者的地理位置。在这种场景下Google Analytics 允许我们只追踪用户的操作行为,而不获取到用户的 IP 地址。
```
ipParams = {
  anonymizeIp: true
};
...
ngOnInit() {
  ...
  ga('set', this.ipParams);
  ...
}
```
在上面这段代码中,我们将 Google Analytics 追踪器的 `anonymizeIp` 参数设置为 `true`。这样用户的 IP 地址就不会被 Google Analytics 所追踪,这可以让用户知道自己的隐私正在被保护。
#### 不被跟踪
还有些时候用户可能不希望自己的行为受到追踪,而 Google Analytics 也允许这样的需求。因此也存在让用户不被追踪的选项。
```
...
optOut() {
window['ga-disable-UA-139883813-1'] = true;
}
...
```
@ -233,7 +243,7 @@ via: https://opensourceforu.com/2019/10/all-that-you-can-do-with-google-analytic
作者:[Ashwin Sathian][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,190 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11458-1.html)
[#]: subject: (How to Install and Configure VNC Server on Centos 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
如何在 Centos 8 / RHEL 8 上安装和配置 VNC 服务器
======
VNC<ruby>虚拟网络计算<rt>Virtual Network Computing</rt></ruby>)服务器是基于 GUI 的桌面共享平台,它可让你访问远程桌面计算机。在 Centos 8 和 RHEL 8 系统中,默认未安装 VNC 服务器,它需要手动安装。在本文中,我们将通过简单的分步指南,介绍如何在 Centos 8 / RHEL 8 上安装 VNC 服务器。
### 在 Centos 8 / RHEL 8 上安装 VNC 服务器的先决要求
要在你的系统中安装 VNC 服务器,请确保你的系统满足以下要求:
* CentOS 8 / RHEL 8
* GNOME 桌面环境
* root 用户权限
* DNF / YUM 软件包仓库
### 在 Centos 8 / RHEL 8 上安装 VNC 服务器的分步指导
#### 步骤 1安装 GNOME 桌面环境
在 CentOS 8 / RHEL 8 中安装 VNC 服务器之前请确保已安装了桌面环境DE。如果已经安装了 GNOME 桌面或安装了 GUI 支持,那么可以跳过此步骤。
在 CentOS 8 / RHEL 8 中GNOME 是默认的桌面环境。如果你的系统中没有它,请使用以下命令进行安装:
```
[root@linuxtechi ~]# dnf groupinstall "workstation"
或者
[root@linuxtechi ~]# dnf groupinstall "Server with GUI
```
成功安装上面的包后,请运行以下命令启用图形模式:
```
[root@linuxtechi ~]# systemctl set-default graphical
```
现在重启系统,进入 GNOME 登录页面LCTT 译注:你可以通过切换运行态来进入图形界面)。
```
[root@linuxtechi ~]# reboot
```
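正如上面译注所说,如果不想重启,也可以用下面的命令直接切换到图形运行态(仅作示意):

```
[root@linuxtechi ~]# systemctl isolate graphical.target
```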
重启后,请取消注释 `/etc/gdm/custom.conf` 中的 `WaylandEnable=false`,以使通过 vnc 进行的远程桌面会话请求由 GNOME 桌面的 xorg 处理,来代替 Wayland 显示管理器。
注意:GNOME 的默认显示管理器(GDM)使用的是 Wayland,而它并未配置用于处理 X.org 之类的远程渲染 API。
#### 步骤 2安装 VNC 服务器tigervnc-server
接下来我们将安装 VNC 服务器。可选的 VNC 服务器有很多,这里我们安装 TigerVNC 服务器。它是最受欢迎的 VNC 服务器之一,性能高且跨平台,可以让用户轻松地与远程计算机进行交互。
现在,使用以下命令安装 TigerVNC 服务器:
```
[root@linuxtechi ~]# dnf install tigervnc-server tigervnc-server-module -y
```
#### 步骤 3为本地用户设置 VNC 密码
假设我们希望用户 `pkumar` 通过 VNC 进行远程桌面会话,那就切换到该用户,并使用 `vncpasswd` 命令为其设置密码:
```
[root@linuxtechi ~]# su - pkumar
[root@linuxtechi ~]$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
A view-only password is not used
[root@linuxtechi ~]$
[root@linuxtechi ~]$ exit
logout
[root@linuxtechi ~]#
```
#### 步骤 4设置 VNC 服务器配置文件
下一步是配置 VNC 服务器配置文件。创建含以下内容的 `/etc/systemd/system/vncserver@.service`,以便为上面的本地用户 `pkumar` 启动 tigervnc-server 的服务。
```
[root@linuxtechi ~]# vim /etc/systemd/system/vncserver@.service
[Unit]
Description=Remote Desktop VNC Service
After=syslog.target network.target
[Service]
Type=forking
WorkingDirectory=/home/pkumar
User=pkumar
Group=pkumar
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver -autokill %i
ExecStop=/usr/bin/vncserver -kill %i
[Install]
WantedBy=multi-user.target
```
保存并退出文件。
注意:请将上面文件中的用户名替换为你自己的。
默认情况下VNC 服务器在 tcp 端口 5900+n 上监听,其中 n 是显示端口号,如果显示端口号为 “1”那么 VNC 服务器将在 TCP 端口 5901 上监听其请求。
#### 步骤 5启动 VNC 服务并允许防火墙中的端口
我将显示端口号设置为 1因此请使用以下命令在显示端口号 “1” 上启动并启用 vnc 服务,
```
[root@linuxtechi ~]# systemctl daemon-reload
[root@linuxtechi ~]# systemctl start vncserver@:1.service
[root@linuxtechi ~]# systemctl enable vncserver@:1.service
Created symlink /etc/systemd/system/multi-user.target.wants/vncserver@:1.service → /etc/systemd/system/vncserver@.service.
[root@linuxtechi ~]#
```
使用下面的 `netstat``ss` 命令来验证 VNC 服务器是否开始监听 5901 上的请求,
```
[root@linuxtechi ~]# netstat -tunlp | grep 5901
tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN 8169/Xvnc
tcp6 0 0 :::5901 :::* LISTEN 8169/Xvnc
[root@linuxtechi ~]# ss -tunlp | grep -i 5901
tcp LISTEN 0 5 0.0.0.0:5901 0.0.0.0:* users:(("Xvnc",pid=8169,fd=6))
tcp LISTEN 0 5 [::]:5901 [::]:* users:(("Xvnc",pid=8169,fd=7))
[root@linuxtechi ~]#
```
使用下面的 `systemctl` 命令验证 VNC 服务器的状态,
```
[root@linuxtechi ~]# systemctl status vncserver@:1.service
```
![vncserver-status-centos8-rhel8][2]
上面命令的输出确认在 tcp 端口 5901 上成功启动了 VNC。使用以下命令在系统防火墙中允许 VNC 服务器端口 “5901”
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5901/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```
#### 步骤 6连接到远程桌面会话
现在,我们已经准备就绪,可以查看远程桌面连接是否正常工作。要访问远程桌面,请在 Windows / Linux 工作站中启动 VNC Viewer然后输入 VNC 服务器的 IP 地址和端口号,然后按回车。
![VNC-Viewer-Windows10][3]
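如果你的工作站是 Linux,也可以直接在终端里使用 TigerVNC 自带的客户端连接(示例中的 IP 地址 192.168.1.50 是假设的,请替换为你的 VNC 服务器地址):

```
# 格式为 <服务器地址>:<显示端口号>,显示端口号 1 对应 TCP 端口 5901
$ vncviewer 192.168.1.50:1
```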
接下来,它将询问你的 VNC 密码。输入你先前为本地用户创建的密码,然后单击 “OK” 继续。
![VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server][4]
现在你可以看到远程桌面,
![VNC-Desktop-Screen-CentOS8][5]
就是这样,你已经在 Centos 8 / RHEL 8 中成功安装了 VNC 服务器。
### 总结
希望这篇在 Centos 8 / RHEL 8 上安装 VNC 服务器的分步指南,为你提供了轻松设置 VNC 服务器并访问远程桌面所需的全部信息。请在下面的评论栏中留下你的意见和建议。下篇文章再见,谢谢!
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/cdn-cgi/l/email-protection
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/vncserver-status-centos8-rhel8.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Windows10.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Desktop-Screen-CentOS8.jpg

View File

@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11469-1.html)
[#]: subject: (Command line quick tips: Locate and process files with find and xargs)
[#]: via: (https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
命令行技巧:使用 find 和 xargs 查找和处理文件
======
![][1]
`find` 是日常工具箱中功能强大、灵活的命令行程序之一。它如它名字所暗示的:查找符合你指定条件的文件和目录。借助 `-exec``-delete` 之类的参数,你可以让它对找到的文件进行操作。
在[命令行提示][2]系列的这一期中,你将会看到 `find` 命令的介绍,并学习如何使用内置命令或使用 `xargs` 命令处理文件。
### 查找文件
`find` 至少要加上查找的路径。例如,此命令将查找(并打印)系统上的每个文件:
```
find /
```
由于一切皆文件,因此你会看到大量的输出。这可能无法帮助你找到所需的内容。你可以更改路径参数缩小范围,但这实际上并没有比使用 `ls` 命令更好。因此,你需要考虑要查找的内容。
也许你想在家目录中查找所有 JPEG 文件。 `-name` 参数允许你将结果限制为与给定模式匹配的文件。
```
find ~ -name '*jpg'
```
但是等等!如果其中一些扩展名是大写怎么办? `-iname` 类似于 `-name`,但不区分大小写:
```
find ~ -iname '*jpg'
```
很好!但是 8.3 命名方案出自 1985 年。某些图片的扩展名可能是 .jpeg。幸运的是我们可以将模式使用“或”`-o`)进行组合。括号需要转义,以便使 `find` 命令而不是 shell 程序尝试解释它们。
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
```
更进一步。如果你有一些以 `jpg` 结尾的目录怎么办?(我不懂你为什么将目录命名为 `bucketofjpg` 而不是 `pictures`?)我们可以加上 `-type` 参数来仅查找文件:
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
```
或者,也许你想找到那些名字奇怪的目录,以便之后可以重命名它们:
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
```
最近你拍摄了很多照片,因此使用 `-mtime`(修改时间)将范围缩小到最近一周修改过的文件。 `-7` 表示 7 天或更短时间内修改的所有文件。
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
```
### 使用 xargs 进行操作
`xargs` 命令从标准输入流中获取参数,并基于它们执行命令。继续使用上一节中的示例,假设你要将上周修改过的家目录中的所有 JPEG 文件复制到 U 盘,以便插到电子相册上。假设你已经将 U 盘挂载到 `/media/photo_display`
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -print0 | xargs -0 cp -t /media/photo_display
```
这里的 `find` 命令与以前的版本略有不同。`-print0` 命令让输出有一些更改:它不使用换行符,而是添加了一个 `null` 字符。`xargs` 的 `-0`(零)选项可调整解析以达到预期效果。这很重要,不然对包含空格、引号或其他特殊字符的文件名执行操作可能无法按预期进行。对文件采取任何操作时,都应使用这些选项。
`cp` 命令的 `-t` 参数很重要,因为 `cp` 通常要求目的地址在最后。你可以不使用 `xargs` 而使用 `find``-exec` 执行此操作,但是 `xargs` 的方式会更快,尤其是对于大量文件,因为它会单次调用 `cp`
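作为对照,下面是一个不借助 `xargs`、改用 `find` 自带的 `-exec ... +` 形式完成同样复制任务的示意写法(以 `+` 结尾时,`find` 同样会把多个文件合并进尽可能少的 `cp` 调用):

```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -exec cp -t /media/photo_display '{}' +
```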
### 了解更多
这篇文章仅仅触及了 `find` 功能的表面。`find` 支持基于权限、所有者、访问时间等的测试,它甚至可以将搜索路径中的文件与其他文件进行比较。将这些测试与布尔逻辑相结合,可以为你提供惊人的灵活性,以精确地找到你要查找的文件。使用内置命令或管道传递给 `xargs`,你可以快速处理大量文件。
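例如,下面这条命令把几类测试组合在了一起(其中 `reference.txt` 只是一个假设存在的参照文件):

```
# 查找家目录下属于当前用户、属主可写、且比 reference.txt 更新的普通文件
find ~ -type f -user "$USER" -perm -u=w -newer reference.txt
```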
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg
[2]: https://fedoramagazine.org/?s=command+line+quick+tips
[3]: https://opensource.com/article/18/4/how-use-find-linux
[4]: https://unsplash.com/@wflwong?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[5]: https://unsplash.com/s/photos/search?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,136 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11481-1.html)
[#]: subject: (Top 10 open source video players for Linux)
[#]: via: (https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-linux/)
[#]: author: (Stella Aldridge https://opensourceforu.com/author/stella-aldridge/)
Linux 中的十大开源视频播放器
======
![][1]
> 选择合适的视频播放器有助于确保你获得最佳的观看体验,并为你提供[创建视频网站][3]的工具。你甚至可以根据个人喜好自定义正在观看的视频。
因此,为了帮助你挑选适合你需求的最佳播放器,我们列出了 Linux 中的十大开源播放器。
让我们来看看:
### 1、XBMC Kodi 媒体中心
这是一个灵活的跨平台播放器,核心使用 C++ 编写,并提供 Python 脚本作为附加组件。使用 Kodi 的好处包括:
* 提供超过 69 种语言版本
* 用户可以从网络和本地存储播放音频、视频和媒体播放文件
* 可与 JeOS 一起作为应用套件用于智能电视和机顶盒等设备
* 有很多不错的附加组件,如视频和音频流插件、主题、屏幕保护程序等
* 它支持多种格式,如 MPEG-1、2、4、RealVideo、HVC、HEVC 等
### 2、VLC 媒体播放器
由于该播放器在一系列操作系统上具有令人印象深刻的功能和可用性,它出现在列表上是理所当然的。它使用 C、C++ 和 Objective C 编写用户无需使用插件这要归功于它对解码库的广泛支持。VLC 媒体播放器的优势包括:
* 在 Linux 上支持 DVD 播放器
* 能够播放 .iso 文件
* 能够播放高清录制的 D-VHS 磁带
* 可以直接从 U 盘或外部驱动器运行
* API 支持和浏览器支持(通过插件)
### 3、BomiCMPlayer
这个灵活和强大的播放器被许多普通用户选择,它的优势有:
* 易于使用的图形用户界面GUI
* 令人印象深刻的播放能力
* 可以恢复播放
* 支持字幕,可以渲染多个字幕文件
![][4]
### 4、Miro 音乐与视频播放器
以前被称为 Democracy PlayerDTVMiro 由<ruby>参与文化基金会<rt>Participatory Culture Foundation</rt></ruby>重新开发,是一个不错的跨平台音频视频播放器。令人印象深刻,因为:
* 支持一些高清音频和视频
* 提供超过 40 种语言版本
* 可以播放多种文件格式例如QuickTime、WMV、MPEG 文件、AVI、XVID
* 一旦可用,可以自动通知用户并下载视频
### 5、SMPlayer
这个跨平台的媒体播放器,只使用 C++ 的 Qt 库编写,它是一个强大的多功能播放器。我们喜欢它,因为:
* 有多语言选择
* 支持所有默认格式
* 支持 EDL 文件,你可以配置从互联网获取的字幕
* 可从互联网下载的各种皮肤
* 倍速播放
### 6、MPV 播放器
它用 C、Objective-C、Lua 和 Python 编写,免费、易于使用,并且有许多新功能,便于使用。主要加分是:
* 可以编译为一个库,公开客户端 API从而增强控制
* 允许媒体编码
* 平滑动画
### 7、Deepin Movie
此播放器是开源媒体播放器的一个极好的例子,它有很多优势,包括:
* 通过键盘完成所有播放操作
* 各种格式的视频文件可以通过这个播放器轻松播放
* 流媒体功能能让用户享受许多在线视频资源
### 8、Gnome 视频
以前称为 Totem这是 Gnome 桌面环境的播放器。
完全用 C 编写,使用 GStreamer 多媒体框架构建,高于 2.7.1 的版本使用 xine 作为后端。它是很棒的,因为:
它支持大量的格式,包括:
* SHOUTcast、SMIL、M3U、Windows 媒体播放器格式等
* 你可以在播放过程中调整灯光设置,如亮度和对比度
* 加载 SubRip 字幕
* 支持从互联网频道(如 Apple直接播放视频
### 9、Xine 多媒体播放器
我们列表中用 C 编写的另外一个跨平台多媒体播放器。这是一个全能播放器,因为:
* 它支持物理媒体以及视频设备,支持 3gp、MKV、MOV、MP4 等格式及多种音频格式
* 网络协议V4L、DVB 和 PVR 等
* 它可以手动校正音频和视频流的同步
### 10、ExMPlayer
最后但同样重要的一个ExMPlayer 是一个惊人的、强大的 MPlayer 的 GUI 前端。它的优点包括:
* 可以播放任何媒体格式
* 支持网络流和字幕
* 易于使用的音频转换器
* 高品质的音频提取,而不会影响音质
上面这些视频播放器在 Linux 上工作得很好。我们建议你尝试一下,选择一个最适合你的播放器。
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-linux/
作者:[Stella Aldridge][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/stella-aldridge/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_50337841_l-2015.jpg?resize=696%2C585&ssl=1 (Depositphotos_50337841_l-2015)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_50337841_l-2015.jpg?fit=900%2C756&ssl=1
[3]: https://www.ning.com/create-video-website/
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_20380441_l-2015.jpg?resize=350%2C231&ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_20380441_l-2015.jpg?ssl=1

View File

@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11462-1.html)
[#]: subject: (Viewing files and processes as trees on Linux)
[#]: via: (https://www.networkworld.com/article/3444589/viewing-files-and-processes-as-trees-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
在 Linux 上以树状查看文件和进程
======
> 介绍三个 Linux 命令ps、pstree 和 tree 以类似树的格式查看文件和进程。
![](https://img.linux.net.cn/data/attachment/album/201910/15/093202rwm5k9pnpntgbtpr.jpg)
[Linux][3] 提供了一些方便的命令,用于以树状分支形式查看文件和进程,从而易于查看它们之间的关系。在本文中,我们将介绍 `ps`、`pstree` 和 `tree` 命令以及它们提供的一些选项,这些选项可帮助你将注意力集中在要查看的内容上。
### ps
我们用来列出进程的 `ps` 命令有一些有趣的选项,但是很多人从来没有利用过。虽然常用的 `ps -ef` 提供了正在运行的进程的完整列表,但是 `ps -ejH` 命令增加了一个不错的效果。它缩进了相关的进程以使这些进程之间的关系在视觉上更加清晰——就像这个片段:
```
$ ps -ejH
@ -29,17 +31,9 @@ $ ps -ejH
30968 30968 28410 pts/0 00:00:00 ps
```
可以看到,正在运行的 `ps` 进程是在 `bash` 中运行的,而 `bash` 是在 ssh 会话中运行的。
`-exjf` 选项字符串提供了类似的视图,但是带有一些其它细节和符号以突出显示进程的层次结构性质:
```
$ ps -exjf
@ -52,19 +46,17 @@ PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
28410 31028 31028 28410 pts/0 31028 R+ 1000 0:00 \_ ps axjf
```
命令中使用的这些选项表示:
```
-e 选择所有进程
-j 使用工作格式
-f 提供完整格式列表
-H 分层显示进程(如,树状格式)
-x 取消“必须与 tty 相关联”的限制
```
同时,该命令也有一个 `--forest` 选项提供了类似的视图。
```
$ ps -ef --forest
@ -77,13 +69,11 @@ shs 28410 28409 0 12:56 pts/0 00:00:00 \_ -bash
shs 32351 28410 0 14:39 pts/0 00:00:00 \_ ps -ef --forest
```
注意,这些示例只是这些命令如何使用的示例。你可以选择最适合你的进程视图的任何选项组合。
### pstree
使用 `pstree` 命令可以获得类似的进程视图。尽管 `pstree` 具备了许多选项,但是该命令本身就提供了非常有用的显示。注意,许多父子进程关系显示在单行而不是后续行上。
```
$ pstree
@ -99,7 +89,7 @@ $ pstree
│ └─xdg-permission-───2*[{xdg-permission-}]
```
通过 `-n` 选项,`pstree` 以数值(按进程 ID顺序显示进程
```
$ pstree -n
@ -130,17 +120,17 @@ systemd─┬─systemd-journal
├─sshd───sshd───sshd───bash───pstree
```
使用 `pstree` 时可以考虑的一些选项包括 `-a`(包括命令行参数)和 `-g`(包括进程组)。
以下是一些简单的示例(片段)。
命令 `pstree -a` 的输出内容:
```
└─wpa_supplicant -u -s -O /run/wpa_supplicant
```
命令 `pstree -g` 的输出内容:
```
├─sshd(1396)───sshd(28281)───sshd(28281)───bash(28410)───pstree(1115)
@ -148,9 +138,9 @@ Output from **pstree -g**:
### tree
虽然 `tree` 命令听起来与 `pstree` 非常相似,但这是用于查看文件而非进程的命令。它提供了一个漂亮的树状目录和文件视图。
如果你使用 `tree` 命令查看 `/proc` 目录,你显示的开头部分将类似于这个:
```
$ tree /proc
@ -178,9 +168,9 @@ $ tree /proc
...
```
如果以 root 权限运行这条命令(`sudo tree /proc`),你将会看到更多详细信息,因为 `/proc` 目录的许多内容对于普通用户而言是无法访问的。
命令 `tree -d` 将会限制仅显示目录。
```
$ tree -d /proc
@ -205,7 +195,7 @@ $ tree -d /proc
...
```
使用 `-f` 选项,`tree` 命令会显示完整的路径。
```
$ tree -f /proc
@ -228,9 +218,7 @@ $ tree -f /proc
...
```
分层显示通常可以使进程和文件之间的关系更容易理解。可用选项的数量很多,而你总可以找到一些视图,帮助你查看所需的内容。
--------------------------------------------------------------------------------
@ -238,8 +226,8 @@ via: https://www.networkworld.com/article/3444589/viewing-files-and-processes-as
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,124 @@
[#]: collector: (lujun9972)
[#]: translator: (singledo)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11470-1.html)
[#]: subject: (How to Unzip a Zip File in Linux [Beginners Tutorial])
[#]: via: (https://itsfoss.com/unzip-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
新手教程:如何在 Linux 下解压 Zip 文件
======
> 本文将会向你展示如何在 Ubuntu 和其他 Linux 发行版本上解压文件。终端和图形界面的方法都会讨论。
[Zip][1] 是一种创建压缩存档文件的最普通、最流行的方法。它也是一种古老的文件归档文件格式,这种格式创建于 1989 年。由于它的广泛使用,你会经常遇见 zip 文件。
在更早的一份教程里,我介绍了[如何在 Linux 上用 zip 压缩一个文件夹][2]。在这篇面向初学者的快速教程中,我会介绍如何在 Linux 上解压文件。
### 先决条件:检查你是否安装了 `unzip`
为了解压 zip 归档文件,你必须在系统上安装 unzip 软件包。大多数现代的 Linux 发行版都自带了解压 zip 文件的支持,但验证一下总没有坏处,以免之后遇到意外。
在基于 [Ubuntu][3] 和 [Debian][4] 的发行版上,你可以使用下面的命令来安装 `unzip`。如果已经安装过,它会提示你已安装。
```
sudo apt install unzip
```
一旦你能够确认你的系统中安装了 `unzip`,你就可以通过 `unzip` 来解压 zip 归档文件。
你也能够使用命令行或者图形工具来达到目的,我会向你展示两种方法:
### 使用命令行解压文件
在 Linux 下使用 `unzip` 命令是非常简单的。在你放 zip 文件的目录,用下面的命令:
```
unzip zipped_file.zip
```
你可以给 zip 文件提供解压路径而不是解压到当前所在路径。你会在终端输出中看到提取的文件:
```
unzip metallic-container.zip -d my_zip
Archive: metallic-container.zip
inflating: my_zip/625993-PNZP34-678.jpg
inflating: my_zip/License free.txt
inflating: my_zip/License premium.txt
```
上面的命令有一个小问题。它会提取 zip 文件中所有的内容到现在的文件夹。你会在当前文件夹下留下一堆没有组织的文件,这不是一件很好的事情。
#### 解压到文件夹下
在 Linux 命令行下,对于把文件解压到一个文件夹下是一个好的做法。这种方式下,所有的提取文件都会被存储到你所指定的文件夹下。如果文件夹不存在,会创建该文件夹。
```
unzip zipped_file.zip -d unzipped_directory
```
现在 `zipped_file.zip` 中所有的内容都会被提取到 `unzipped_directory` 中。
由于我们在讨论好的做法,这里有另一个注意点,我们可以查看压缩文件中的内容而不用实际解压。
#### 查看压缩文件中的内容而不解压压缩文件
```
unzip -l zipped_file.zip
```
下面是该命令的输出:
```
unzip -l metallic-container.zip
Archive: metallic-container.zip
Length Date Time Name
--------- ---------- ----- ----
6576010 2019-03-07 10:30 625993-PNZP34-678.jpg
1462 2019-03-07 13:39 License free.txt
1116 2019-03-07 13:39 License premium.txt
--------- -------
6578588 3 files
```
在 Linux 下,还有些 `unzip` 的其它用法,但我想你现在已经对在 Linux 下使用解压文件有了足够的了解。
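举个例子(仅作示意):你还可以只提取压缩包中的某一个文件,`-o` 表示直接覆盖已存在的同名文件(文件名沿用上面的示例压缩包):

```
unzip -o metallic-container.zip "License free.txt" -d my_zip
```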
### 使用图形界面来解压文件
如果你使用桌面版 Linux那你就不必总是使用终端。在图形化的界面下我们又要如何解压文件呢? 我使用的是 [GNOME 桌面][7],不过其它桌面版 Linux 发行版也大致相同。
打开文件管理器,然后跳转到 zip 文件所在的文件夹下。在文件上点击鼠标右键,你会在弹出的窗口中看到 “提取到这里”,选择它。
![Unzip File in Ubuntu][8]
`unzip` 命令不同这个提取选项会创建一个和压缩文件名相同的文件夹LCTT 译注:文件夹没有 `.zip` 扩展名),并且把压缩文件中的所有内容存储到创建的文件夹下。相对于 `unzip` 命令的默认行为是将压缩文件提取到当前所在的文件下,图形界面的解压对于我来说是一件非常好的事情。
这里还有一个选项“提取到……”,你可以选择特定的文件夹来存储提取的文件。
你现在知道如何在 Linux 解压文件了。你也许还对学习[在 Linux 下使用 7zip][9] 感兴趣?
--------------------------------------------------------------------------------
via: https://itsfoss.com/unzip-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[octopus](https://github.com/singledo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
[2]: https://itsfoss.com/linux-zip-folder/
[3]: https://ubuntu.com/
[4]: https://www.debian.org/
[7]: https://gnome.org/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/unzip-files-ubuntu.jpg?ssl=1
[9]: https://itsfoss.com/use-7zip-ubuntu-linux/

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11476-1.html)
[#]: subject: (Use sshuttle to build a poor mans VPN)
[#]: via: (https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
使用 sshuttle 构建一个穷人的虚拟专网
======
![][1]
如今,企业网络经常使用“虚拟专用网络”[来保证员工通信安全][2]。但是,使用的协议有时会降低性能。如果你可以使用 SSH 连接远程主机,那么你可以设置端口转发。但这可能会很痛苦,尤其是在你需要与该网络上的许多主机一起使用的情况下。试试 `sshuttle`,它可以通过 SSH 访问来设置快速简易的虚拟专网。请继续阅读以获取有关如何使用它的更多信息。
`sshuttle` 正是针对上述情况而设计的。远程端的唯一要求是主机必须有可用的 Python。这是因为 `sshuttle` 会构造并运行一些 Python 代码来帮助传输数据。
### 安装 sshuttle
`sshuttle` 被打包在官方仓库中,因此很容易安装。打开一个终端,并[使用 sudo][3] 来运行以下命令:
```
$ sudo dnf install sshuttle
```
安装后,你可以在手册页中找到相关信息:
```
$ man sshuttle
```
### 设置虚拟专网
最简单的情况就是将所有流量转发到远程网络。这不一定是一个疯狂的想法,尤其是如果你不在自己家里这样的受信任的本地网络中。将 `-r` 选项与 SSH 用户名和远程主机名一起使用:
```
$ sshuttle -r username@remotehost 0.0.0.0/0
```
但是,你可能希望将该虚拟专网限制为特定子网,而不是所有网络流量。(有关子网的完整讨论超出了本文的范围,但是你可以在[维基百科][4]上阅读更多内容。)假设你的办公室内部使用了预留的 A 类子网 10.0.0.0 和预留的 B 类子网 172.16.0.0。上面的命令变为:
```
$ sshuttle -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```
这非常适合通过 IP 地址访问远程网络的主机。但是,如果你的办公室是一个拥有大量主机的大型网络,该怎么办?名称可能更方便,甚至是必须的。不用担心,`sshuttle` 还可以使用 `dns` 选项转发 DNS 查询:
```
$ sshuttle --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```
要使 `sshuttle` 以守护进程方式运行,请加上 `-D` 选项。它会以 syslog 兼容的日志格式发送到 systemd 日志中。
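例如,下面是一条把这些选项组合起来的示例命令(用户名、主机名与子网沿用上文的假设):

```
$ sshuttle -D --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```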
根据本地和远程系统的功能,可以将 `sshuttle` 用于基于 IPv6 的虚拟专网。如果需要,你还可以设置配置文件并将其与系统启动集成。如果你想阅读更多有关 `sshuttle` 及其工作方式的信息,请[查看官方文档][5]。要查看代码,请[进入 GitHub 页面][6]。
*题图由 [Kurt Cotoaga][7] 拍摄并发表在 [Unsplash][8] 上。*
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/
作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/sshuttle-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Virtual_private_network
[3]: https://fedoramagazine.org/howto-use-sudo/
[4]: https://en.wikipedia.org/wiki/Subnetwork
[5]: https://sshuttle.readthedocs.io/en/stable/index.html
[6]: https://github.com/sshuttle/sshuttle
[7]: https://unsplash.com/@kydroon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/s/photos/shuttle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,136 @@
[#]: collector: (lujun9972)
[#]: translator: (algzjh)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11474-1.html)
[#]: subject: (4 Free and Open Source Alternatives to Adobe Photoshop)
[#]: via: (https://itsfoss.com/open-source-photoshop-alternatives/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Adobe Photoshop 的 4 种自由开源替代品
======
> 想寻找免费的 Photoshop 替代品?这里有一些最好的自由开源软件,你可以用它们来代替 Adobe Photoshop。
Adobe Photoshop 是一个可用于 Windows 和 macOS 的高级图像编辑和设计工具。毫无疑问,几乎每个人都知道它,它十分受欢迎。在 Linux 上,你可以在虚拟机中使用 Windows 或[通过 Wine][1] 来使用 Photoshop,但这并不是一种理想的体验。
一般来说,我们没有太多可以替代 Adobe Photoshop 的选项。然而,在本文中,我们将提到一些在 Linux 上可用的最佳的开源 Photoshop 替代品(也支持跨平台)。
请注意 Photoshop 不仅仅是一个图片编辑器。摄影师、数码艺术家、专业编辑使用它用于各种用途。此处的替代软件可能不具备 Photoshop 的所有功能,但你可以将它们用于在 Photoshop 中完成的各种任务。
### 适用于 Linux、Windows 和 macOS 的 Adobe Photoshop 的开源替代品
![][2]
最初,我想只关注 Linux 中的 Photoshop 替代品,但为什么要把这个列表局限于 Linux 呢?其他操作系统用户也可使用开源软件。
**如果你正在使用 Linux则所有提到的软件都应该可以在你的发行版的存储库中找到。你可以使用软件中心或包管理器进行安装。**
对于其他平台,请查看官方项目网站以获取安装文件。
*该列表没有特定的排名顺序*
#### 1、GIMP真正的 Photoshop 替代品
![][3]
主要特点:
* 可定制的界面
* 数字级修饰
* 照片增强(使用变换工具)
* 支持广泛的硬件(压敏平板、音乐数字接口等)
* 几乎支持所有主要的图像文件
* 支持图层管理
可用平台Linux、Windows 和 macOS
[GIMP][4] 是我处理任何事情的必备工具,无论任务多么基础/高级。也许,这是你在 Linux 下最接近 Photoshop 的替代品。除此之外,它还是一个开源和免费的解决方案,适合希望在 Linux 上创作伟大作品的艺术家。
它具有任何类型的图像处理所必需的所有功能。当然,还有图层管理支持。根据你的经验水平,利用率会有所不同。因此,如果你想充分利用它,则应阅读 [文档][5] 并遵循 [官方教程][6]。
#### 2、Krita
![][7]
主要特点:
* 支持图层管理
* 转换工具
* 丰富的笔刷/绘图工具
可用平台Linux、Windows 和 macOS
[Krita][8] 是一个令人印象深刻的开源的数字绘画工具。图层管理支持和转换工具的存在使它成为 Photoshop 的基本编辑任务的替代品之一。
如果你喜欢素描/绘图,这将对你很有帮助。
#### 3、Darktable
![][9]
主要特点:
* RAW 图像显影
* 支持多种图像格式
* 多个带有混合运算符的图像操作模块
可用平台Linux、Windows 和 macOS
[Darktable][10] 是一个由摄影师制作的开源摄影工作流应用程序。它可以让你在数据库中管理你的数码底片。从你的收藏中,显影 RAW 格式的图像并使用可用的工具对其进行增强。
从基本的图像编辑工具到支持混合运算符的多个图像模块,你将在探索中发现许多。
#### 4、Inkscape
![][11]
主要特点:
* 创建对象的工具(最适合绘图/素描)
* 支持图层管理
* 用于图像处理的转换工具
* 颜色选择器RGB、HSL、CMYK、色轮、CMS
* 支持所有主要文件格式
可用平台Linux、Windows 和 macOS
[Inkscape][12] 是一个非常流行的开源矢量图形编辑器,许多专业人士都使用它。它提供了灵活的设计工具,可帮助你创作漂亮的艺术作品。从技术上说,它是 Adobe Illustrator 的直接替代品,但它也提供了一些技巧,可以帮助你将其作为 Photoshop 的替代品。
与 GIMP 的官方资源类似,你可以利用 [Inkscape 的教程][13] 来最大程度地利用它。
### 在你看来,真正的 Photoshop 替代品是什么?
很难提供与 Adobe Photoshop 完全相同的功能。然而,如果你遵循官方文档和资源,则可以使用上述 Photoshop 替代品做很多很棒的事情。
Adobe 提供了一系列的图形工具,并且我们有 [整个 Adobe 创意套件的开源替代方案][14]。 你也可以去看看。
你觉得我们在此提到的 Photoshop 替代品怎么样?你是否知道任何值得提及的更好的替代方案?请在下面的评论中告诉我们。
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-photoshop-alternatives/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[algzjh](https://github.com/algzjh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-latest-wine/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/open_source_photoshop_alternatives.png?ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/08/gimp-screenshot.jpg?ssl=1
[4]: https://www.gimp.org/
[5]: https://www.gimp.org/docs/
[6]: https://www.gimp.org/tutorials/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/krita-paint.png?ssl=1
[8]: https://krita.org/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/darktable.jpg?ssl=1
[10]: https://www.darktable.org/
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/12/inkscape-screenshot.jpg?ssl=1
[12]: https://inkscape.org/
[13]: https://inkscape.org/learn/
[14]: https://itsfoss.com/adobe-alternatives-linux/

View File

@ -1,58 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Samsung introduces SSDs it claims will 'never die')
[#]: via: (https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Samsung introduces SSDs it claims will 'never die'
======
New fail-in-place technology in Samsung's SSDs will allow the chips to gracefully recover from chip failure.
Samsung
[Solid-state drives][1] (SSDs) operate by writing to cells within the chip, and after so many writes, the cell eventually dies off and can no longer be written to. For that reason, SSDs have more actual capacity than listed. A 1TB drive, for example, has about 1.2TB of capacity, and as chips die off from repeated writes, new ones are brought online to keep the 1TB capacity.
But that's for gradual wear. Sometimes SSDs just up and die completely, without warning, after a whole chip fails, not just a few cells. So Samsung is trying to address that with a new generation of SSD memory chips with a technology it calls fail-in-place (FIP).
FIP technology allows a drive to cope with a failure by working around the dead chip and allowing the SSD to keep operating and just not using the bad chip. You will have less storage, but in all likelihood that drive will be replaced anyway, so this helps prevent data loss.
FIP also scans the data for any damage before copying it to the remaining NAND, which would be the first time I've ever seen an SSD with built-in data recovery.
### Built-in virtualization and machine learning technology
The new Samsung SSDs come with two other software innovations. The first is built-in virtualization technology, which allows a single SSD to be divided up into up to 64 smaller drives for a virtual environment.
The second is V-NAND machine learning technology, which helps to "accurately predict and verify cell characteristics, as well as detect any variation among circuit patterns through big data analytics," as Samsung put it. Doing so means much higher levels of performance from the drive.
As you can imagine, this technology is aimed at enterprises and large-scale data centers, not consumers. All told, Samsung is launching 19 models of these new SSDs under the names PM1733 and PM1735.
The PM1733 line features six models in a 2.5-inch U.2 form factor, offering storage capacity of between 960GB and 15.63TB, as well as four HHHL card-type drives with capacity ranging from 1.92TB to 30.72TB of storage. Each drive is guaranteed for one drive write per day (DWPD) for five years. In other words, the warranty is good for writing the equivalent of the drive's total capacity once per day every day for five years.
The PM1735 drives have lower capacity, maxing out at 12.8TB, but they are far more durable, guaranteeing three DWPD for five years. Both drives support PCI Express 4, which has double the throughput of the widely used PCI Express 3. The PM1735 offers nearly 14 times the sequential performance of a SATA-based SSD, with 8GB/s for read operations and 3.8GB/s for writes.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[2]: https://www.idginsiderpro.com/article/3409019/inside-hyperconvergence-combining-compute-storage-and-networking.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale)
[#]: via: (https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale
======
* _**The Foundation aims to make the database search engine “the fastest and most reliable SQL engine for massively distributed data processing.”**_
* _**Presto's architecture allows users to query a variety of data sources and move at scale and speed.**_
![Facebook][1]
Facebook, Uber, Twitter and Alibaba have joined hands to form a foundation to help Presto, a database search engine and processing tool, scale and diversify its community.
Presto will now be hosted under the Linux Foundation, the U.S.-based non-profit organization announced on Monday.
The newly established Presto Foundation will operate under a community governance model with representation from each of the founding members. It aims to make the engine “the fastest and most reliable SQL engine for massively distributed data processing.”
“The Linux Foundation is excited to work with the Presto community, collaborating to solve the increasing problem of massive distributed data processing at internet scale,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation.
**Presto can run on large clusters of machines**
Presto was developed at Facebook in 2012 as a high-performance distributed SQL query engine for large scale data analytics. Presto's architecture allows users to query a variety of data sources such as Hadoop, S3, Alluxio, MySQL, PostgreSQL, Kafka, MongoDB and move at scale and speed.
It can query data where it is stored without needing to move the data to a separate system. Its in-memory and distributed query processing results in query latencies of seconds to minutes.
“Presto has been designed for high performance exabyte-scale data processing on a large number of machines. Its flexible design allows processing data from a wide variety of data sources. From day one Presto has been designed with efficiency, scalability and reliability in mind, and it has been improved over the years to take on additional use cases at Facebook, such as batch and other application specific interactive use cases,” said Nezih Yigitbasi, Engineering Manager of Presto at Facebook.
Presto is being used by over a thousand Facebook employees for running several million queries and processing petabytes of data per day, according to Kathy Kam, Head of Open Source at Facebook.
**Expanding community for the benefit of all**
Facebook released the source code of Presto to developers in 2013 in the hope that other companies would help to drive the future direction of the project.
“It turns out many other companies were interested and so under The Linux Foundation, we believe the project can engage others and grow the community for the benefit of all,” said Kathy Kam.
Uber's data platform architecture uses Presto to extract critical insights from aggregated data. “Uber is honoured to partner with the Linux Foundation and major contributors from the tech community to bring the Presto Foundation to life. Our goal is to help create an open and collaborative community in which Presto developers can thrive,” asserted Brian Hsieh, Head of Open Source at Uber.
Liang Lin, Senior Director of Alibaba OLAP products, believes that the collaboration would eventually benefit the community as well as Alibaba and its customers.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/06/Facebook-Like.jpg?resize=350%2C213&ssl=1

View File

@ -1,65 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco: 13 IOS, IOS XE security flaws you should patch now)
[#]: via: (https://www.networkworld.com/article/3441221/cisco-13-ios-ios-xe-security-flaws-you-should-patch-now.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco: 13 IOS, IOS XE security flaws you should patch now
======
Cisco says vulnerabilities in IOS/IOS XE could cause DOS situation; warns on Traceroute setting
Woolzian / Getty Images
Cisco this week warned its IOS and IOS XE customers of 13 vulnerabilities in the operating system software they should patch as soon as possible.
All of the vulnerabilities revealed in the company's semiannual [IOS and IOS XE Software Security Advisory Bundle][1] have a security impact rating (SIR) of "high". Successful exploitation of the vulnerabilities could allow an attacker to gain unauthorized access to, conduct a command injection attack on, or cause a denial of service (DoS) condition on an affected device, Cisco stated.
["How to determine if Wi-Fi 6 is right for you"][2]
Two of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software. Two others affect Cisco IOS Software, and eight of the vulnerabilities affect Cisco IOS XE Software. The final one affects the Cisco IOx application environment. Cisco has confirmed that none of the vulnerabilities affect Cisco IOS XR Software or Cisco NX-OS Software.  Cisco [has released software updates][3] that address these problems.
Some of the worst exposures include:
* A [vulnerability in the IOx application environment][4] for Cisco IOS Software could let an authenticated, remote attacker gain unauthorized access to the Guest Operating System (Guest OS) running on an affected device. The vulnerability is due to incorrect role-based access control (RBAC) evaluation when a low-privileged user requests access to a Guest OS that should be restricted to administrative accounts. An attacker could exploit this vulnerability by authenticating to the Guest OS by using the low-privileged-user credentials. An exploit could allow the attacker to gain unauthorized access to the Guest OS as root. This vulnerability affects Cisco 800 Series Industrial Integrated Services Routers and Cisco 1000 Series Connected Grid Routers (CGR 1000) that are running a vulnerable release of Cisco IOS Software with Guest OS installed. While Cisco did not rate this vulnerability as critical, it did have a Common Vulnerability Scoring System (CVSS) score of 9.9 out of 10. Cisco recommends disabling the guest feature until a proper fix is installed.
* An exposure in the [Ident protocol handler of Cisco IOS and IOS XE][5] software could allow a remote attacker to cause an affected device to reload. The problem exists because the affected software incorrectly handles memory structures, leading to a NULL pointer dereference, Cisco stated. An attacker could exploit this vulnerability by opening a TCP connection to specific ports and sending traffic over that connection. A successful exploit could let the attacker cause the affected device to reload, resulting in a denial of service (DoS) condition. This vulnerability affects Cisco devices that are running a vulnerable release of Cisco IOS or IOS XE Software and that are configured to respond to Ident protocol requests.
* A vulnerability in the [common Session Initiation Protocol (SIP) library][6] of Cisco IOS and IOS XE Software could let an unauthenticated, remote attacker trigger a reload of an affected device, resulting in a denial of service (DoS). The vulnerability is due to insufficient sanity checks on an internal data structure. An attacker could exploit this vulnerability by sending a sequence of malicious SIP messages to an affected device. An exploit could allow the attacker to cause a NULL pointer dereference, resulting in a crash of the _iosd_ process. This triggers a reload of the device, Cisco stated.
* A [vulnerability in the ingress packet-processing][7] function of Cisco IOS Software for Cisco Catalyst 4000 Series Switches could let an aggressor cause a denial of service (DoS). The vulnerability is due to improper resource allocation when processing TCP packets directed to the device on specific Cisco Catalyst 4000 switches. An attacker could exploit this vulnerability by sending crafted TCP streams to an affected device. A successful exploit could cause the affected device to run out of buffer resources, impairing operations of control-plane and management-plane protocols, resulting in a DoS condition. This vulnerability can be triggered only by traffic that is destined to an affected device and cannot be exploited using traffic that transits an affected device, Cisco stated.
In addition to the warnings, Cisco also [issued an advisory][8] for users to deal with problems in its IOS and IOS XE Layer 2 (L2) traceroute utility program. The traceroute utility identifies the L2 path that a packet takes from a source device to a destination device.
Cisco said that by design, the L2 traceroute server does not require authentication, but it allows certain information about an affected device to be read, including Hostname, hardware model, configured interfaces, IP addresses and other details.  Reading this information from multiple switches in the network could allow an attacker to build a complete L2 topology map of that network.
Depending on whether the L2 traceroute feature is used in the environment and whether the Cisco IOS or IOS XE Software release supports the CLI commands to implement the respective option, Cisco said there are several ways to secure the L2 traceroute server: disable it, restrict access to it through infrastructure access control lists (iACLs), restrict access through control plane policing (CoPP), and upgrade to a software release that disables the server by default.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3441221/cisco-13-ios-ios-xe-security-flaws-you-should-patch-now.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-72547
[2]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
[3]: https://tools.cisco.com/security/center/softwarechecker.x
[4]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-ios-gos-auth
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-identd-dos
[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-sip-dos
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-cat4000-tcp-dos
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-l2-traceroute
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -1,51 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MG Motor Announces Developer Program and Grant in India)
[#]: via: (https://opensourceforu.com/2019/09/mg-motor-announces-developer-program-and-grant-in-india/)
[#]: author: (Mukul Yudhveer Singh https://opensourceforu.com/author/mukul-kumar/)
MG Motor Announces Developer Program and Grant in India
======
[![][1]][2]
* _**Launched in partnership with Adobe, Cognizant, SAP, Airtel, TomTom and Unlimit**_
* _**Initiative provides developers to build innovative mobility applications and experiences**_
![][3]

MG Motor India has today announced the introduction of its MG Developer Program and Grant. Launched in collaboration with leading technology companies such as SAP, Cognizant, Adobe, Airtel, TomTom and Unlimit, the initiative is aimed at incentivizing Indian innovators and developers to build futuristic mobility applications and experiences. The program also brings in TiE Delhi NCR as the ecosystem partner.
Rajeev Chaba, president &amp; MD, MG Motor India said, “The automobile industry is currently witnessing sweeping transformations in the space of connected, electric and shared mobility. MG aims to take this revolution forward with its focus on attaining technological leadership in the automotive industry. We have partnered with leading tech giants to enable start-ups to build innovative applications that would enable unique experiences for customers across the entire automotive ecosystem. More partners are likely to join the program in due course.”
The company is encouraging developers to send in their ideas to the MG India Team. During the program, selected ideas will get access to resources from the likes of Airtel, SAP, Adobe, Unlimit and Cognizant.
**Grants of up to Rs 25 lakhs (2.5 million) for start-ups and innovators**
As part of the MG Developer Program & Grant, MG Motor India will provide innovators with an unparalleled opportunity to secure mentorship and funding from industry leaders. Shortlisted ideas will receive specialized, high-level mentoring and networking opportunities to assist with the practical development of the solution, business plan and modelling, testing facilities, go-to-market strategy, etc. Winning ideas will also have access to a grant, the amount of which will be decided by the jury, on a case-to-case basis.
The MG Developer Program & Grant will initially focus on driving innovation in the following verticals: electric vehicles and components, batteries and management, charging infrastructure, connected mobility, voice recognition, AI & ML, navigation technologies, customer experiences, car buying experiences, and autonomous vehicles.
“The MG Developer & Grant Program is the latest in a series of initiatives as part of our commitment to innovation as a core organizational pillar. The program will ensure proper mentoring from over 20 industry leaders for start-ups, laying a foundation for them to excel in the future and trigger a stream of newer Internet Car use-cases that will, in turn, drive adoption of new technologies within the Indian automotive ecosystem. It has been our commitment in the market and Innovation is our key pillar,” added Chaba.
The program will award grants ranging from INR 5 lakhs to INR 25 lakhs. The program will be open to both external developers (including students, innovators, inventors, start-ups and other tech companies) and internal employee teams at MG Motor and its program partners.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/mg-motor-announces-developer-program-and-grant-in-india/
作者:[Mukul Yudhveer Singh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/mukul-kumar/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?resize=660%2C440&ssl=1 (MG Developer program)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?fit=660%2C440&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?resize=350%2C233&ssl=1

View File

@ -1,77 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora projects for Hacktoberfest)
[#]: via: (https://fedoramagazine.org/fedora-projects-for-hacktoberfest/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Fedora projects for Hacktoberfest
======
![][1]
It's October! That means it's time for the annual [Hacktoberfest][2] presented by DigitalOcean and DEV. Hacktoberfest is a month-long event that encourages contributions to open source software projects. Participants who [register][3] and submit at least four pull requests to GitHub-hosted repositories during the month of October will receive a free t-shirt.
In a recent Fedora Magazine article, I listed some areas where would-be contributors could [get started contributing to Fedora][4]. In this article, I highlight some specific projects that provide an opportunity to help Fedora while you participate in Hacktoberfest.
### Fedora infrastructure
  * [Bodhi][5] — When a package maintainer builds a new version of a software package to fix bugs or add new features, it doesn't go out to users right away. First it spends time in the updates-testing repository where it can receive some real-world usage. Bodhi manages the flow of updates from the testing repository into the updates repository and provides a web interface for testers to provide feedback.
* [the-new-hotness][6] — This project listens to [release-monitoring.org][7] (which is also on [GitHub][8]) and opens a Bugzilla issue when a new upstream release is published. This allows package maintainers to be quickly informed of new upstream releases.
  * [koschei][9] — Koschei enables continuous integration for Fedora packages. It runs a service that scratch-rebuilds RPM packages in a Koji instance when their build dependencies change or after some time elapses.
* [MirrorManager2][10] — Distributing Fedora packages to a global user base requires a lot of bandwidth. Just like developing Fedora, distributing Fedora is a collaborative effort. MirrorManager2 tracks the hundreds of public and private mirrors and routes each user to the “best” one.
  * [fedora-messaging][11] — Actions within the Fedora community (from source code commits to participating in IRC meetings, and much more) generate messages that can be used to perform automated tasks or send notifications. fedora-messaging is the tool set that makes sending and receiving these messages possible (a small publishing sketch follows this list).
  * [fedocal][12] — When is that meeting? Which IRC channel was it in again? Fedocal is the calendar system used by teams in the Fedora community to coordinate meetings. Not only is it a good Hacktoberfest project, it's also [looking for a new maintainer][13] to adopt it.
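As a taste of what working with fedora-messaging involves, here is a minimal sketch of publishing a message with its Python API; it assumes an already-configured AMQP broker connection, and the topic and body are invented for illustration:
```
# a minimal sketch, assuming a configured fedora-messaging broker connection
from fedora_messaging import api, message

msg = message.Message(
    topic="tutorial.topic",                      # hypothetical topic
    body={"greeting": "Happy Hacktoberfest!"},   # invented payload
)
api.publish(msg)  # sends the message to the configured broker
```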
In addition to the projects above, the Fedora Infrastructure team has highlighted [good Hacktoberfest issues][14] across all of their GitHub projects.
### Community projects
* [bodhi-rs][15] — This project provides Rust bindings for Bodhi.
* [koji-rs][16] — Koji is the system used to build Fedora packages. Koji-rs provides bindings for Rust applications.
  * [fedora-rs][17] — This project provides a Rust library for interacting with Fedora services, similar to the libraries other languages like Python already have.
* [feedback-pipeline][18] — One of the current Fedora Council objectives is [minimization][19]: work to reduce the installation and patching footprint of Fedora releases. feedback-pipeline is a tool developed by this team to generate reports of RPM sizes and dependencies.
### And many more
The projects above are only a small sample focused on software used to build Fedora. Many Fedora packages have upstreams hosted on GitHub—too many to list here. The best place to start is with a project that's important to you. Any contributions you make help improve the entire open source ecosystem. If you're looking for something in particular, the [Join Special Interest Group][20] can help. Happy hacking!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-projects-for-hacktoberfest/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/hacktoberfest-816x345.jpg
[2]: https://hacktoberfest.digitalocean.com/
[3]: https://hacktoberfest.digitalocean.com/register
[4]: https://fedoramagazine.org/how-to-contribute-to-fedora/
[5]: https://github.com/fedora-infra/bodhi
[6]: https://github.com/fedora-infra/the-new-hotness
[7]: https://release-monitoring.org/
[8]: https://github.com/release-monitoring/anitya
[9]: https://github.com/fedora-infra/koschei
[10]: https://github.com/fedora-infra/mirrormanager2
[11]: https://github.com/fedora-infra/fedora-messaging
[12]: https://github.com/fedora-infra/fedocal
[13]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/GH4N3HYJ4ARFRP666O6EQCHDIQMXVUJB/
[14]: https://github.com/orgs/fedora-infra/projects/4
[15]: https://github.com/ironthree/bodhi-rs
[16]: https://github.com/ironthree/koji-rs
[17]: https://github.com/ironthree/fedora-rs
[18]: https://github.com/minimization/feedback-pipeline
[19]: https://docs.fedoraproject.org/en-US/minimization/
[20]: https://fedoraproject.org/wiki/SIGs/Join

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (System76 will ship Coreboot-powered firmware, a new OS for the apocalypse, and more open source news)
[#]: via: (https://opensource.com/article/19/10/news-october-13)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
System76 will ship Coreboot-powered firmware, a new OS for the apocalypse, and more open source news
======
Catch up on the biggest open source headlines from the past two weeks.
![Weekly news roundup with TV][1]
In this edition of our open source news roundup, we cover System76 shipping Coreboot-powered firmware, a new OS for the apocalypse, and more open source news!
### System76 will ship 2 Linux laptops with Coreboot-powered open source firmware
The Denver-based Linux PC manufacturer announced plans to start shipping two laptop models with its Coreboot-powered open source firmware later this month. Jason Evangelho, Senior Contributor at _Forbes_, cited this move as a march towards offering open source software and hardware from the ground up. 
System76, which also develops [Pop OS][2], is now taking pre-orders for its Galago Pro and Darter Pro laptops. It claims that Coreboot will let users boot from power off to the desktop 29% faster.
Coreboot is a lightweight firmware designed to simplify the boot cycle of systems using it. It requires the minimum number of tasks needed to load and run a modern 32-bit or 64-bit operating system. Coreboot can offer a replacement for proprietary firmware, though it omits features like execution environments. Our own [Don Watkins][3] asked if Coreboot will ship on other System76 machines. Their response, [as reported by _Forbes_][4]:
> _"Yes. Long term, System76 is working to open source all aspects of the computer. Thelio Io, the controller board in the Thelio desktop, is both open hardware and open firmware. This is a long journey but we're picking up speed. It's been less than a year since the our open hardware Thelio desktop was released and we're now producing two laptops with System76 Open Firmware."_
### Collapse OS is an operating system for the post-apocalypse
Virgil Dupras, a software developer based in Quebec, is convinced the world's global supply chain will collapse before 2030. And he's worried that most [electronics will get caught in the crosshairs][5] due to "a very complex supply chain that we won't be able to achieve again for decades (ever?)." 
To prepare for the worst, Dupras built Collapse OS. It's [designed to run][6] on "minimal or improvised machines" and perform simple tasks that are helpful in a post-apocalyptic society. These include editing text files, collecting source files for MCUs and CPUs, and reading/writing from several storage devices.
Dupras says it's intended for worst-case scenarios, and that a "weak collapse" might not be enough to justify its use. If you'd rather err on the side of caution, the Collapse OS project is accepting new contributors [on GitHub][7].
Per the project website, Dupras says his goal is for Collapse OS to be as self-contained as possible with the ability for users to install the OS without Internet access or other resources. Ideally, the goal is for Collapse OS to not be used at all.
### ExpressionEngine will stay open source post-acquisition
The team behind the open source CMS ExpressionEngine was acquired by Packet Tide, EEHarbor's parent company, in early October. [This announcement][8] comes one year after Digital Locations acquired EllisLab, which develops the EE core.
[In an announcement][9] on ExpressionEngine's website, EllisLab founder Rick Ellis said Digital Locations wasn't a good fit for ExpressionEngine. Citing Digital Locations' goal of building an AI business, Ellis realized several months ago that ExpressionEngine needed a new home:
> _"We decided that what was best for ExpressionEngine was to seek a new owner, one that could devote all the resources necessary for ExpressionEngine to flourish. Our top candidate was Packet Tide due to their development capability, extensive catalog of add-ons, and deep roots in the ExpressionEngine community._
>
> _We are thrilled that they immediately expressed enthusiastic interest in becoming the caretakers of ExpressionEngine."_
Ellis says Packet Tide's first goal is to finish building ExpressionEngine 6.0, which will have a new control panel with a dark theme (who doesn't love dark mode?). ExpressionEngine adopted the Apache License Version 2.0 in November 2018, after 16 years as a proprietary tool.
The tool is still marketed as an open source CMS, and EE Harbor developer Tom Jaeger said [in the EE Slack][10] that their plan is to keep ExpressionEngine open source now. But he also left the door open to possible changes. 
### McAfee and IBM Security to lead the Open Source Cybersecurity Alliance
The two tech giants will contribute the initiative's first open source code and content, under guidance from the OASIS consortium. The Alliance aims to share best practices, tech stacks, and security solutions in an open source platform. 
Carol Geyer, chief development officer of OASIS, said the lack of a standard language makes it hard for businesses to share data between tools and products. Despite efforts to collaborate, the lack of a standardized format leads to integration work that is expensive and time-consuming.
In lieu of building connections and integrations, [the Alliance wants members][11] to "develop protocols and standards which enable tools to work together and share information across vendors." 
According to _Tech Republic_, IBM Security will contribute [STIX-Shifter][12], an open source library that offers a universal security system. Meanwhile, McAfee added its [OpenDXL Standard Ontology][13], a cybersecurity messaging format. Other members of the Alliance include CrowdStrike, CyberArk, and SafeBreach.
#### In other news
* [Paris uses open source to get closer to the citizen][14]
* [SD Times open source project of the week: ABAP SDK for IBM Watson][15]
* [Google's keeping Knative development under its thumb 'for the foreseeable future'][16]
* [Devs engage in soul-searching on future of open source][17]
* [Why leading Formula 1 teams back 'copycat' open source design idea][18]
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/news-october-13
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://system76.com/pop
[3]: https://opensource.com/users/don-watkins
[4]: https://www.forbes.com/sites/jasonevangelho/2019/10/10/system76-will-begin-shipping-2-linux-laptops-with-coreboot-based-open-source-firmware/#15a4da174e64
[5]: https://collapseos.org/why.html
[6]: https://www.digitaltrends.com/cool-tech/collapse-os-after-societys-collapse/
[7]: https://github.com/hsoft/collapseos
[8]: https://wptavern.com/expressionengine-under-new-ownership-will-remain-open-source-for-now
[9]: https://expressionengine.com/blog/expressionengine-has-a-new-owner
[10]: https://eecms.slack.com/?redir=%2Farchives%2FC04CUNNR9%2Fp1570576465005500
[11]: https://www.techrepublic.com/article/mcafee-ibm-join-forces-for-global-open-source-cybersecurity-initiative/
[12]: https://github.com/opencybersecurityalliance/stix-shifter
[13]: https://www.opendxl.com/
[14]: https://www.smartcitiesworld.net/special-reports/special-reports/paris-uses-open-source-to-get-closer-to-the-citizen
[15]: https://sdtimes.com/os/sd-times-open-source-project-of-the-week-abap-sdk-for-ibm-watson/
[16]: https://www.datacenterknowledge.com/google-alphabet/googles-keeping-knative-development-under-its-thumb-foreseeable-future
[17]: https://www.linuxinsider.com/story/86282.html
[18]: https://www.autosport.com/f1/news/146407/why-leading-f1-teams-back-copycat-design-proposal

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (runningwater)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -126,7 +126,7 @@ via: https://opensourceforu.com/2019/09/the-protocols-that-help-things-to-commun
作者:[Sapna Panchal][a]
选题:[lujun9972][b]
译者:[runningwater](https://github.com/runningwater)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)
How DevOps professionals can become security champions
======
Breaking down silos and becoming a champion for security will help you,
your career, and your organization.
![A lock on the side of a building][1]
Security is a misunderstood element in DevOps. Some see it as outside of DevOps' purview, while others find it important (and overlooked) enough to recommend moving to [DevSecOps][2]. No matter your perspective on where it belongs, it's clear that security affects everyone.
Each year, the [statistics on hacking][3] become more alarming. For example, there's a hacker attack every 39 seconds, which can lead to stolen records, identities, and proprietary projects you're writing for your company. It can take months (and possibly forever) for your security team to discover the who, what, where, or when behind a hack.
What are operations professionals to do about these dire problems? I say it is time for us to become part of the solution by becoming security champions.
### Silos and turf wars
Over my years of working side-by-side with my local IT security (ITSEC) teams, I've noticed a great many things. A big one is that tension is very common between DevOps and security. This tension almost always stems from the security team's efforts to protect against vulnerabilities (e.g., by setting rules or disabling things) that interrupt DevOps' work and hinder their ability to deploy apps quickly.
You've seen it, I've seen it, everyone you meet in the field has at least one story about it. A small set of grudges turns into a burned bridge that takes time to repair—or the groups begin a small turf war, and the resulting silos make achieving DevOps unlikely.
### Get a new perspective
To try to break down these silos and end the turf wars, I talk to at least one person on each security team to learn about the ins and outs of daily security operations in our organization. I started doing this out of general curiosity, but I've continued because it always gives me a valuable new perspective. For example, I've learned that for every deployment that's stopped due to failed security, the ITSEC team is feverishly trying to patch 10 other problems it sees. Their brashness and quickness to react are due to the limited time they have to fix something before it becomes a large problem.
Consider the immense amount of knowledge it takes to find, analyze, and undo what has been done. Or to figure out what the DevOps team is doing—without background information—then replicate and test it. And to do all of this with their usual, greatly understaffed security team.
This is the daily life of your security team, and your DevOps team is not seeing it. ITSEC's daily work can mean overtime hours and overwork to make sure that the company, its teams, and the proprietary work its teams are producing are secure.
### Ways to be a security champion
This is where being your own security champion can help. This means—for everything you work on—you must take a good, hard look at all the ways someone could log into it and what could be taken from it.
Help your security team help you. Introduce tools into your pipelines to integrate what you know will work with what they know will work. Start with small things, such as reading up on Common Vulnerabilities and Exposures (CVEs) and adding scanning functions to your [CI/CD][4] pipelines. For everything you build, there is an open source scanning tool, and adding small open source tools (such as the ones below) can go the extra mile in the long run (see the pipeline sketch after the tool lists).
**Container scanning tools:**
* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]
**Code scanning tools:**
* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]
**Kubernetes security tools:**
* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
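As a sketch of what "adding scanning functions" to a pipeline can look like, the snippet below runs an Anchore Engine scan of a freshly built image from a CI job. The image name and the Anchore endpoint are placeholders, and the `anchore-cli` invocations should be checked against the version you deploy:
```
# a minimal CI-job sketch, assuming a running Anchore Engine service
export ANCHORE_CLI_URL="http://anchore.example.com:8228/v1"   # placeholder endpoint
export ANCHORE_CLI_USER="admin"
export ANCHORE_CLI_PASS="${ANCHORE_PASSWORD}"                 # injected by the CI system

anchore-cli image add registry.example.com/myapp:latest       # queue the image for analysis
anchore-cli image wait registry.example.com/myapp:latest      # block until analysis finishes
anchore-cli image vuln registry.example.com/myapp:latest all  # list known vulnerabilities
anchore-cli evaluate check registry.example.com/myapp:latest  # fail the job on policy violations
```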
### Keep your DevOps hat on
Learning about new technology and how to create new things with it is part of the job if you're in a DevOps-related role. Security is no different. Here's my list of ways to keep up to date on the security front while keeping your DevOps hat on.
* Read one article each week about something related to security in whatever you're working on.
* Look at the [CVE][15] website weekly to see what's new.
* Try doing a hackathon. Some companies do this once a month; check out the [Beginner Hack 1.0][16] site if yours doesn't and you'd like to learn more.
* Try to attend at least one security conference a year with a member of your security team to see things from their side.
### Be a champion for good
There are several reasons you should become your own security champion. The first and foremost is to further your knowledge and advance your career. The second reason is to help other teams, foster new relationships, and break down the silos that harm your organization. Creating friendships across your organization has multiple benefits, including setting a good example of bridging teams and encouraging people to work together. You will also foster sharing knowledge throughout the organization and provide everyone with a new lease on security and greater internal cooperation.
Overall, being a security champion will lead you to be a champion for good across your organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/devops-security-champions
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/

View File

@ -1,153 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 10 open source video players for Linux)
[#]: via: (https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-linux/)
[#]: author: (Stella Aldridge https://opensourceforu.com/author/stella-aldridge/)
Top 10 open source video players for Linux
======
[![][1]][2]
_Choosing the right video player can help ensure that you get the optimum viewing experience, and give you the tools to [create a video website][3]. It can even allow you to customise the videos you are watching to your personal preferences._
So to help you pick the best player for your needs, we've curated a list of the top open source video players for Linux.
Let's take a look:
**1\. XBMC Kodi Media Center**
This is a flexible cross-platform player, written in C++ at its core, with Python scripts available as add-ons. The benefits of using Kodi include:
* Available in over 69 languages
* Users can play audio and video files and media player files from the network and local storage
  * Can be used as an application suite alongside JeOS in devices such as smart TVs and set-top boxes
  * There are loads of impressive add-ons, such as video and audio streaming plugins, themes, screensavers and more
  * It supports multiple formats such as MPEG-1, MPEG-2, MPEG-4, RealVideo, HVC, HEVC and so on
**2\. VLC Media Player**
This player was a no brainer for this list due to its impressive power and availability on a range of operating systems. It is written in C, C++ and Objective C and users can eliminate the need for figuring out plugins thanks to its extensive support of decoding libraries. The benefits of VLC Media Player include:
* Provides support for DVD players on Linux
* Ability to play .iso files
* Enables high definition recordings of D-VHS tapes to be played
* Can be run from a USB flash drive or external drive directly
* API support and browser support (via plugin)
**3\. Bomi (CMPlayer)**
This flexible and highly powered player ticks most of the boxes of a general user, and its positives are:
* Simple to use graphical user interface (GUI)
* Impressive playback ability
* Option to resume playback
* Supports subtitles and can render multiple subtitle files
[![][4]][5]
**4\. Miro Music and Video Player**
Previously called Democracy Player (DTV), Miro was redeveloped by the Participatory Culture Foundation and is a good cross-platform player for both video and audio. It's impressive because:
* Supportive of some HD Audio and Video
* Available in over 40 languages
  * Can play numerous file formats, e.g., QuickTime, WMV, MPEG files, Audio Video Interleave (AVI), XVID
* Can notify the user and download a video automatically once available
**5\. SMPlayer**
This cross-platform media player, written in C++ using only the Qt library, is a powerful, multi-functional player. We like it because:
* It has multi-language options
* Supportive of all default formats
* Supportive of EDL files, and you can configure subtitles fetched from the Internet
* A variety of Skins that can be downloaded from the Internet
* Multiple speed playback
**6\. MPV Player**
Written in C, Objective-C, Lua, and Python, it's free, easy to use and has lots of new features which make it enjoyable to use. The main plus points are:
  * Can be compiled as a library, which exposes client APIs and allows increased control
* Functionality that allows Media Encoding
* Smooth-motion
**7\. Deepin Movie**
This player is an excellent example of an open source media player which has lots of positives, including:
* The ability to complete all play operations by keyboard
* Video files in various formats can be played through this player with ease
* The streaming function allows users to enjoy many online video resources
**8\. Gnome Videos**
Previously called Totem, for those with Gnome desktop environments, this is the player of choice.
Written purely in C, it was built using the GStreamer multimedia framework for playback, and the other version (> 2.7.1) was then configured using the xine libraries as a backend. It's great because:
It has an impressive ability to support numerous formats including:
* SHOUTcast, SMIL, M3U, Windows Media Player format and more
* You can adjust light settings such as brightness and contrast during playback
* Loads SubRip subtitles
  * Support for direct video playback from Internet channels such as Apple
**9\. Xine Multimedia Player**
An additional cross-platform multimedia player in our list, written in C. It's a good all-round player because:
  * It supports physical media as well as video devices: think 3GP, Matroska, MOV, MP4 and audio formats,
  * network protocols, V4L, DVB and PVR, to name but a few
* It can correct the synchronization of audio and video streams manually
**10\. ExMPlayer**
Last but not least, ExMPlayer is a stunning, powerfully built GUI front-end for MPlayer. Its benefits include:
* Can play any media format
* Supports network streaming and subtitles
  * Easy-to-use audio converter
* High-quality audio extraction without compromising on sound quality
The above-mentioned video players work well on Linux. We recommend you try them out and choose the one most suitable for you.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-linux/
作者:[Stella Aldridge][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/stella-aldridge/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_50337841_l-2015.jpg?resize=696%2C585&ssl=1 (Depositphotos_50337841_l-2015)
[2]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_50337841_l-2015.jpg?fit=900%2C756&ssl=1
[3]: https://www.ning.com/create-video-website/
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_20380441_l-2015.jpg?resize=350%2C231&ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Depositphotos_20380441_l-2015.jpg?ssl=1

View File

@ -0,0 +1,55 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How the oil and gas industry exploits IoT)
[#]: via: (https://www.networkworld.com/article/3445204/how-the-oil-and-gas-industry-exploits-iot.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
How the oil and gas industry exploits IoT
======
The energy industry has embraced IoT technology in its operations, from monitoring well production to predicting when its gear will need maintenance.
Like many traditional industries that have long-standing, tried-and-true methods of operation, the oil-and-gas sector hasn't been the quickest to embrace [IoT][1] technology. Despite having had instrumentation on drilling rigs, pipelines and refining facilities for decades, the extraction industry has only recently begun to work with modern IoT.
Part of the issue has been interoperability, according to Mark Carrier, oil-and-gas development director for RTI, which produces connectivity software for industrial companies. Energy companies are most comfortable working with the same vendors they've worked with before, but that tendency means there isn't a strong impetus toward sharing data across platforms.
“On a very low level, things are pretty well-connected; at the connectivity to the back-end they're well-connected, but there's a huge expense in understanding what that data is,” he said.
Christine Boles, a vice president in Intel's IoT group, said that the older systems still being used by the industry have been tough to displace.
“The biggest challenge they're facing is aging infrastructure, and how they get to a more standardized, interoperable version,” she said.
Changes are coming, however, in part because energy prices have taken a hit in recent years. Oil companies have been looking to cut costs, and one of the easiest places to do that is in integration and automation. On a typical oil well, said Carrier, a driller will have products from up to 70 different companies working: sensors covering everything from flow rates to temperature, pressure, azimuth and incline, plus different components of the drill itself. Until fairly recently, these all had to be independently monitored.
An IoT solution that can tie all these various threads of data together has, of late, become an attractive option for companies looking to minimize human error and glean real-time insights from the wide range of instrumentation present on the average oil rig.
Those threads are numerous, with a lot of vertically unique sensor and endpoint types. Mud pulse telemetry uses a module in a drill head to create slight fluctuations in the pressure of drilling fluid to pulse information to a receiver on the surface. Temperature and pressure sensors operating in the extreme environmental conditions of an active borehole might use heavily ruggedized serial cable to push data back aboveground.
Andre Kindness, a principal analyst at Forrester Research, said that the wide range of technologies, manufacturers and standards in use at any given oil-and-gas facility is the product of cutthroat competition.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3445204/how-the-oil-and-gas-industry-exploits-iot.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html

View File

@ -0,0 +1,170 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Role of Open Source Tools and Concepts in IoT Security)
[#]: via: (https://opensourceforu.com/2019/10/the-role-of-open-source-tools-and-concepts-in-iot-security/)
[#]: author: (Shashidhar Soppin https://opensourceforu.com/author/shashidhar-soppin/)
The Role of Open Source Tools and Concepts in IoT Security
======
[![][1]][2]
_With IoT devices permeating the commercial and personal space, their security becomes important. Hackers and malicious agents are always trying to exploit vulnerabilities in order to control these devices. The advantages of using open source rather than proprietary software need no elaboration. Here is a selection of open source tools popular in the IoT field._
Security is one of the key factors to consider while choosing any IoT platform. The IoT market is very large and is growing constantly. Scary hacks and security breaches are happening to IoT-related devices on a regular basis. These range from DoS (Denial of Service) attacks to attacks that completely wipe out the firmware on a device. Early detection and prevention of these attacks is a major concern for any enterprise or organisation. Many companies are adopting open source security tools and solutions and laying out predefined best practices for IoT security.
![Figure 1: The Mirai botnet][3]
**Recent security attacks**
As explained earlier, many attacks and threats targeting IoT systems have happened during the last few years. Let's look at some of the major ones.
_**The Silex malware attack**_: In 2019, a 14-year-old hacker bricked around 4,000 IoT devices with a new strain of malware called Silex, before its command and control servers were abruptly shut down. Larry Cashdollar, a senior security intelligence response engineer at Akamai, first discovered this malware on his honeypot. Like the BrickerBot malware in 2017, Silex too targeted insecure IoT devices and made them unusable.
Silex trashes an IoT device's storage, dropping firewall rules, removing the network configuration and then halting the device completely. It is as destructive as it can get without actually frying the IoT device's circuits. To recover, victims must manually reinstall the device's firmware, a task too complicated for most device owners. (_<https://www.bleepingcomputer.com/news/security/new-silex-malware-trashes-iot-devices-using-default-passwords/>_)
_**The BrickerBot attack**_: The BrickerBot malware attack took place in 2017, and its author claimed that 60,000 modems and routers across India lost Internet connectivity. The incident affected modems and routers belonging to two Indian state-owned telecommunications service providers, Bharat Sanchar Nigam Limited (BSNL) and Mahanagar Telephone Nigam Limited (MTNL). The attack was so intensive that from July 25 up to July 29, users reported losing Internet connectivity as routers and modems became stuck with their red LED remaining always on. The main purpose of this bot is to brick devices so they will not be usable once attacked. (_<https://www.bleepingcomputer.com/news/security/brickerbot-author-retires-claiming-to-have-bricked-over-10-million-iot-devices/>_)
![Figure 2: Princeton IoT Icon][4]
**Note:** _Bricking is the process of changing the code of the device so that the hardware can no longer be used, thereby turning the device essentially into a brick (totally unusable)._
**The Mirai botnet attack:** The Mirai botnet attack took place in 2016. This was a major malware/virus attack. Mirai is a self-propagating botnet virus. The source code for it was made publicly available by the author after a successful and well-publicised attack on the Krebs on Security website. Since then, the source code has been built and used by many others to launch attacks on Internet infrastructure, causing major damage. The Mirai botnet code infects poorly protected Internet devices by using telnet to find those that are still using their factory default usernames and passwords. The effectiveness of Mirai is due to its ability to infect tens of thousands of these insecure devices and co-ordinate them to mount a DDoS attack against a chosen victim. (_<https://www.corero.com/resources/ddos-attack-types/mirai-botnet-ddos-attack>_)
**Detection and prevention of security attacks**
A major cause of concern with IoT devices is that vendors use the stock Linux OS instead of developing a custom OS, which is time consuming and expensive. Most attackers/hackers know this and so target these devices.
Some time back, Symantec came up with a solution for this by offering a router called Norton Core (_<https://us.norton.com/core>_). This was not a success, as it was expensive and had a monthly maintenance cost. In addition, people felt that it was too early to use such a router that came with a monthly subscription, since most homes still do not have enough IoT-enabled devices to make such an investment worthwhile.
**Open source security tools**
Subsequently, many state-of-the-art security tools with multiple security features have been launched. Some of the most used and popular open source security tools are featured below.
**Princeton IoT-Inspector**
This is an open source desktop tool with a one-click, easy-to-install process. It has many built-in security validation features:
* Automatically discovers IoT devices and analyses their network traffic.
* Helps one to identify security and privacy issues with graphs and tables.
* Requires minimal technical skills and no special hardware.
This tool can be configured on Linux/Mac (Windows support is still under discussion).
**What data does IoT Inspector collect?** For each IoT device in the network, IoT Inspector collects the following information and sends it to identified secure servers at Princeton University:
* Device manufacturers, based on the first six characters of the MAC address of each device on the network
* DNS requests and responses
* Destination IP addresses and ports contacted — but not the public-facing IP address (i.e., the one that your ISP assigns to you)
  * Scrambled MAC addresses (i.e., those with a salted hash; see the sketch after this list)
* Aggregate traffic statistics, i.e., the number of bytes sent and received over a period of time
* The names of devices on the identified network
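To make the MAC-related items concrete, here is a minimal Python sketch of OUI extraction and salted-hash scrambling; the salt and address are invented, and this illustrates the technique rather than IoT Inspector's actual code:
```
# a minimal sketch of OUI extraction and MAC scrambling; not IoT Inspector's actual code
import hashlib

def oui_prefix(mac: str) -> str:
    """The first six hex characters (three octets) identify the manufacturer (OUI)."""
    return mac.lower().replace(":", "")[:6]

def scramble_mac(mac: str, salt: str) -> str:
    """Salted hash so the raw MAC address itself is never uploaded."""
    return hashlib.sha256((salt + mac.lower()).encode()).hexdigest()

mac = "a4:77:33:01:02:03"                  # invented address
print(oui_prefix(mac))                     # 'a47733' -> look up in an OUI vendor table
print(scramble_mac(mac, "per-user-salt"))  # only this digest leaves the network
```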
Collecting the above types of data involves some risks, such as:
* Performance degradation
* Data breaches
* Best-effort support
![Figure 3: OWASP IoT][5]
_**How the security validation is done:**_ Princeton releases its findings in a journal/conference publication. When consumers are unsure about whether to buy a new IoT device or not, they can read the relevant papers before making a decision, checking if the device of interest features in the Princeton data. Otherwise, the consumer can always buy the product, analyse it with IoT Inspector, and return it if the results are unsatisfactory.
**Open Web Application Security Project (OWASP) set of standards**
OWASP is an open community dedicated to enabling organisations to conceive, develop, acquire, operate and maintain applications that can be trusted.
_**Testing an IoT device for poor authentication/authorisation (OWASP I2):**_ When we think of weak authentication, we might think of passwords that are not changed on a regular basis, six-to-eight character passwords that are nonetheless easy to guess, or of systems without multi-factor authentication mechanisms. Unfortunately, with many smart devices, weak authentication causes major havoc.
Many of the IoT devices are secured with default passwords like 1234, password, or ABCD. Users put their password checks in client-side Java code, send credentials without using HTTPS or other encrypted transport protocols, or require no passwords at all. This kind of mismanagement of passwords causes a lot of damage to devices.
The OWASP I1 to I10 standards address different aspects of IoT security, as listed in Figure 3 and below:
* I1 Insecure Web interface
* I2 Insufficient authentication/authorisation
* I3 Insecure network services
* I4 Lack of transport encryption
* I5 Privacy concerns
* I6 Insecure cloud interface
* I7 Insecure mobile interface
* I8 Insufficient security configurability
* I9 Insecure software/firmware
* I10 Poor physical security
**Mainflux platform: For authentication and authorisation**
Mainflux is an open source IoT platform providing features like edge computing and consulting services. Mainflux Labs is a technology company offering an end-to-end, open source patent-free IoT platform, an LF EdgeX Foundry compliant IoT edge gateway with an interoperable ecosystem of plug-and-play components, and consulting services for the software and hardware layers of the IoT technology. It provides enhanced and fine-grained security via the deployment-ready Mainflux authentication and authorisation server, with an access control scheme based on customisable API keys and scoped JWT. It also offers mutual TLS (mTLS) authentication using X.509 certificates, NGINX reverse proxy for security, load balancing and termination of TLS and DTLS connections, etc. Many of these features can be explored and used according to the need of the hour.
**Best practices for building a secure IoT framework**
To prevent/avoid attacks on any IoT device, environment or ecosystem, the following best practices need to be applied:
* Always use strong passwords for device accounts and Wi-Fi networks.
* It is a best practice to always change default passwords.
  * Use stronger, more recent encryption methods such as WPA2 when setting up Wi-Fi networks.
* Develop the habit of disabling or protecting the remote access to IoT devices when not needed.
* Use wired connections instead of wireless, where possible.
  * Be careful when buying used IoT devices, as they could have been tampered with. It is better to consult a genuine authority to confirm the device's validity, or to buy from a certified authority.
  * Research the vendor's device security measures as well as the features that they support.
* Modify the privacy and security settings of the device to meet your needs immediately after buying the device.
* It is better to disable features that are not used frequently.
* Install updates regularly, when they become available. It is a best practice to use the latest firmware updates.
* Ensure that an outage due to jamming or a network failure does not result in an insecure state of the installation.
* Verify if the smart features are required or if a normal device suffices for the purpose.
**Best practices for designers of IoT frameworks and device manufacturers**
  * Always use SSL/TLS-encrypted connections for communication purposes (see the sketch after this list).
* Check the SSL certificate and the certificate revocation list.
* Allow and encourage the use of strong passwords and change default passwords immediately.
* Provide a simple and secure update process with a chain of trust.
* Provide a standalone option that works without Internet and cloud connections.
  * Prevent brute-force attacks at the login stage through account lockout measures or multi-factor authentication mechanisms.
* Implement a smart fail-safe mechanism when the connection or power is lost or jammed.
* Remove unused tools and allow only trusted applications and software.
* Where applicable, security analytics features should be provided in the device management strategy.
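To make the first practice concrete, here is a minimal sketch of a device-side, mutually authenticated TLS (mTLS) connection using the paho-mqtt Python client; the broker host, port, credentials and certificate paths are all placeholders:
```
# a minimal sketch of a TLS-secured MQTT connection; assumes the paho-mqtt 1.x client API
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-01")
client.tls_set(
    ca_certs="ca.crt",        # CA that signed the broker certificate
    certfile="device.crt",    # device certificate for mutual TLS (mTLS)
    keyfile="device.key",
)
client.username_pw_set("sensor-01", "a-strong-unique-password")  # never a vendor default
client.connect("broker.example.com", 8883)  # 8883 is the conventional MQTT-over-TLS port
client.publish("site/line1/temperature", payload="21.7", qos=1)
client.disconnect()
```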
IoT developers and designers should include security at the start of the device development process, irrespective of whether the device is for the consumer market, the enterprise or industry. Incorporating security at the design phase always helps. Enabling security by default is very critical, as is providing the most recent operating systems and using secure hardware with the latest firmware versions.
**Enabling PKI and digital certificates**
Public key infrastructure (PKI) and X.509 digital certificates play important and critical roles in the development of secure IoT devices. It is always a best practice to provide the trust and control needed to distribute and identify public encryption keys, secure data exchanges over networks and verify identities.
**API (application programming interface) security**
For any IoT environment, API security is essential to protect the integrity of data. As this data is being sent from IoT devices to back-end systems, we always have to make sure only authorised devices, developers and apps communicate with these APIs.
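As an illustration of such a check, the sketch below guards a telemetry endpoint with per-device API keys; Flask and the in-memory key set are illustrative assumptions, not a specific platform's API:
```
# a minimal sketch of an API-key check on an IoT ingest endpoint (illustrative only)
import hmac
from flask import Flask, request, abort

app = Flask(__name__)
API_KEYS = {"2f9c0e7b"}  # hypothetical provisioned device keys; use a real secrets store

@app.post("/telemetry")
def telemetry():
    supplied = request.headers.get("X-API-Key", "")
    # constant-time comparison against each provisioned key
    if not any(hmac.compare_digest(supplied, key) for key in API_KEYS):
        abort(401)  # only authorised devices and apps may reach the back end
    return {"status": "accepted"}
```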
**Patch management/continuous software updates**
This is one crucial aspect of IoT security management. Providing the means of updating devices and software either over network connections or through automation is critical. Coordinated disclosure of vulnerabilities is also important so that devices can be updated as soon as possible. Consider end-of-life strategies as well.
Always remember that hard coded credentials should never be used nor be part of the design process. If there are any default credentials, users should immediately update them using strong passwords as described earlier, or follow multi-factor or biometric authentication mechanisms.
**Hardware security**
It is absolutely essential to make devices tamper-proof or tamper-evident, and this can be achieved by endpoint hardening.
Strong encryption is critical to securing communication between devices. It is always a best practice to encrypt data at rest and in transit using cryptographic algorithms.
IoT and operating system security are new to many security teams. It is critical to keep security staff up to date with new or unknown systems, enabling them to learn new architectures and programming languages to be ready for new security challenges. C-level and cyber security teams should receive regular training to keep up with modern threats and security measures.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/the-role-of-open-source-tools-and-concepts-in-iot-security/
作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/shashidhar-soppin/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Security-of-IoT-devices.jpg?resize=696%2C550&ssl=1 (Security of IoT devices)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Security-of-IoT-devices.jpg?fit=900%2C711&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-The-Mirai-botnet.jpg?resize=350%2C188&ssl=1
[4]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-2-Princeton-IoT-Icon.jpg?resize=350%2C329&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-3-OWASP-IoT.jpg?resize=350%2C147&ssl=1

View File

@ -0,0 +1,148 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Primer on Open Source IoT Middleware for the Integration of Enterprise Applications)
[#]: via: (https://opensourceforu.com/2019/10/a-primer-on-open-source-iot-middleware-for-the-integration-of-enterprise-applications/)
[#]: author: (Gopala Krishna Behara https://opensourceforu.com/author/gopalakrishna-behara/)
A Primer on Open Source IoT Middleware for the Integration of Enterprise Applications
======
[![][1]][2]
_The Internet of Things (IoT) integrates a virtual world of information with the real world of devices through a layered architecture. IoT middleware is an interface between the physical world (hardware layer) of devices and the virtual world (application layer), which is responsible for interacting with devices and information management systems. This article discusses IoT middleware, the characteristics of open source IoT middleware, IoT middleware platform architecture and key open source IoT middleware platforms._
With billions of devices generating trillions of bytes of data, there is a need for heterogeneous IoT device management and application enablement. This requires a revamp of existing architectures. There is a need to identify industry-agnostic application middleware to address the complexity of IoT solutions, future changes, and the integration of IoT with mobile devices, various types of machinery, equipment and tablets, among other devices.
According to Statista, the total installed base of IoT connected devices is projected to be 75.44 billion worldwide by 2025.
Most IoT applications are heterogeneous and domain specific. Deciding on the appropriate IoT middleware for app development is the major challenge faced by developers today. The functionalities provided by different middleware vendors are broadly similar, differing mainly in their underlying technologies. Middleware services provided by different IoT vendors include data acquisition, device management, data storage, security and analytics. Selecting the right middleware platform is one of the critical steps in application development.
The important parameters for choosing the right middleware for an IoT application are scalability, availability, the ability to handle huge amounts of data, a high processing speed, flexibility, integration with varied analytical tools, security and cost.
**Industry adoption of open source middleware in IoT**
The market for IoT middleware was valued at US$ 6.44 billion in 2018 and is expected to reach a value of US$ 18.68 billion by 2024 at a CAGR of 19.72 per cent, over the forecast period 2019-2024 (_<https://www.mordorintelligence.com/industry-reports/iot-middle-ware-market>_).
According to an Ericsson forecast (_<https://www.ericsson.com/en/mobility-report/internet-of-things-forecast>_), there will be around 29 billion connected devices in use by 2022, of which around 18 billion will be related to IoT. Gartner forecasts that 14.2 billion connected things will be in use in 2019, and that the total will reach 25 billion by 2021, producing immense volume of data.
**IoT middleware and its features**
Middleware acts as an agent between the service providers (IoT devices) and service consumers (enterprise applications). It is a software layer that sits in between applications and objects. It is a mediator interface that enables the interaction between the Internet and things. It hides the heterogeneity among the devices, components and technology of an IoT system. Middleware provides solutions to frequently encountered problems, such as interoperability, security and dependability. The following are the important features of middleware, which improve the performance of devices.
**Flexibility:** This feature helps in establishing better connectivity, which improves the communication between applications and things. There are different kinds of flexibility (e.g., in response time, or in how fast the system can evolve and change).
**Transparency:** Middleware hides many complexities and architectural information details from both the application and the object sides, so that the two can communicate with minimum knowledge of either side.
**Interoperability:** This functionality allows two sets of applications on interconnected networks to exchange data and services meaningfully with different assumptions on protocols, data models, and configurations.
**Platform portability:** An IoT platform should be able to communicate from everywhere, anytime with any device. Middleware runs on the user side and can provide independence from network protocols, programming languages, OSs and others.
**Re-usability:** This feature makes designing and developing easier by modifying system components and assets for specific requirements, which results in cost efficiency.
**Maintainability:** Maintainability goes hand in hand with fault tolerance. Middleware that is maintained efficiently keeps the network dependable and allows it to be extended.
**Security:** Middleware should provide different security measures for ubiquitous applications and pervasive environments. Authentication, authorisation and access control helps in verification and accountability.
**Characteristics of open source IoT middleware**
An open source IoT middleware platform should be fault-tolerant and highly available. It has the following characteristics:
* No vendor lock-in, and it comes with the surety of seamless integration of enterprise-wide tools, applications, products and systems developed and deployed by different organisations and vendors.
* Open source middleware increases the productivity, speeds up time to market, reduces risk and increases quality.
* Adoption of open source middleware enhances the interoperability with other enterprise applications because of the ability to reuse recommended software stacks, libraries and components.
* IoT middleware platforms should support open APIs, deployment models of the cloud, and be highly available.
  * It should support open data formats and interfaces like REST APIs, JSON, XML and Java, and be freely available.
* An IoT middleware platform should support multi-service and heterogeneous devices, and be compatible with the hardware for sensing environmental information.
* Migration to any new platform or system should be seamless. It should be possible to adopt or integrate with any solution.
* The information data model should be distributed and extensible, providing availability and scalability to the system.
* An IoT middleware platform should support major communication protocols like MQTT, CoAP, HTTP, WebSockets, etc.
* An IoT middleware platform should support different security features like encryption, authentication, authorisation and auditing.
* It should support technologies such as M2M applications, real-time analytics, machine learning, artificial intelligence, analytics, visualisation and event reporting.
**IoT middleware architecture**
The middleware mediates between IoT data producers and the consumers. APIs for interactions with the middleware are based on standard application protocols.
API endpoints for accessing the data and services should be searchable via an open catalogue, and should contain linked metadata about the resources.
The device manager communicates messages to the devices. The database needs to access and deliver messages to the devices with minimum latency.
Data processing involves data translation, aggregation and data filtering on the incoming data, which enables real-time decision making at the edge. The database needs to support high-speed reads and writes with sub-millisecond latency. It helps in performing complex analytical computations on the data.
The IoT data stream normalises the data to a common format and sends it to enterprise systems. The database needs to perform the data transformation operations efficiently.
Middleware supports the authentication of users, organisations, applications and devices. It supports functionalities like certificates, password credentials, API keys, tokens, etc. It should also support single sign-on, time-based credentials, application authentication (via signatures) and device authentication (via certificates).
Logging is necessary for both system debugging and auditing. Middleware manages the logging of debugging and auditing details; it helps track the status of the various services and APIs, and administers them.
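To make this concrete, here is a minimal, hypothetical ingestion loop in Python using the paho-mqtt client. It sketches three of the responsibilities described above: authenticating a device (here, by a shared key), normalising the payload to a common data model, and logging the result. The broker address, topic layout and key scheme are illustrative assumptions, not the API of any particular platform.

```
import json
import logging

import paho.mqtt.client as mqtt  # pip install paho-mqtt

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("middleware")

# Hypothetical shared keys; a real platform would use certificates or tokens.
KNOWN_DEVICE_KEYS = {"sensor-42": "s3cr3t-key"}

def normalise(raw: dict) -> dict:
    """Translate a device-specific payload into a common data model."""
    return {
        "device_id": raw.get("id"),
        "metric": raw.get("type", "unknown"),
        "value": float(raw.get("val", 0.0)),
        "ts": raw.get("ts"),
    }

def on_message(client, userdata, msg):
    raw = json.loads(msg.payload)
    # Authentication and access control, as described above.
    if KNOWN_DEVICE_KEYS.get(raw.get("id")) != raw.get("key"):
        log.warning("rejected message from unauthenticated device %s", raw.get("id"))
        return
    event = normalise(raw)                  # data translation to a common format
    log.info("accepted event: %s", event)   # logging for debugging and auditing
    # ... hand the event to the data-processing and storage layers here ...

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)  # assumed broker address
client.subscribe("devices/+/telemetry")     # assumed topic layout
client.loop_forever()
```

A production platform would replace the in-memory key table with certificate- or token-based authentication and push the normalised events into the stream-processing and storage components discussed above.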
**Key open source IoT middleware platforms**
Holistically, an IoT implementation covers data collection and insertion through sensors as well as giving control back to devices. The different types of IoT middleware are categorised as:
* Application-centric (application and data management)
* Platform-centric (application enablement, device management and connectivity management)
* Industry-specific (manufacturing, healthcare, energy and utilities, transportation and logistics, agriculture, etc)
![Figure 1: IoT middleware architecture][3]
Selecting the right middleware during the various stages of an IoT implementation depends on multiple factors, like the size of the enterprise, the nature of the business, the development and operational perspectives, etc. The following are some of the top open source middleware platforms for IoT based applications.
**Kaa** is platform-centric middleware. It manages an unlimited number of connected devices with cross-device interoperability. It performs real-time device monitoring, remote device provisioning and configuration, and collection and analysis of sensor data. It is a microservices-based, portable, horizontally scalable and highly available IoT platform. It supports on-premise, public cloud and hybrid deployment models. Kaa is built on open components like Kafka, Cassandra, MongoDB, Redis, NATS, Spring, React, etc.
**SiteWhere** is platform-centric middleware. It provides ingestion, storage, processing and integration of device data. It supports multi-tenancy, MQTT, AMQP, Stomp, CoAP and WebSocket. It seamlessly integrates with Android, iOS, and multiple SDKs. It is built on open source technology stacks like MongoDB, Eclipse Californium, InfluxDB, HBase and many others.
**IoTSyS** is platform-centric and industry-specific middleware. It uses IPv6 for non-IP IoT devices and systems. It is used in smart city and smart grid projects to make the automation infrastructure smart. IoTSyS provides interoperable Web technologies for sensors and actuator networks.
**DeviceHive** is cloud agnostic, microservices based, platform-centric middleware used for device connectivity and management. It has the ability to connect to any device via MQTT, REST or WebSockets. It supports Big Data solutions like ElasticSearch, Apache Spark, Cassandra and Kafka for real-time and batch processing.
**EclipseIoT (Kura)** provides device connectivity, data transformation and business logic with intelligent routing, edge processing/analytics and real-time decisions.
**Zetta** is application-centric middleware. It is a service-oriented open source IoT platform built on Node.js combining REST API, WebSockets and reactive programming. It can run on cloud platforms like Heroku to create geo-distributed networks and stream data into machine analytics systems like Splunk.
**MainFlux** is application-centric middleware providing solutions based on the cloud. It has been developed as microservices, containerised by Docker and orchestrated with Kubernetes. The architecture is flexible and allows seamless integration with enterprise systems like ERP, BI and CRM. It can also integrate with databases, analytics, backend systems and cloud services easily. In addition, it supports REST, MQTT, WebSocket and CoAP.
**Distributed Services Architecture (DSA)** facilitates decentralised device inter-communication, allowing protocol translation and data integration to and from third party data sources.
**OpenRemote** is application-centric middleware used to connect any device regardless of vendor or protocol, to create meaningful connections by converting data points into smart applications. It finds use in home automation, commercial buildings, public space and healthcare. Data visualisation is integrated with devices and sensors, and turns data into smart applications.
**OpenIoT** is application-centric open source middleware for pulling information from sensors. It incorporates Sensing-as-a-Service for deploying and providing services in cloud environments.
**ThingsBoard** is an open source IoT platform for data collection, processing, data visualisation and device management. It supports IoT protocols like MQTT, CoAP and HTTP, with on-premise and cloud deployment. It is horizontally scalable and stores data in Cassandra, HSQLDB or PostgreSQL.
**NATS.io** is a simple, secure and high-performance open source messaging system for cloud native solutions. It follows a microservices architecture and is designed to be performant, secure and resilient.
**Benefits of open source IoT middleware**
Open source middleware for the IoT has the following advantages over proprietary options:
* It is easy to upgrade to new technologies with open source middleware.
* It has the ability to connect with upcoming device protocols and backend applications.
* Open source middleware ensures lower overall software costs, makes technology changes easier, and offers open source APIs for integration.
* It has a microservices based architecture and is built using open source technologies, resulting in high performance, scalability and fault-tolerance.
* It provides multi-protocol support and is hardware-agnostic. It supports connectivity for any device and any application.
* It has the flexibility to allow the cloud service provider to be changed.
Choosing the right set of open source middleware for an IoT solution is very important, and a big challenge, as the market offers a vast choice. As a first step, analyse the business problem and arrive at the solution. Then break the solution into services and understand the middleware needs of these services. This will help to narrow down the middleware choices.
IoT middleware helps overcome the problems associated with the heterogeneity of the entire Internet of Things by enabling smooth communication among devices and components from different vendors and based on different technologies.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/a-primer-on-open-source-iot-middleware-for-the-integration-of-enterprise-applications/
作者:[Gopala Krishna Behara][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/gopalakrishna-behara/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Middle-were-architecture-illustration.jpg?resize=696%2C426&ssl=1 (Middle were architecture illustration)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Middle-were-architecture-illustration.jpg?fit=800%2C490&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Figure-1-IoT-middleware-architecture.jpg?resize=350%2C162&ssl=1
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Pros and cons of event-driven security)
[#]: via: (https://opensource.com/article/19/10/event-driven-security)
[#]: author: (Yuriy Andamasov https://opensource.com/users/yuriy-andamasov)
Pros and cons of event-driven security
======
Event-driven security is not an impenetrable wall, but it may be cheaper
and better than what you've been doing to prevent data breaches.
![Three closed doors][1]
Great news, everyone! Forrester Research says that [95% of all recorded breaches][2] in 2016 came from only three industries: government, technology, and retail. Everyone else is safe... ish, right?
Hold on for a moment. Tech? Retail? What kind of industry diversification is this? We are, after all, living in 2019, where every business is a tech business. And all of us are continuously selling something, whether it's an innovative product or an amazing service.
So what the report should have said is that 95% of all recorded breaches came from attacks on 95% of all businesses both online and offline. And some of the attackers went for the .gov.
What's more, [43% of attackers target small businesses][3]—and that's a lot considering that, on average, a hack attempt takes place every 39 seconds.
To top things off, the average cost of a data breach in 2020 is expected to exceed [$150 million][4]. These stats sound a bit more terrifying out of context, but the threat is still very much real. Ouch.
What are our options then?
Well, either the developers, stakeholders, decision-makers, and business owners willingly risk the integrity and security of their solutions by doing nothing, or they can consider fortifying their digital infrastructure.
Sure, the dilemma doesn't seem like it offers too many options, and that's only because it doesn't. That said, establishing efficient network security is easier said than done.
### The cost of safety
Clearly, security is an expensive endeavor, a luxury even.
* Cybersecurity costs increased by 22.7% in only a year from 2016 to 2017.
* According to Gartner, organizations spent a total of $81.6 billion on cybersecurity, a $17.7 billion increase!
* And the worst part yet—the problem doesn't seem like it's going away regardless of how much money we throw at it.
Perhaps we are doing something wrong? Maybe it's the way that we perceive network security that's flawed? Maybe, just maybe, there's a cheaper AND better solution?
### Scalability: Bad?
Software, network, and architecture development have evolved dramatically over the last decade. We've moved from the age-old monolithic approach to leaner, more dynamic methodologies that allow faster reactions to the ever-shifting demands of the modern market.
That said, flexibility comes at a cost. A monolith is a solid, stable element of infrastructure where a small change can crash the whole thing like a house of cards. But said change—regardless of its potential danger—is easily traceable.
Today, the architecture is mostly service-based, where every single piece of functionality is like a self-contained Lego block. An error in one of the blocks won't bring down the entire system. It may not even affect the blocks standing near it.
This approach, while adding scalability, has a downside—it's really hard to trace a single malicious change, especially in an environment where every element is literally bombarded with new data, from an HR or security update to, well, a malicious code attack.
Does this mean it's best if we sacrifice scalability in favor of security?
Not at all. We've moved away from the monolith for a reason. Going back now would probably cost you your entire project. The tricky part is effectively identifying what is and what isn't a threat, as this is where the flaw of microservices lies.
We need preventive measures.
### Events, alerts, and incidents
Everything that happens within your network can be described in one of three words: event, alert, or incident.
An **event** is any observed change taking place in a network, environment, or workflow. So, for example, when a new firewall policy is pushed, you may consider that the event has happened. When the routers are updated, another event has happened, and so on and so forth.
An **alert** is an event that requires action. In simpler words, if you or your team need to do something due to the event taking place, it is considered an alert.
According to the NIST SP 800-61 definition, an **incident** is an event that violates your security policies. Or, in simpler words, it is an event that negatively impacts the business, like a worm spreading through the network, a phishing attempt, or the loss of sensitive data.
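In code, this taxonomy might look like the hypothetical classifier below. The policy predicates are invented for illustration; real rules would come from your own security policy.

```
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    description: str

# Hypothetical policy predicates, standing in for a real security policy.
def violates_policy(e: Event) -> bool:
    return "malware" in e.description or "exfiltration" in e.description

def requires_action(e: Event) -> bool:
    return "failed login" in e.description or violates_policy(e)

def classify(e: Event) -> str:
    if violates_policy(e):
        return "incident"  # violates security policy (the NIST SP 800-61 sense)
    if requires_action(e):
        return "alert"     # someone needs to do something about it
    return "event"         # an observed change, nothing more

for e in [Event("router", "firmware updated"),
          Event("auth", "failed login for admin, 50 attempts"),
          Event("endpoint", "malware signature detected")]:
    print(e.source, "->", classify(e))
```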
By this logic, your infrastructure developers, security officers, and net admins are tasked with a very simple mission: establishing efficient preventive measures against any and all incidents.
Again, easier said than done.
There are simply too many different events taking place at one time. Every change, shift, or update differs from the next, resulting in dozens of false-positive incidents. Add the fact that mischievous events are very keen on disguising themselves, and you'll get why your net admins look like they've lived on coffee and Red Bull for (at least) the past few weeks.
Is there anything we, as a responsible community of developers, managers, stakeholders, product, and business owners, can do?
### Event-driven security in a nutshell
What's the one thing that everything you ignore, act upon, or react to has in common?
An event.
Something needs to happen for you to respond to it in any shape or form. Additionally, many events are similar to one another and can be categorized as a stream.
Here's an example.
Let's say you have an e-commerce store. One of your clients adds an item to his cart (event) and then removes it (event) or proceeds with the purchase (event).
Beyond simply categorizing them, we can analyze these events to identify behavioral patterns, which makes it easier to identify threats in real time (and even to empower HR/dev/marketing teams with additional data).
#### Event vs. command
So event-driven security is essentially based on following up on events. Were we ever _not_ following up on them? Didn't we have commands for that?
Yes, we did, and that's partially the problem. Here's an example of an event versus a command:
> _Event: I sit on the couch, and the TV turns on._
> _Command: I sit on the couch and turn on the TV._
See the difference? I had to perform an action in the second scenario; in the first, the TV reacted to the event (me sitting on the couch generated the TV turning on) on its own.
The first approach ensures the integrity of your network through efficient use of automation, essentially allowing the software to operate on its own and decide whether to launch the next season of _Black Mirror_ on Netflix or to quarantine an upcoming threat.
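Here is a tiny Python sketch of that difference. With a command, the caller would invoke the TV directly; with an event, the TV, and, crucially, a security monitor, each react on their own to the same published event. All names are made up.

```
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal publish/subscribe bus: producers emit events, never commands."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, name: str, handler: Callable[[dict], None]):
        self._handlers[name].append(handler)

    def emit(self, name: str, payload: dict):
        for handler in self._handlers[name]:
            handler(payload)

bus = EventBus()

# The TV reacts to the event on its own; nobody commands it.
bus.subscribe("person.sat_on_couch", lambda p: print("TV: turning on for", p["who"]))
# A security handler can watch the very same event stream.
bus.subscribe("person.sat_on_couch", lambda p: print("monitor: logging presence of", p["who"]))

bus.emit("person.sat_on_couch", {"who": "me"})
```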
#### Isolation
Any event is essentially a trigger that launches the next application in the architecture. A user inputs his login, and the system validates its correctness, requests confirmation from the database, and tests the input for the presence of code.
So far, so good. Not much has changed, right?
Here's the thing: every process and every app runs autonomously, like separate events, each triggering its own chain. None of the apps knows whether other apps have been triggered or whether they are running any processes.
Think of them as separate, autonomous clusters. If one is compromised, it will not affect the entirety of the system, as it simply doesn't know that anything else exists. That said, a malfunction in one of the clusters will trigger an alert, thus preventing the incident.
#### An added bonus
Isolated apps are not dependent on one another, meaning you'll be able to plug in as many of them as you need without any of them risking or affecting the rest of the system.
Call it scalability out of the box, if you will.
### Pros of the event-driven approach
We've already discussed most of the pros of the event-driven approach. Let's summarize them here in the form of short bullet points.
* **Encapsulation:** Every process has a set of clear, easily executed boundaries.
* **Decoupling:** The processes are independent and unaware of one another.
* **Scalability:** It's easy to add new functionality, apps, and processes.
* **Data generation:** Event strings generate predictable data patterns you can easily analyze.
### Cons of the event-driven approach
Sadly, despite an impressive array of benefits to the business, event-driven architecture and security are not silver bullets. The approach has its flaws.
For starters, developing any architecture with an insane level of scalability is hard, expensive, and time-consuming.
Event-driven security is far from a truly impenetrable wall. Hackers evolve and adapt rather quickly. They'll likely find a breach in any system if they put their mind to it, whether through coding or through phishing.
Luckily, you don't have to build a Fort Knox. All you need is a solid system that's hard enough to crack that the hacker gives up and moves on to an easier target. The event-driven approach to network security does just that.
Moreover, it minimizes your losses if an incident actually happens, so you have that going for you, which is nice.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/event-driven-security
作者:[Yuriy Andamasov][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yuriy-andamasov
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA (Three closed doors)
[2]: https://www.techrepublic.com/article/forrester-what-can-we-learn-from-a-disastrous-year-of-hacks-and-breaches/
[3]: https://www.cybintsolutions.com/industries-likely-to-get-hacked/
[4]: https://www.cybintsolutions.com/cyber-security-facts-stats/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beamforming explained: How it makes wireless communication faster)
[#]: via: (https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)
Beamforming explained: How it makes wireless communication faster
======
Beamforming uses the science of electromagnetic interference to make Wi-Fi and 5G connections more precise.
Beamforming is a technique that focuses a wireless signal towards a specific receiving device, rather than having the signal spread in all directions from a broadcast antenna, as it normally would. The resulting more direct connection is faster and more reliable than it would be without beamforming.
Although the principles of beamforming have been known since the 1940s, in recent years beamforming technologies have introduced incremental improvements in Wi-Fi networking. Today, beamforming is crucial to the [5G networks][1] that are just beginning to roll out.
### How beamforming works
A single antenna broadcasting a wireless signal radiates that signal in all directions (unless it's blocked by some physical object). That's the nature of how electromagnetic waves work. But what if you wanted to focus that signal in a specific direction, to form a targeted beam of electromagnetic energy? One technique for doing this involves having multiple antennas in close proximity, all broadcasting the same signal at slightly different times. The overlapping waves will produce interference that in some areas is _constructive_ (it makes the signal stronger) and in other areas is _destructive_ (it makes the signal weaker, or undetectable). If executed correctly, this beamforming process can focus your signal where you want it to go.
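For the mathematically curious, the short numpy sketch below (an illustration, not from the article) computes the normalised array factor of a uniform linear array whose elements are fed with a progressive phase shift. The combined signal peaks in the steered direction, which is the whole point of beamforming. The element count, spacing and steering angle are arbitrary choices.

```
import numpy as np

N = 8                    # number of antennas (arbitrary)
d = 0.5                  # element spacing in wavelengths
steer = np.deg2rad(30)   # direction we want the beam to point

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
n = np.arange(N)[:, None]
# Per-element phase towards angle theta, minus the feed phase that steers the beam.
phase = 2 * np.pi * d * n * (np.sin(theta) - np.sin(steer))
af = np.abs(np.exp(1j * phase).sum(axis=0)) / N  # normalised array factor

print(f"beam peaks at {np.rad2deg(theta[np.argmax(af)]):.1f} degrees")  # ~30.0
```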
The mathematics behind beamforming is very complex (the [Math Encounters blog][4] has an introduction, if you want a taste), but the application of beamforming techniques is not new. Any form of energy that travels in waves, including sound, can benefit from beamforming techniques; they were first developed to [improve sonar during World War II][5] and are [still important to audio engineering][6]. But we're going to limit our discussion here to wireless networking and communications.  
### Beamforming benefits and limitations
By focusing a signal in a specific direction, beamforming allows you to deliver higher signal quality to your receiver — which in practice means faster information transfer and fewer errors — without needing to boost broadcast power. That's basically the holy grail of wireless networking and the goal of most techniques for improving wireless communication. As an added benefit, because you aren't broadcasting your signal in directions where it's not needed, beamforming can reduce interference experienced by people trying to pick up other signals.
The limitations of beamforming mostly involve the computing resources it requires; there are many scenarios where the time and power resources required by beamforming calculations end up negating its advantages. But continuing improvements in processor power and efficiency have made beamforming techniques affordable enough to build into consumer networking equipment.
### Wi-Fi beamforming routers: 802.11n vs. 802.11ac
Beamforming began to appear in routers back in 2008, with the advent of the [802.11n Wi-Fi standard][7]. 802.11n was the first version of Wi-Fi to support multiple-input multiple-output, or MIMO, technology, which beamforming needs in order to send out multiple overlapping signals. Beamforming with 802.11n equipment never really took off, however, because the spec doesn't lay out how beamforming should be implemented. A few vendors put out proprietary implementations that required purchasing matching routers and wireless cards to work, and they were not popular.
With the emergence of the [802.11ac standard][8] in 2016, that all changed. There's now a set of specified beamforming techniques for Wi-Fi gear, and while 802.11ac routers aren't required by the specification to implement beamforming, if they do (and almost all on the market now do) they do so in a vendor-neutral and interoperable way. While some offerings might tout branded names, such as D-Link's AC Smart Beam, these are all implementations of the same standard. (The even newer [802.11ax standard][9] continues to support ac-style beamforming.)
### Beamforming and MU-MIMO
Beamforming is key for the support of multiuser MIMO, or [MU-MIMO][10], which is becoming more popular as 802.11ax routers roll out. As the name implies, MU-MIMO involves multiple users that can each communicate to multiple antennas on the router. MU-MIMO [uses beamforming][11] to make sure communication from the router is efficiently targeted to each connected client.
### Explicit beamforming vs. implicit beamforming
There are a couple of ways that Wi-Fi beamforming can work. If both the router and the endpoint support 802.11ac-compliant beamforming, they'll begin their communication session with a little "handshake" that helps both parties establish their respective locations and the channel on which they'll communicate; this improves the quality of the connection and is known as _explicit_ beamforming. But there are still plenty of network cards in use that only support 802.11n or even older versions of Wi-Fi. A beamforming router can still attempt to target these devices, but without help from the endpoint, it won't be able to zero in as precisely. This is known as _implicit_ beamforming, or sometimes as _universal_ beamforming, because it works in theory with any Wi-Fi device.
In many routers, implicit beamforming is a feature you can turn on and off. Is enabling implicit beamforming worth it? The [Router Guide suggests][12] that you test how your network operates with it on and off to see if you get a boost from it. It's possible that devices such as phones that you carry around your house can see dropped connections with implicit beamforming.
### 5G beamforming
To date, local Wi-Fi networks are where the average person is most likely to encounter beamforming in the wild. But with the rollout of wide-area 5G networks now under way, that's about to change. 5G uses radio frequencies between 30 and 300 GHz, which can transmit data much more quickly but are also much more prone to interference and encounter more difficulty passing through physical objects. A host of technologies are required to overcome these problems, including smaller cells, massive MIMO — basically cramming tons of antennas onto 5G base stations — and, yes, [beamforming][13]. If 5G takes off in the way that vendors are counting on, the time will come soon enough when we'll all be using beamforming (behind the scenes) every day.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html
作者:[Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://images.idgesg.net/images/article/2019/10/nw_wifi_router_traditional-and-beamformer_foundational_networking_internet-100814037-orig.jpg
[4]: https://www.mathscinotes.com/2012/01/beamforming-math/
[5]: https://apps.dtic.mil/dtic/tr/fulltext/u2/a250189.pdf
[6]: https://www.mathworks.com/company/newsletters/articles/making-all-the-right-noises-shaping-sound-with-audio-beamforming.html
[7]: https://www.networkworld.com/article/2270718/the-role-of-beam-forming-in-11n.html
[8]: https://www.networkworld.com/article/3067702/mu-mimo-makes-wi-fi-better.html
[9]: https://www.networkworld.com/article/3258807/what-is-802-11ax-wi-fi-and-what-will-it-mean-for-802-11ac.html
[10]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html
[11]: https://www.networkworld.com/article/3256905/13-things-you-need-to-know-about-mu-mimo-wi-fi.html
[12]: https://routerguide.net/enable-beamforming-on-or-off/
[13]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[14]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
[15]: https://www.networkworld.com/article/3342158/cisco-exec-details-how-wi-fi-6-and-5g-will-fire-up-enterprises-in-2019-and-beyond.html
[16]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[17]: https://www.networkworld.com/article/3305583/wi-fi/wi-fi-analytics-get-real.html
[18]: https://www.facebook.com/NetworkWorld/
[19]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How GNOME uses Git)
[#]: via: (https://opensource.com/article/19/10/how-gnome-uses-git)
[#]: author: (Molly de Blanc https://opensource.com/users/mollydb)
How GNOME uses Git
======
The GNOME project's decision to centralize on GitLab is creating
benefits across the community—even beyond the developers.
![red panda][1]
“What's your GitLab?” is one of the first questions I was asked on my first day working for the [GNOME Foundation][2]—the nonprofit that supports GNOME projects, including the [desktop environment][3], [GTK][4], and [GStreamer][5]. The person was referring to my username on [GNOME's GitLab instance][6]. In my time with GNOME, I've been asked for my GitLab a lot.
We use GitLab for basically everything. In a typical day, I get several issues and reference bug reports, and I occasionally need to modify a file. I don't do this in the capacity of being a developer or a sysadmin. I'm involved with the Engagement and Inclusion & Diversity (I&D) teams. I write newsletters for Friends of GNOME and interview contributors to the project. I work on sponsorships for GNOME events. I don't write code, and I use GitLab every day.
The GNOME project has been managed in a lot of ways over the past two decades. Different parts of the project used different systems to track changes to code, collaborate, and share information both as a project and as a social space. However, the project decided that it needed to become more integrated, and the switch took about a year from conception to completion.
There were a number of reasons GNOME wanted to switch to a single tool for use across the community. External projects touch GNOME, and providing them an easier way to interact with resources was important for the project, both to support the community and to grow the ecosystem. We also wanted to better track metrics for GNOME—the number of contributors, the type and number of contributions, and the developmental progress of different parts of the project.
When it came time to pick a collaboration tool, we considered what we needed. One of the most important requirements was that it must be hosted by the GNOME community; being hosted by a third party didn't feel like an option, so that discounted services like GitHub and Atlassian. And, of course, it had to be free software. It quickly became obvious that the only real contender was GitLab. We wanted to make sure contribution would be easy. GitLab has features like single sign-on, which allows people to use GitHub, Google, GitLab.com, and GNOME accounts.
We agreed that GitLab was the way to go, and we began to migrate from many tools to a single tool. GNOME board member [Carlos Soriano][7] led the charge. With lots of support from GitLab and the GNOME community, we completed the process in May 2018.
There was a lot of hope that moving to GitLab would help grow the community and make contributing easier. Because GNOME previously used so many different tools, including Bugzilla and CGit, it's hard to quantitatively measure how the switch has impacted the number of contributions. We can more clearly track some statistics though, such as the nearly 10,000 issues closed and 7,085 merge requests merged between June and November 2018. People feel that the community has grown and become more welcoming and that contribution is, in fact, easier.
People come to free software from all sorts of different starting points, and it's important to try to even out the playing field by providing better resources and extra support for people who need them. Git, as a tool, is widely used, and more people are coming to participate in free software with those skills ready to go. Self-hosting GitLab provides the perfect opportunity to combine the familiarity of Git with the feature-rich, user-friendly environment provided by GitLab.
It's been a little over a year, and the change is really noticeable. Continuous integration (CI) has been a huge benefit for development, and it has been completely integrated into nearly every part of GNOME. Teams that aren't doing code development have also switched to using the GitLab ecosystem for their work. Whether it's using issue tracking to manage assigned tasks or version control to share and manage assets, even teams like Engagement and I&D have taken up using GitLab.
It can be hard for a community, even one developing free software, to adapt to a new technology or tool. It is especially hard in a case like GNOME, a project that [recently turned 22][8]. After more than two decades of building a project like GNOME, with so many parts used by so many people and organizations, the migration was an endeavor that was only possible thanks to the hard work of the GNOME community and generous assistance from GitLab.
I find a lot of convenience in working for a project that uses Git for version control. It's a system that feels comfortable and familiar—it's a tool that is consistent across workplaces and hobby projects. As a new member of the GNOME community, it was great to be able to jump in and just use GitLab. As a community builder, it's inspiring to see the results: more associated projects coming on board and entering the ecosystem; new contributors and community members making their first contributions to the project; and an increased ability to measure the work we're doing to know it's effective and successful.
It's great that so many teams doing completely different things (in terms of what they're working on and what skills they're using) agree to centralize on any tool—especially one that is considered a standard across open source. As a contributor to GNOME, I really appreciate that we're using GitLab.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/how-gnome-uses-git
作者:[Molly de Blanc][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mollydb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna (red panda)
[2]: https://www.gnome.org/foundation/
[3]: https://gnome.org/
[4]: https://www.gtk.org/
[5]: https://gstreamer.freedesktop.org/
[6]: https://gitlab.gnome.org/
[7]: https://twitter.com/csoriano1618?lang=en
[8]: https://opensource.com/article/19/8/poll-favorite-gnome-version
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Emergence of Edge Analytics in the IoT Ecosystem)
[#]: via: (https://opensourceforu.com/2019/10/the-emergence-of-edge-analytics-in-the-iot-ecosystem/)
[#]: author: (Shashidhar Soppin https://opensourceforu.com/author/shashidhar-soppin/)
The Emergence of Edge Analytics in the IoT Ecosystem
======
[![][1]][2]
_Edge analytics involves processing, collecting and analysing data near its source so that not all of it is sent to the cloud. This is crucial when massive volumes of data are generated by numerous devices. It saves a lot of time and reduces latency._
The exponential proliferation of IoT devices and IoT based platforms is leading to multiple complex scenarios. Such sudden growth and scalability need a lot of analytics to be done for better predictions and choices. IoT analytics will become more complex and challenging when the number of IoT devices grows even further. A good analytics system will become crucial for any IoT environment to succeed and be more robust.
Making any sense out of IoT data needs good and efficient analytics. This may not always be Big Data (volume, velocity and variety of data). There are many other categories of analytics, like simple past event review, or more advanced analytics using historical data to make predictions about outcomes, etc.
**What is analytics?**
Analytics (for IoT) is the science and art of trying to find matching patterns in the huge quantity of data (it could be Big Data, or otherwise) generated by connected and smart devices. In layman's terms, this can be defined simply as monitoring trends and finding any abnormalities.
Listed below are some of the well-known analytics methodologies or types:
* Descriptive analytics tells us what is happening
* Diagnostics analytics tells us why it happened
* Predictive analytics tells us what is likely to happen
* Prescriptive analytics tells us what should be done to prevent something from happening
Today, there are various tools and technologies available for analytics. Many of the enterprises now expect to have intelligence or analytics sit on the platform itself for better monitoring and live streaming capabilities. Processing historical data from the cloud or any other outside means takes time and adds latency to the entire response time. Analytics using microservices based technologies is also one trend observed recently.
Not every existing IoT solution needs analytics. Listed below are some types of analytics that can be performed:
* Generating basic reports
* Real-time stream analytics
* Long-term data analytics
* Large-scale data analytics
* Advanced data analytics
Once the data is acquired from any selected source, it has to be pre-processed to identify missing values and to scale the data. Once the pre-processing is done, feature extraction needs to be carried out. Feature extraction identifies the significant information that helps improve subsequent steps.
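As a hypothetical illustration of these steps, the sketch below fills missing values, scales the readings to zero mean and unit variance, and extracts a toy feature. A real pipeline would use a proper library and domain-specific features.

```
import numpy as np

# Hypothetical raw readings from one sensor; NaN marks missing values.
raw = np.array([21.0, 21.4, np.nan, 22.1, 80.0, np.nan, 22.3])

# 1. Handle missing values: here, fill them with the mean of the observed readings.
filled = np.where(np.isnan(raw), np.nanmean(raw), raw)

# 2. Scale to zero mean and unit variance so features are comparable.
scaled = (filled - filled.mean()) / filled.std()

# 3. Toy "feature extraction": count readings that sit far from the rest.
features = {"mean": round(float(filled.mean()), 2),
            "spikes": int((np.abs(scaled) > 2).sum())}
print(features)  # the 80.0 reading shows up as a spike
```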
**Basic analysis**
This analysis typically involves Big Data and will, most of the time, involve descriptive and diagnostic kinds of analytics.
This is the era of edge analytics. Many of the industries that work on IoT based solutions and frameworks use it, and it is expected to become the next big thing in the coming days. At present, there are many tools and technologies that deal with edge analytics.
**Edge analytics and the IoT tools**
This is analysis done at the source, by processing, collecting and analysing the data instead of sending it to the cloud or a server for processing and analysis. This saves a lot of time and avoids latency, which is why many of today's latency-sensitive problems can be solved this way.
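A minimal sketch of the idea, with assumed window and threshold values: keep a rolling window of readings at the edge and forward only the readings that deviate from the local average, so that routine data never leaves the device.

```
from collections import deque

WINDOW, THRESHOLD = 10, 3.0   # assumed tuning parameters

class EdgeFilter:
    """Analyse readings at the source; only anomalies are sent upstream."""
    def __init__(self):
        self.window = deque(maxlen=WINDOW)

    def ingest(self, value: float):
        if len(self.window) == self.window.maxlen:
            mean = sum(self.window) / len(self.window)
            if abs(value - mean) > THRESHOLD:
                self.send_to_cloud(value, mean)  # only anomalies leave the edge
        self.window.append(value)

    def send_to_cloud(self, value, mean):
        print(f"anomaly: {value} (rolling mean {mean:.1f})")

f = EdgeFilter()
for reading in [20.1, 20.3, 20.2, 20.4, 20.2, 20.1, 20.3, 20.2, 20.4, 20.3, 27.9]:
    f.ingest(reading)
```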
According to Gartner's predictions, by 2020, more than half the major new business processes and systems will incorporate some element of IoT. Analytics will become one of the key aspects of any of these IoT systems/sub-systems. It will support the decision-making process in related operations and help in business transformation.
A brief introduction of some of the well-known open source IoT tools for edge analytics follows.
**EdgeX Foundry:** This is an open source project that is hosted by the Linux Foundation. It provides interoperability, can be hosted within the hardware and is an OS-agnostic software platform. The EdgeX ecosystem can be used as plug-and-play with support for multiple components, and this can be easily deployed as an IoT solution with edge analytics. EdgeX Foundry can be deployed using microservices as well.
**Website:** _<https://github.com/edgexfoundry/edgex-go>_
**Eclipse Kura:** Eclipse Kura is one more popular open source IoT edge framework. It is based on Java/OSGi and offers API based access to the underlying hardware interface of IoT gateways (for serial ports, watchdog, GPS, I2C, etc). The Kura architecture can be found at _<http://eclipse.github.io/kura/intro/intro.html>_.
Kura components are designed as configurable, OSGi-based declarative services, which are exposed as service APIs. Most Kura components are pure Java, while others can be invoked through JNI and generally have a dependency on the Linux operating system.
**Eclipse Kapua:** Eclipse Kapua is a modular IoT cloud platform that is mainly used to manage and integrate devices and their data. Kapua comes loaded with the following features.
* **Connect:** Many of the IoT devices can be connected to Kapua using MQTT and other protocols.
* **Manage:** The management of a devices applications, configurations and resources can be easily done.
* **Store and analyse:** Kapua helps in storing and indexing the data published by IoT devices for quick and easy analysis, and later visualisation in the dashboard.
* **Integrate:** Kapua services can be easily integrated with various IT applications through flexible message routing and ReST based APIs.
**Website:** _<https://www.eclipse.org/kapua/>_
As mentioned earlier, and based on Gartner's analysis, edge analytics will become one of the leading tech trends in the coming months. The more analysis that is done at the edge, the more sophisticated and advanced the whole IoT ecosystem will become. There will come a day when M2M communication may happen independently, without much human intervention.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/the-emergence-of-edge-analytics-in-the-iot-ecosystem/
作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/shashidhar-soppin/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Graph-Analytics-1.jpg?resize=696%2C464&ssl=1 (Graph Analytics)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Graph-Analytics-1.jpg?fit=800%2C533&ssl=1
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The software-defined data center drives agility)
[#]: via: (https://www.networkworld.com/article/3446040/the-software-defined-data-center-drives-agility.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
The software-defined data center drives agility
======
The value of SDN is doing as much as possible in the software so you don't depend on the delivery of new features to come from a new generation of the hardware.
In this day and age, demands on networks come from a variety of sources: internal end-users, external customers and changes in the application architecture. Such demands put pressure on traditional architectures.
To deal effectively with these demands, the network domain must become more dynamic. For this, we must embrace digital transformation. However, current methods are delaying this much-needed transition. One major pain point networks suffer from is manual working and the lack of fabric-wide automation. This must be addressed if organizations are to implement new products and services ahead of the competition.
So, to evolve, to be in line with the current times and use technology as an effective tool, one must drive the entire organization to become a digital enterprise. The network components do play a key role, but the digital transformation process is an enterprise-wide initiative.
### Digital transformation and network visibility
As a part of this transformation, the network is no longer considered a black box. Now the network is a source of deep visibility that can aid a large set of use cases like network performance, monitoring, security and capacity planning, to name a few. However, in spite of its critical importance, visibility is often overlooked.
We need the ability to provide deep visibility for the application at a flow level. Today, if you want anything comparable, you would deploy a separate, redundant monitoring network. Such a network would consist of probes, packet brokers and various tools to process the packets for metadata.
A more viable solution would integrate the network visibility into the fabric and therefore would not need a bunch of components. This enables us to do more with the data and aids with agility for ongoing network operations. There will always be some kind of requirement for application optimization or a security breach. And this is where visibility can help you iron out such issues quickly.
### Gaining agility with SDN
When increasing agility, what is useful is the building of a complete network overlay. An overlay is a solution that is abstracted from the underlying physical infrastructure in a certain way.
What this means is that we are separating and disaggregating the customer applications or services from the network infrastructure. This is more like a sandbox or private network for each application that is on an existing network. This way we are empowered with both the SDN controller and controllerless options. Both [data center architectures][3] _[Disclaimer: The author is employed by Network Insight]_ have their advantages and disadvantages.
Traditionally, the deployment of an application to the network involves propagating the policy to work through the entire infrastructure. Why? Because the network simply acts as an underlay and the segmentation rules configured on this underlay are needed to separate different applications and services. This, ultimately, creates a very rigid architecture that is unable to react quickly and adapt to the changes, therefore lacking agility. In essence, the applications and the physical network are tightly coupled.
[Virtual networks][4] are mostly built from either the servers or ToR switches. Either way, the underlying network transports the traffic and doesnt need to be configured to accommodate the customer application. That is all carried in the overlay. By and large, everything happens in the overlay network which is most efficient when done in a fully distributed manner.
Now the application and service deployment occur without touching the network. Once the tight coupling between the application and network is removed, increased agility and simplicity of deploying applications and services are created.
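A toy Python model, loosely in the spirit of VXLAN-style encapsulation, shows why the underlay stays untouched: tenant addressing travels inside the overlay payload, and the physical network forwards only the outer addresses. All addresses and the VNI here are made up.

```
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object

def encapsulate(inner: Packet, local_vtep: str, remote_vtep: str, vni: int) -> Packet:
    """Wrap tenant traffic; the underlay only ever sees the outer addresses."""
    return Packet(src=local_vtep, dst=remote_vtep, payload={"vni": vni, "inner": inner})

def decapsulate(outer: Packet):
    return outer.payload["vni"], outer.payload["inner"]

app_pkt = Packet("10.0.0.5", "10.0.0.9", "order #1234")            # tenant traffic
wire_pkt = encapsulate(app_pkt, "192.168.1.10", "192.168.1.20", vni=5001)

vni, original = decapsulate(wire_pkt)
print(wire_pkt.src, "->", wire_pkt.dst, "| VNI", vni, "| inner dst:", original.dst)
```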
### Where do your features live?
Some vendors build the differentiator of their offering into the hardware. With different hardware, you can accelerate the services. With this design, the hardware level is manipulated, but it does not use the standard Open Networking protocols. The result is that you are 100% locked in, unable to move, as the cost of moving is too high.
You could have numerous generations of, for example, line cards, all with different capabilities, resulting in a complex feature matrix. When the Cisco Nexus platform first came out, I was onsite as a TDA trying to bring some redundancy into the edge/access layer.
When virtual PortChannel (vPC) came out, there were several topologies, and some of these topologies were only available on certain hardware. As it's just a topology, it would have been useful to have it across all line cards. This is the world of closed networking, which has been accepted as the norm until now.
### Open networking
Traditionally, networking products were a combination of the hardware and software that had to be purchased together as an integrated solution. Open Networking, on the other hand, is the disaggregation of hardware from the software. This basically allows IT to mix and match at will.
With Open Networking, you are not reinventing the way packets are forwarded, or the way routers communicate with each other. Why? Because, with Open Networking, you are never alone and never the only vendor. You need to adapt and fit, and for this, you need to use open protocols.
The value of SDN is doing as much as possible in the software so you don't depend on the delivery of new features to come from a new generation of the hardware. You want to put as much intelligence as possible into the software, thus removing the intelligence from the physical layer.
You don't want to build the features into the hardware; instead, you want to use the software to provide the new features. This is an important philosophy and the essence of Open Networking. From the customer's point of view, they get more agility, as they can move from generation to generation of services without hardware dependency. They don't have to incur the operational costs of constantly swapping out the hardware.
### First steps
It is agreed that agility is a necessity. So, what is the prime step? One of the key steps is to create a software-defined data center that will allow the rapid deployment of compute and storage for the workloads. In addition to software-defined compute and storage, the network must be automated and not be an impediment.
Many organizations assume that to achieve agility, we must move everything to the cloud. Migrating workloads to the cloud does indeed allow organizations to be competitive and equipped with the capabilities of a much larger organization.
Only a small proportion can adopt a fully cloud-native design. More often than not, there will always be some kind of application that needs to stay on-premise. In this case, the agility in the cloud needs to be matched by the on-premise infrastructure. This requires the virtualization of the on-premise compute, storage and network components.
Compute and storage, affordable software control, and virtualization have progressed dramatically. However, the network can cause a lag. Solutions do exist but they are complex, expensive and return on investment (ROI) is a stretch. Therefore, such solutions are workable only for the largest enterprises. This creates a challenge for mid-sized businesses that want to virtualize the network components.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446040/the-software-defined-data-center-drives-agility.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
[3]: https://network-insight.net/2014/08/data-center-topologies/
[4]: https://youtu.be/-Yjk0GiysLI
[5]: https://www.networkworld.com/contributor-network/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Drupal shows leadership on diversity and inclusion)
[#]: via: (https://opensource.com/article/19/10/drupals-diversity-initiatives)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
Drupal shows leadership on diversity and inclusion
======
Drupal's Diversity & Inclusion Group is taking an innovative approach to
bringing attention to underrepresented groups in open source.
![Two diverse hands holding a globe][1]
I didn't expect [DrupalCon Seattle][2]'s opening keynote to address the barriers that hold people back from making open source contributions. So imagine my surprise when Dries Buytaert, the creator and project lead of Drupal and the co-founder and CTO of Acquia, [used his time onstage][3] to share an apology.
> _"I used to think anyone could contribute to Drupal if they just had the will. I was wrong—many people don't contribute to open source because they don't have the time."_
> — Dries Buytaert
Buytaert [disproved the long-held belief][4] that open source is a meritocracy. The truth is that anyone who has free time to do ongoing, unpaid work is more privileged than most. If you're working a second job, caring for aging parents, or earning less due to systemic wage gaps for people of color, you can't start your open source career on equal ground.
> I wonder if a personal story will help :) In the past year, I dedicated my life to [#drupaldiversity][5]. The bridges that helped us achieve success at the highest level this year were often at a personal cost of my nights, weekends, and personal life.
>
> — Fatima (she/her) (@sugaroverflow) [April 15, 2019][6]
### Increasing diversity and inclusion in Drupal
Buytaert's keynote highlighted the Drupal project's commitment to diversity—and diversifying.
For example, the Drupal Association awards grants, scholarships, and money from its inclusion fund to contributors of minority status so they can travel and attend the Association's DrupalCon events. Recently, 18 contributors [received awards][7] to attend [DrupalCon Amsterdam][8] in late October.
In addition, the all-volunteer [Drupal Diversity & Inclusion][9] (DDI) collective works to diversify Drupal's speaker base. All of DDI's projects are open for anyone to work on in the [Drupal Diversity][10] or [DDI Contrib][11] project repositories.
In its August newsletter, DDI shared another way it seeks to expand awareness of diverse populations. The group starts each of its weekly meetings by asking members to say where they're from and to acknowledge the indigenous history of the land they live on. In July, DDI launched a related project: [Land Acknowledgements][12], which invites Drupal community members to research their homelands' indigenous histories and share them in a community blog post.
This project caught my eye, and I made a [contribution][13] about the indigenous history of Montgomery County, Maryland, where I live. This project is still open: [anyone can add their research][14] to the blog post.
To learn more, I interviewed [Alanna Burke][15], a Drupal developer who helps lead the Land Acknowledgements project. Our conversation has been edited for length and clarity.
### Acknowledging indigenous history
**Lauren Maffeo:** Describe Drupal's Land Acknowledgments project. How did the idea for this project come about, and what is its end goal?
**Alanna Burke:** In our weekly Slack meetings, we ask everyone to introduce themselves and do a land acknowledgment. I'm not sure when we started doing that. One week, I had the idea that it would be really neat to have folks do a little more research and contribute it back into a blog post—we do these acknowledgments, but without more context or research, they're not very meaningful. We wanted people to find out more about the land that they live on and the indigenous people who, in many cases, still live there.
**LM:** How long will you accept contributions to the project? Do you have an ultimate goal for project contributions?
**AB:** Right now, we intend to accept contributions for as long as people want to send them in!
**LM:** How many contributions have you received thus far?
**AB:** We've had four contributions, plus mine. I think folks have been busy, but that's why we made the decision to keep contributions open. We don't have a goal in terms of numbers, but I'd love to see more people do the research into their land, learn about it, and find out something interesting they didn't know before.
**LM:** Describe the response to this project so far. Do you have plans to expand it beyond Drupal to the broader open source community?
**AB:** Folks seemed to think it was a really great idea! There were definitely a lot of people who wanted to participate but haven't found the time or who just mentioned that it was cool. We haven't discussed any plans to expand it, since we focus on the Drupal community, but I'd encourage any community to take this idea and run with it, see what your community members come up with!
**LM:** Which leaders in the Drupal community have created and maintained this project?
**AB:** Myself and the other members of the DDI leadership team: Tara King, Fatima Khalid, Marc Drummond, Elli Ludwigson, Alex Laughnan, and Alex McCabe.
**LM:** What are some 2019 highlights of Drupal's Diversity & Inclusion initiative? Which goals do you aim to achieve in 2020?
**AB:** Our biggest highlight this year was the Speaker Diversity Workshop, which we held on September 28th. Jill Binder of the WordPress community led this free online workshop aimed at helping underrepresented folks prepare for speaking at camps and conferences.
We are also going to hold an online [train-the-trainers workshop][16] on November 16th so communities can hold their own speaker workshops.
In 2020, we'd like to build on these successes. We did a lot of fundraising and created a lot of great relationships in order to make this happen. We have a [handful of initiatives][17] that we are working on at any given time, so we'll be continuing to work on those.
**LM:** How can Opensource.com readers contribute to the Land Acknowledgements post and other Drupal D&I initiatives?
**AB:** Check out [the doc][14] in the issue. Spend a little time doing research, write up a few paragraphs, and submit it! Or, start up an initiative in your community to do land acknowledgments in your meetings or do a collaborative post like ours. Do land acknowledgments as part of events like camps and conferences.
To get involved in DDI, check out our [guide][18]. We have [an issue queue][10], and we meet in Slack every Thursday for a text-only meeting.
**LM:** Do you have statistics for how many women and people of minority status (gender, sexuality, religion, etc.) contribute to the Drupal project? If so, what are they?
**AB:** We have some numbers—we'd love to have more. This [post][19] has some breakdowns, but here's the gist from 2017-2018, the latest we have:
* [Drupal.org][20] received code contributions from 7,287 different individuals and 1,002 different organizations.
* The reported data shows that only 7% of the recorded contributions were made by contributors that do not identify as male, which continues to indicate a steep gender gap.
* When measuring geographic diversity, we saw individual contributors from six different continents and 123 different countries.
Recently, we have implemented [Open Demographics][21] on Drupal.org, a project by DDI's founder, Nikki Stevens. We hope this will give us better demographic data in the future.
### Closing the diversity gap in open source
Drupal is far from alone among [open source communities with a diversity gap][22], and I think it deserves a lot of credit for tackling these issues head-on. Diversity and inclusion is a much broader topic than most of us realize. Before I read DDI's August newsletter, the history of indigenous people in my community was something that I hadn't really thought about before. Thanks to DDI's project, I'm not only aware of the people who lived in Maryland long before me, but I've come to appreciate and respect what they brought to this land.
I encourage you to learn about the native people in your homeland and record their history in DDI's Land Acknowledgements blog. If you're a member of another open source project, consider replicating this project there. 
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/drupals-diversity-initiatives
作者:[Lauren Maffeo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_hands_diversity.png?itok=zm4EDxgE (Two diverse hands holding a globe)
[2]: https://events.drupal.org/seattle2019
[3]: https://www.youtube.com/watch?v=BNoCn6T9Xf8
[4]: https://dri.es/the-privilege-of-free-time-in-open-source
[5]: https://twitter.com/hashtag/drupaldiversity?src=hash&ref_src=twsrc%5Etfw
[6]: https://twitter.com/sugaroverflow/status/1117876869590728705?ref_src=twsrc%5Etfw
[7]: https://events.drupal.org/amsterdam2019/grants-scholarships
[8]: https://events.drupal.org/amsterdam2019
[9]: https://opencollective.com/drupal-diversity-and-inclusion
[10]: https://www.drupal.org/project/issues/diversity
[11]: https://www.drupal.org/project/issues/ddi_contrib
[12]: https://www.drupaldiversity.com/blog/2019/land-acknowledgments
[13]: https://www.drupal.org/project/diversity/issues/3063065#comment-13234777
[14]: https://www.drupal.org/project/diversity/issues/3063065
[15]: https://www.drupal.org/u/aburke626
[16]: https://www.drupaldiversity.com/blog/2019/learn-how-hold-your-own-speaker-diversity-workshop-saturday-november-16
[17]: https://www.drupaldiversity.com/initiatives
[18]: https://www.drupaldiversity.com/get-involved
[19]: https://dri.es/who-sponsors-drupal-development-2018
[20]: http://Drupal.org
[21]: https://www.drupal.org/project/open_demographics
[22]: https://opensource.com/resources/diversity-open-source

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data center liquid-cooling to gain momentum)
[#]: via: (https://www.networkworld.com/article/3446027/data-center-liquid-cooling-to-gain-momentum.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Data center liquid-cooling to gain momentum
======
The serious number-crunching demands of AI, IoT and big data - and the heat they generate - may mean air cooling is on its way out.
Concern over escalating energy costs is among reasons liquid-cooling solutions could gain traction in the [data center][1].
Schneider Electric, a major energy-management specialist, this month announced fresh momentum for a collaboration it began in 2014 with [liquid-cooling specialist Iceotope][2]. Now, [technology solutions company Avnet has been brought into that collaboration][3].
The three companies will develop chassis-level immersive liquid cooling for data centers, Schneider Electric says in a [press release][5]. Liquid-cooling systems submerge server components in a dielectric fluid, as opposed to air-cooled systems, which circulate chilled ambient air.
One reason for the shift: “Compute-intensive applications like AI and [IoT][7] are driving the need for better chip performance,” Kevin Brown, CTO and SVP of Innovation, Secure Power, Schneider Electric, is quoted as saying.
“Liquid Cooling [is] more efficient and less costly for power-dense applications,” the company explains. That's in part because the use of graphics processing units (GPUs), which is replacing some traditional processing, is gaining ground. GPUs are better suited to data-mining-type applications than traditional processors: they parallel-process and are now used extensively in artificial-intelligence compute environments and in processor-hungry analytics that churn through big data.
“This makes traditional data-center air-cooled architectures impractical, or costly and less efficient than liquid-cooled approaches.” The case for liquid cooling as the new go-to solution is also tied to “space constraints, water usage restrictions and harsh IT environments,” [Schneider said in a white paper earlier this year][8]:
As chip density increases and the rack space required to hold the gear shrinks, the space needed for traditional air-based cooling equipment keeps growing: even as greater computing density reduces the footprint of the equipment itself, the space required to air-cool it expands. GPUs generate so much heat that air cooling stops being practical.
Additionally, as edge data centers become more important, there's an advantage to using IT that can be placed anywhere. “As the demand for IT deployments in urban areas, high rise buildings, and at the Edge increase, the need for placement in constrained locations will increase,” the paper says. In such scenarios, not requiring space for hot and cold aisles would be an advantage.
Liquid cooling would allow for silent operation, too; there aren't any fans or pumps making disruptive noise.
Liquid cooling would also address restrictions on water usage that can affect the ability to use evaporative cooling and cooling towers to carry off heat generated by data centers. Direct-to-chip liquid-cooling systems of the kind the three companies want to concentrate their efforts on target cooling narrowly at the server, not at the building level.
In harsh environments such as factories and [industrial IoT][9] deployments, heat and air quality can hinder air-cooling systems. Liquid-cooling systems can be self-contained in sealed units, which protects them from dust, for example.
Interestingly, as serious computer gamers will know, liquid cooling isn't a new technology, [Wendy Torell points out in a Schneider blog post][10] pitching the technology. “It's been around for decades and has historically focused on mainframes, high-performance computing (HPC), and gaming applications,” she explains. “Demand for IoT, artificial intelligence, machine learning, big data analytics, and edge applications is once again bringing it into the limelight.”
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446027/data-center-liquid-cooling-to-gain-momentum.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: http://www.iceotope.com/about
[3]: https://www.avnet.com/wps/portal/us/about-avnet/overview/
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.prnewswire.com/news-releases/schneider-electric-announces-partnership-with-avnet-and-iceotope-to-develop-liquid-cooled-data-center-solutions-300929586.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[8]: https://www.schneider-electric.us/en/download/search/liquid%20cooling/?langFilterDisabled=true
[9]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[10]: https://blog.se.com/datacenter/2019/07/11/not-just-about-chip-density-five-reasons-consider-liquid-cooling-data-center/
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Measuring the business value of open source communities)
[#]: via: (https://opensource.com/article/19/10/measuring-business-value-open-source)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)
Measuring the business value of open source communities
======
Corporate constituencies are interested in finding out the business
value of open source communities. Find out how to answer key questions
with the right metrics.
![Lots of people in a crowd.][1]
In _[Measuring the health of open source communities][2]_, I covered some of the key questions and metrics that we've explored as part of the [CHAOSS project][3] as they relate to project founders, maintainers, and contributors. In this article, we focus on open source corporate constituents (such as open source program offices, business risk and legal teams, human resources, and others) and end users.
Where the bulk of the metrics for core project teams are quantitative, for the remaining constituents our metrics must reflect a much broader range of interests and address many more qualitative measures. From a collection standpoint, the data gathering for qualitative measures is far more manual and subjective, but it is nonetheless within the scope that CHAOSS hopes to address as the project matures.
While people on the business side of things do sometimes care about the metrics in use by the project itself, there are only two fundamental questions that corporate constituencies have. The first is about _value_: "Will this choice help our business make more money sooner?" The second is about _risk_: "Will this choice hurt our business's chances of making money?"
Those questions can come in many different iterations across disciplines, from human resources to legal counsel and executive offices. But, at the end of the day, having answers that are based on data can make open source engagement more efficient, effective, and less risky.
Once again, the information below is structured in a Goal-Question-Metric format:
* Open source program offices (OSPOs)
* As an OSPO leader, I care about prioritizing our resources toward healthy communities:
* How [active][4] is the community?
**Metric:** [Code development][5] \- The number of commits and pull requests, review time for new code commits and pull requests, code reviews and merges, the number of accepted vs. rejected pull requests, and the frequency of new version releases.
**Metric:** [Issue resolution][6] \- The number of new issues, closed issues, the ratio of new vs. closed issues, and the average open time per issue. (A minimal sketch of computing these counts appears after this list.)
**Metric:** Social - Social media mention counts, social media sentiment analysis, the activity of community blog, and news releases (_future release_).
* What is the [value][7] of our contributions to the project? (This is an area in active development.)
**Metric:** Time value - Time saved for training developers on new technologies, and time saved maintaining custom development once the improvements are upstreamed.
**Metric:** Dollar value - How much would it have cost to maintain changes and custom solutions internally, versus contributing upstream and ensuring compatibility with future community releases?
* What is the value of contributions to the project by other contributors and organizations?
**Metric:** Time value - Time to market, new community-developed features released, and support for the project by the community versus the company.
**Metric:** Dollar value - How much would it cost to internally rebuild the features provided by the community, and what is the opportunity cost of lagging behind innovations in open source projects?
* Downstream value: How many other projects list our project as a dependency?
**Metric:** The value of the ecosystem that is around a project.
* How many forks of our project have there been?
**Metric:** Are core developers more active in the mainline or a fork?
**Metric:** Are the forks contributing back to the mainline, or developing in new directions?
* Engineering leadership
* As an approving architect, I care most about good design patterns that introduce a minimum of technical debt.
**Metric:** [Test Coverage][8] \- What percentage of the code is tested?
**Metric:** What is the percentage of code undergoing code reviews?
**Metric:** Does the project follow [Core Infrastructure Initiative (CII) Best Practices][9]?
* As an engineering executive, I care most about minimizing time-to-market and bugs, and maximizing platform stability and reliability.
**Metric:** The defect resolution velocity.
**Metric:** The defect density.
**Metric:** The feature development velocity.
* I also want social proofs that give me a level of comfort.
**Metric:** Sentiment analysis of social media related to the project.
**Metric:** Count of white papers.
**Metric:** Code Stability - Project version numbers and the frequency of new releases.
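To make the quantitative items above concrete, here is a minimal shell sketch of the issue-resolution counts mentioned earlier, pulled from the public GitHub search API. This is only an illustration, not part of the CHAOSS tooling: it assumes `curl`, `jq`, and `bc` are installed, and the repository name is just an example.

```
#!/bin/sh
# Sketch: approximate two issue-resolution metrics (open/closed counts and
# their ratio) for one repository via the GitHub search API.
# Unauthenticated requests are rate-limited; the repo name is an example.
REPO="chaoss/augur"

open=$(curl -s "https://api.github.com/search/issues?q=repo:$REPO+type:issue+state:open" | jq '.total_count')
closed=$(curl -s "https://api.github.com/search/issues?q=repo:$REPO+type:issue+state:closed" | jq '.total_count')

echo "open issues:   $open"
echo "closed issues: $closed"
# Avoid dividing by zero on brand-new projects.
[ "$closed" -gt 0 ] && echo "open/closed ratio: $(echo "scale=2; $open / $closed" | bc)"
```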
There is also the issue of legal counsel. This goal statement is: "As legal counsel, I care most about minimizing our company's chances of getting sued." The question is: "What kind of license does the software have, and what obligations do we have under the license?"
The metrics involved here are:
* **Metric:** [License Count][10] \- How many different licenses are declared in a given project?
* **Metric:** [License Declaration][11] \- What kinds of licenses are declared in a given project?
* **Metric:** [License Coverage][12] \- How much of a given codebase is covered by the declared license?
Lastly, our project is considering further goals to measure the impact of corporate open source policy as it relates to talent acquisition and retention. The goal for human resource managers is: "As an HR manager, I want to attract and retain the best talent I can." The questions and metrics are as follows:
* What impact do our open source policies have on talent acquisition?
**Metric:** Talent acquisition - Measure over time how many candidates report that it's important to them that they get to work with open source technologies.
* What impact do our open source policies have on talent retention?
**Metric:** Talent retention - Measure how much employee churn can be reduced because of people being able to work with or use open source technologies.
* What is the impact on training employees who can learn from engaging in open source projects?
**Metric:** Talent development - Measure over time the importance to employees of being able to use open source tech effectively.
* How does allowing employees to work in a community outside of the company impact job satisfaction?
**Metric:** Talent satisfaction - Measure over time the importance to employees of being able to contribute to open source tech.
**Source:** Internal surveys.
**Source:** Exit interviews. Did our policies around open source technologies at all influence your decision to leave?
### Wrapping up
It is still the early days of building a platform for bringing together these disparate data sources. The CHAOSS core of [Augur][13] and [GrimoireLab][14] currently supports over two dozen sources, and I'm excited to see what lies ahead for this project.
As the CHAOSS frameworks mature, I'm optimistic that teams and projects that implement these types of measurement will be able to make better real-world decisions that result in healthier and more productive software development lifecycles.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/measuring-business-value-open-source
作者:[Jon Lawrence][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/the3rdlaw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community_1.png?itok=rT7EdN2m (Lots of people in a crowd.)
[2]: https://opensource.com/article/19/8/measure-project
[3]: https://github.com/chaoss/
[4]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/community_growth.md
[5]: https://github.com/chaoss/wg-evolution#metrics
[6]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/issue_resolution.md
[7]: https://github.com/chaoss/wg-value
[8]: https://chaoss.community/metric-test-coverage/
[9]: https://github.com/coreinfrastructure/best-practices-badge
[10]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Count.md
[11]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Declared.md
[12]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Coverage.md
[13]: https://github.com/chaoss/augur
[14]: https://github.com/chaoss/grimoirelab

@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Pennsylvania school district tackles network modernization)
[#]: via: (https://www.networkworld.com/article/3445976/pennsylvania-school-district-tackles-network-modernization.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Pennsylvania school district tackles network modernization
======
NASD upgrades its campus core to be the foundation for digital learning.
Success in business and education today starts with infrastructure modernization. In fact, my research has found that digitally-forward organizations spend more than twice what their non-digital counterparts spend on evolving their IT infrastructure. However, most of the focus from IT has been on upgrading the application and compute infrastructure, with little thought given to a critical ingredient: the network. Organizations can only be as agile as the least agile component of their infrastructure, and for most companies, that's the network.
### Manual processes plague network reliability
Legacy networks have outlived their useful life. The existing three-plus-tier architecture was designed for an era when network traffic was considered “best-effort,” with no way to guarantee performance or reserve bandwidth, and when networks delivered only non-mission-critical applications. Employees and educators ran applications locally, and the majority of critical data resided on workstations.
Today, everything has changed. Applications have moved to the cloud, workers are constantly on the go, and companies are connecting things to business networks at an unprecedented rate. One could argue that, for most organizations, the network is the business. Consider what's happened in our personal lives. People stream content, communicate using video, shop online, and rely on the network for almost every aspect of their lives.
The same thing is happening to digital organizations. Companies today must support the needs of applications that are becoming increasingly dynamic and distributed. An unavailable or poorly performing network means the organization comes to a screeching halt.
Yet network engineering teams working with legacy networks can't keep up with demands; the rigid and manual processes required to hard-code configuration are slow and error-prone. In fact, ZK Research found that the largest cause of downtime with legacy networks is human error.
Given the importance of the network, this kind of madness must stop. Businesses will never be able to harness the potential of digital transformation without modernizing the network.
What's required is a network that is more dynamic and intelligent, one that simplifies operations via automation. This can lead to better control and faster error detection, diagnosis, and resolution. These buzzwords have been tossed around by many vendors and customers as the vision of where we are headed, yet actual customer deployments have been hard to find.
### NASD modernizes wired and wireless network to support digital curriculum
The Nazareth Area School District (NASD) recently went through a network modernization project.
The Eastern Pennsylvania school district, which has roughly 4,800 students, has a bold vision: to inspire students to be innovative, collaborative and constructive members of the community who embrace the tenets of diversity, value, education and honesty. NASD aims to accomplish its vision by helping students build a strong work ethic and sense of responsibility and by challenging them to be leaders and good global citizens.
To support its goals, NASD set out to completely revamp the way it teaches. The district embraced a number of modern technologies that would foster immersive learning and collaboration.
There's a heavy emphasis on science, technology, engineering, arts and mathematics (STEAM), which drives more focus on coding, robotics, and virtual and augmented reality. For example, the teachers are using Google Expeditions VR Classroom kits to integrate VR into the classroom. In addition, NASD has converted many of its classrooms into “affinity rooms” where students can work together on different projects in the areas of VR, AR, robotics, stop motion photography, and other advanced technologies.
NASD understood that modernizing education requires a modernized network. If new tools and applications don't perform as expected, it can hurt the learning process as students sit around waiting while network problems are solved. The district knew it needed to upgrade its network to one that was more intelligent, reliable and easier to diagnose.
NASD chose Aruba, a Hewlett Packard Enterprise company, to be its wired and wireless networking supplier.
In my opinion, the decision to upgrade the wired and wireless networks at the same time is a smart one. Many organizations put in a new Wi-Fi network only to find the wired backbone can't support the traffic or doesn't have the necessary reliability.
The high-availability switches are running the new ArubaOS-CX operating system designed for the digital transformation era. The network devices are configured through a centralized graphical interface and not a command line interface (CLI), and they have an onboard Network Analytics Engine to reduce the complexity of running the network.
NASD selected two Aruba 8320 switches to be the core of its network, to provide “utility-grade networking” that is always on and always available, much like power.
“By running two switches in tandem, we would gain a fully redundant network that made failovers, whether planned or unplanned, completely undetectable by our users,” said Mike Fahey, senior application and network administrator at NASD.
### Wanted: utility-grade Wi-Fi
Utility-grade Wi-Fi was a must for NASD, as almost all of the new learning tools connect via Wi-Fi only. The school system had been using two Wi-Fi vendors, neither of which performed well, and both required long troubleshooting periods.
The Nazareth IT staff initially replaced the most problematic APs with Aruba APs. As this happened, Michael Uelses, director of IT, said that the teachers noticed a marked difference in Wi-Fi performance. Now, the entire school has standardized on Arubas gigabit Wi-Fi and has expanded it to outdoor locations. This has enabled the school to expand its security strategy and new emergency preparedness application to include playgrounds, parking lots and other outdoor areas where Wi-Fi previously did not reach.
Supporting gigabit Wi-Fi required upgrading the backbone network to 10 Gigabit, which the Aruba 8320 switches support. The switches can also be upgraded to high speeds, up to 100 Gigabit, if the need arises. NASD is planning to expand the use of bandwidth-hungry apps such as VR to immerse students in subjects including biology and engineering. The option to upgrade the switches gives NASD the confidence it has made the right network choices for the future.
What NASD is doing should be a message to all schools. Digital tools are here to stay and can change the way students learn. Success with digital education requires a rock-solid wired and wireless network to deliver utility-like services that are always on so students can always be learning.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3445976/pennsylvania-school-district-tackles-network-modernization.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (To space and beyond with open source)
[#]: via: (https://opensource.com/article/19/10/open-source-space-exploration)
[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari)
To space and beyond with open source
======
Open source projects are helping to satisfy our curiosity about what
lies far beyond Earth's atmosphere.
![Person looking up at the stars][1]
Carl Sagan once said, "The universe is a pretty big place. If it's just us, seems like an awful waste of space." In that vast desert of seeming nothingness hides some of the most mysterious and beautiful creations humankind ever has—or ever will—witness.
Our ancient ancestors looked up into the night sky and dreamed about space, just as we do today. Starting with simple, naked-eye observations of the sky and progressing to create [space telescopes][2] that uncover far reaches of the universe, we've come a long way toward understanding and redefining the concepts of time, space, and matter. Our exploration has provided some answers to humanity's enduring questions about the existence of extraterrestrial life, about the finite or infinite nature and origin of the universe, and so much more. And we still have so much to discover.
### Curiosity, a crucial component for space exploration
The Cambridge Dictionary defines [curiosity][3] as "an eager wish to know or learn about something." It's curiosity that fuels our drive to acquire knowledge about outer space, but what drives our curiosity, our "eager wish," in the first place?
I believe that our curiosity is driven by the desire to escape the unpleasant feeling of uncertainty that is triggered by acknowledging our lack of knowledge. The intrinsic reward that comes from escaping uncertainty pushes us to find a correct (or at least a less wrong) answer to whatever question is at hand.
If we want space discovery to advance at a faster pace, we need more people to become aware of the rewards that are waiting for them when they make the effort and discover answers for their questions about the universe. Space discovery is admittedly not an easy task, because finding correct answers requires following rigorous methods on a long-term scale.
Luckily, open source initiatives are emerging that make it easier for people to get started exploring and enjoying the beauty of outer space.
### Two open source initiatives for space discovery
#### OpenSpace Project
One of the most beautiful tools for exploring space is [OpenSpace][4], an open source visualization tool of the entire known universe. It is an incredible way to visualize the environment of other planets, such as Mars and Jupiter, galaxies, and more.
![The Moon visualized by the OpenSpace project][5]
To enjoy a smooth experience from the OpenSpace simulation (e.g., a minimum 30fps), you need a powerful GPU; check the [GitHub repository][6] for more information.
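If you want to try building it from source, a recursive clone is a safe first step, since it also fetches any Git submodules the project uses; the repository's README documents the full, platform-specific build instructions. A minimal sketch:

```
# Fetch OpenSpace along with any submodule dependencies.
git clone --recursive https://github.com/OpenSpace/OpenSpace.git
cd OpenSpace
```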
#### Libre Space Foundation
The [Libre Space Foundation][7]'s mission is "to promote, advance, and develop libre (free and open source) technologies and knowledge for space." Among other things, the project is working to create an open source network of satellite ground stations that can communicate with satellites, spaceships, and space stations. It also supports the [UPSat project][8], which aspires to be the first completely open source satellite launched.
### Advancing the human species
I believe that the efforts made by these open source initiatives are contributing to the advancement of the human species in space. By increasing our interest in space, we are creating opportunities to upgrade our civilization's technological level, moving further up on the [Kardashev scale][9] and possibly becoming a multi-planetary species. Maybe one day, we will build a [Dyson sphere][10] around the sun to capture energy emissions, thereby harnessing an energy resource that exceeds any found on Earth and opening up a whole new world of possibilities.
### Satisfy your curiosity
Our solar system is only a tiny dot swimming in a universe of gems, and the outer space environment has never stopped amazing and intriguing us.
If your curiosity is piqued and you want to learn more about outer space, check out [Kurzgesagt's][11] YouTube videos, which cover topics ranging from the origin of the universe to the strangest stars in a beautiful and concise manner.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/open-source-space-exploration
作者:[Jaouhari Youssef][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jaouhari
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/space_stars_cosmos_person.jpg?itok=XUtz_LyY (Person looking up at the stars)
[2]: https://en.wikipedia.org/wiki/List_of_space_telescopes
[3]: https://dictionary.cambridge.org/us/dictionary/english/curiosity
[4]: https://www.openspaceproject.com/
[5]: https://opensource.com/sites/default/files/uploads/moon.png (The Moon visualized by the OpenSpace project)
[6]: https://github.com/OpenSpace/OpenSpace
[7]: https://libre.space/
[8]: https://upsat.gr/
[9]: https://en.wikipedia.org/wiki/Kardashev_scale
[10]: https://en.wikipedia.org/wiki/Dyson_sphere
[11]: https://kurzgesagt.org/

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Project Trident Ditches BSD for Linux)
[#]: via: (https://itsfoss.com/bsd-project-trident-linux/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Project Trident Ditches BSD for Linux
======
Recently a BSD distribution announced that it was going to rebase on Linux. Yep, you heard me correctly. Project Trident is moving to Void Linux.
### What is Going on with Project Trident?
Recently, Project Trident [announced][1] that they had been working behind the scenes to move away from FreeBSD. This is quite a surprising move (and an unprecedented one).
According to a [later post][2], the move was motivated by long-standing issues with FreeBSD. These issues include how “hardware compatibility, communications standards, or package availability continue to limit Project Trident users”. According to a conversation on [Telegram][3], FreeBSD had just updated its build of the Telegram client, and it was nine releases behind everyone else.
The lead dev of Project Trident, [Ken Moore][4], is also the main developer of the Lumina Desktop. The [Lumina Desktop][5] has been on hold for a while because the [Project Trident][6] team had to do so much work just to keep their packages updated. (Once they complete the transition to Void Linux, Ken will start working on Lumina again.)
After much searching and testing, the Project Trident team decided to use [Void Linux][7] as their new base.
According to the Project Trident team, the move to Void Linux will have the [following benefits][2]:
* Better GPU support
* Better sound card and streaming support
* Better wireless support
* Bluetooth support for the first time
* Up to date versions of applications
* Faster boot times
* Hybrid EFI/Legacy installation and boot support
### Moving Plans
![][8]
Project Trident currently has two different versions available: Trident-stable and Trident-release. Trident-stable is based on FreeBSD 12 and will continue to get updates until January of 2020 with the ports repo being deleted in April of 2020. On the other hand, Trident-release (which is based on FreeBSD 13) will receive no further updates. That ports repo will be deleted in January of 2020.
The first Void Linux-based releases should be available in January of 2020. Ken said that they might issue an alpha iso or two to show off their progress, but they would be for testing purposes only.
Currently, Ken said that they are working to port all of their “in-house utilities over to work natively on Void Linux”. Void Linux does not support ZFS-on-root, which is a big part of the BSDs. However, Project Trident is planning to use their knowledge of ZFS to add support for it to Void.
There will not be a migration path from the FreeBSD-based version to the Void-based version. If you are currently using Project Trident, you will need to backup your `/home/*` directory before performing a clean install of the new version.
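As a minimal sketch of that backup step (the destination path below is illustrative, assuming an external drive mounted at `/mnt/backup`):

```
# Preserve all user home directories, including permissions and timestamps,
# before wiping the disk for the clean install. Run as root if you are
# backing up other users' homes.
rsync -a /home/ /mnt/backup/home/
```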
### Final Thoughts
I'm looking forward to trying out the new Void Linux-based Project Trident. I have installed and used Void Linux in the past. I have also tried out [TrueOS][9] (the precursor of Project Trident). However, I could never get Project Trident to work on my laptop.
When I was using Void Linux, I ran into two main issues: installing a desktop environment was a pain, and the GUI package manager wasn't that great. Project Trident plans to address these issues. Their original goal was to find an operating system that didn't come with a desktop environment by default, so that their distro could add desktop support out-of-the-box. They won't be able to port the AppCafe package manager to Void because it is part of the TrueOS SysAdm utility. They do plan to “develop a new graphical front-end to the XBPS package manager for Void Linux”.
Interestingly, Void Linux was created by a former NetBSD developer. I asked Ken if that fact influenced their decision. He said, “Actually none! I liked the way that Void Linux was set up and that most/all of the utilities were either MIT or BSD licensed, but I never guessed that it was created by a former NetBSD developer. That definitely helps to explain why Void Linux 'feels' more comfortable to me since I have been using FreeBSD exclusively for the last 7 or more years.”
I've seen some people on the web speaking disparagingly of the move to Void Linux. They mentioned that the name changes (from PC-BSD to TrueOS to Project Trident) and the changes in architecture (from FreeBSD to TrueOS/FreeBSD to Void Linux) show that the developers don't know what they are doing. On the other hand, I believe that Project Trident has finally found its niche where it will be able to grow and blossom. I will be watching the future of Project Trident with much anticipation. You will probably be reading a review of the new version when it is released.
Have you ever used Project Trident? What is your favorite BSD? Please let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/bsd-project-trident-linux/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://project-trident.org/post/train_changes/
[2]: https://project-trident.org/post/os_migration/
[3]: https://t.me/ProjectTrident
[4]: https://github.com/beanpole135
[5]: https://lumina-desktop.org/
[6]: https://itsfoss.com/project-trident-interview/
[7]: https://voidlinux.org/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/bsd-linux.jpg?resize=800%2C450&ssl=1
[9]: https://itsfoss.com/trueos-bsd-review/
[10]: https://reddit.com/r/linuxusersgroup

@ -1,3 +1,4 @@
luming translating
23 open source audio-visual production tools
======

@ -1,266 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How writers can get work done better with Git)
[#]: via: (https://opensource.com/article/19/4/write-git)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How writers can get work done better with Git
======
If you're a writer, you could probably benefit from using Git. Learn how
in our series about little-known uses of Git.
![Writing Hand][1]
[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at ways writers can use Git to get work done.
### Git for writers
Some people write fiction; others write academic papers, poetry, screenplays, technical manuals, or articles about open source. Many do a little of each. The common thread is that if you're a writer, you could probably benefit from using Git. While Git is famously a highly technical tool used by computer programmers, it's ideal for the modern author, and this article will demonstrate how it can change the way you write—and why you'd want it to.
Before talking about Git, though, it's important to talk about what _copy_ (or _content_, for the digital age) really is, and why it's different from your delivery _medium_. It's the 21st century, and the tool of choice for most writers is a computer. While computers are deceptively good at combining processes like copy editing and layout, writers are (re)discovering that separating content from style is a good idea, after all. That means you should be writing on a computer like it's a typewriter, not a word processor. In computer lingo, that means writing in _plaintext_.
### Writing in plaintext
It used to be a safe assumption that you knew what market you were writing for. You wrote content for a book, or a website, or a software manual. These days, though, the market's flattened: you might decide to use content you write for a website in a printed book project, and the printed book might release an EPUB version later. And in the case of digital editions of your content, the person reading your content is in ultimate control: they may read your words on the website where you published them, or they might click on Firefox's excellent [Reader View][3], or they might print to physical paper, or they could dump the web page to a text file with Lynx, or they may not see your content at all because they use a screen reader.
It makes sense to write your words as words, leaving the delivery to the publishers. Even if you are also your own publisher, treating your words as a kind of source code for your writing is a smarter and more efficient way to work, because when it comes time to publish, you can use the same source (your plaintext) to generate output appropriate to your target (PDF for print, EPUB for e-books, HTML for websites, and so on).
Writing in plaintext not only means you don't have to worry about layout or how your text is styled, but you also no longer require specialized tools. Anything that can produce text becomes a valid "word processor" for you, whether it's a basic notepad app on your mobile or tablet, the text editor that came bundled with your computer, or a free editor you download from the internet. You can write on practically any device, no matter where you are or what you're doing, and the text you produce integrates perfectly with your project, no modification required.
And, conveniently, Git specializes in managing plaintext.
### The Atom editor
When you write in plaintext, a word processor is overkill. Using a text editor is easier because text editors don't try to "helpfully" restructure your input. It lets you type the words in your head onto the screen, no interference. Better still, text editors are often designed around a plugin architecture, such that the application itself is woefully basic (it edits text), but you can build an environment around it to meet your every need.
A great example of this design philosophy is the [Atom][4] editor. It's a cross-platform text editor with built-in Git integration. If you're new to working in plaintext and new to Git, Atom is the easiest way to get started.
#### Install Git and Atom
First, make sure you have Git installed on your system. If you run Linux or BSD, Git is available in your software repository or ports tree. The command you use will vary depending on your distribution; on Fedora, for instance:
```
$ sudo dnf install git
```
You can also download and install Git for [Mac][5] and [Windows][6].
You won't need to use Git directly, because Atom serves as your Git interface. Installing Atom is the next step.
If you're on Linux, install Atom from your software repository through your software installer or the appropriate command, such as:
```
$ sudo dnf install atom
```
Atom does not currently build on BSD. However, there are very good alternatives available, such as [GNU Emacs][7]. For Mac and Windows users, you can find installers on the [Atom website][4].
Once your installs are done, launch the Atom editor.
#### A quick tour
If you're going to live in plaintext and Git, you need to get comfortable with your editor. Atom's user interface may be more dynamic than what you are used to. You can think of it more like Firefox or Chrome than as a word processor, in fact, because it has tabs and panels that can be opened and closed as they are needed, and it even has add-ons that you can install and configure. It's not practical to try to cover all of Atom's many features, but you can at least get familiar with what's possible.
When Atom opens, it displays a welcome screen. If nothing else, this screen is a good introduction to Atom's tabbed interface. You can close the welcome screens by clicking the "close" icons on the tabs at the top of the Atom window and create a new file using **File > New File**.
Working in plaintext is a little different than working in a word processor, so here are some tips for writing content in a way that a human can connect with and that Git and computers can parse, track, and convert.
#### Write in Markdown
These days, when people talk about plaintext, mostly they mean Markdown. Markdown is more of a style than a format, meaning that it intends to provide a predictable structure to your text so computers can detect natural patterns and convert the text intelligently. Markdown has many definitions, but the best technical definition and cheatsheet is on [CommonMark's website][8].
```
# Chapter 1
This is a paragraph with an *italic* word and a **bold** word in it.
And it can even reference an image.
![An image will render here.](drawing.jpg)
```
As you can tell from the example, Markdown isn't meant to read or feel like code, but it can be treated as code. If you follow the expectations of Markdown defined by CommonMark, then you can reliably convert, with just one click of a button, your writing from Markdown to .docx, .epub, .html, MediaWiki, .odt, .pdf, .rtf, and a dozen other formats _without_ loss of formatting.
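For instance, a standalone converter such as Pandoc (one common tool for this job; any CommonMark-aware converter will do) can render the same Markdown source into several delivery formats. The file names below are illustrative:

```
# Render one Markdown source file into three delivery targets;
# pandoc infers the output format from the file extension.
pandoc chapter.md -o chapter.html
pandoc chapter.md -o chapter.epub
pandoc chapter.md -o chapter.docx
```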
You can think of Markdown a little like a word processor's styles. If you've ever written for a publisher with a set of styles that govern what chapter titles and section headings look like, this is basically the same thing, except that instead of selecting a style from a drop-down menu, you're adding little notations to your text. These notations look natural to any modern reader who's used to "txt speak," but are swapped out with fancy text stylings when the text is rendered. It is, in fact, what word processors secretly do behind the scenes. The word processor shows bold text, but if you could see the code generated to make your text bold, it would be a lot like Markdown (actually it's the far more complex XML). With Markdown, that barrier is removed, which looks scarier on the one hand, but on the other hand, you can write Markdown on literally anything that generates text without losing any formatting information.
The popular file extension for Markdown files is .md. If you're on a platform that doesn't know what a .md file is, you can associate the extension to Atom manually or else just use the universal .txt extension. The file extension doesn't change the nature of the file; it just changes how your computer decides what to do with it. Atom and some platforms are smart enough to know that a file is plaintext no matter what extension you give it.
#### Live preview
Atom features the **Markdown Preview** plugin, which shows you both the plain Markdown you're writing and the way it will (commonly) render.
![Atom's preview screen][9]
To activate this preview pane, select **Packages > Markdown Preview > Toggle Preview** or press **Ctrl+Shift+M**.
This view provides you with the best of both worlds. You get to write without the burden of styling your text, but you also get to see a common example of what your text will look like, at least in a typical digital format. Of course, the point is that you can't control how your text is ultimately rendered, so don't be tempted to adjust your Markdown to force your render preview to look a certain way.
#### One sentence per line
Your high school writing teacher doesn't ever have to see your Markdown.
It won't come naturally at first, but maintaining one sentence per line makes more sense in the digital world. Markdown ignores single line breaks (when you've pressed the Return or Enter key) and only creates a new paragraph after a single blank line.
![Writing in Atom][10]
The advantage of writing one sentence per line is that your work is easier to track. That is, if you've changed one word at the start of a paragraph, then it's easy for Atom, Git, or any application to highlight that change in a meaningful way if the change is limited to one line rather than one word in a long paragraph. In other words, a change to one sentence should only affect that sentence, not the whole paragraph.
You might be thinking, "many word processors track changes, too, and they can highlight a single word that's changed." But those revision trackers are bound to the interface of that word processor, which means you can't look through revisions without being in front of that word processor. In a plaintext workflow, you can review revisions in plaintext, which means you can make or approve edits no matter what you have on hand, as long as that device can deal with plaintext (and most of them can).
Writers admittedly don't usually think in terms of line numbers, but it's a useful tool for computers, and ultimately a great reference point in general. Atom numbers the lines of your text document by default. A _line_ is only a line once you have pressed the Enter or Return key.
![Writing in Atom][11]
If a line has a dot instead of a number, that means it's part of the previous line wrapped for you because it couldn't fit on your screen.
#### Theme it
If you're a visual person, you might be very particular about the way your writing environment looks. Even if you are writing in plain Markdown, it doesn't mean you have to write in a programmer's font or in a dark window that makes you look like a coder. The simplest way to modify what Atom looks like is to use [theme packages][12]. It's conventional for theme designers to differentiate dark themes from light themes, so you can search with the keyword Dark or Light, depending on what you want.
To install a theme, select **Edit > Preferences**. This opens a new tab in the Atom interface. Yes, tabs are used for your working documents _and_ for configuration and control panels. In the **Settings** tab, click on the **Install** category.
In the **Install** panel, search for the name of the theme you want to install. Click the **Themes** button on the right of the search field to search only for themes. Once you've found your theme, click its **Install** button.
![Atom's themes][13]
To use a theme you've installed or to customize a theme to your preference, navigate to the **Themes** category in your **Settings** tab. Pick the theme you want to use from the drop-down menu. The changes take place immediately, so you can see exactly how the theme affects your environment.
You can also change your working font in the **Editor** category of the **Settings** tab. Atom defaults to monospace fonts, which are generally preferred by programmers. But you can use any font on your system, whether it's serif or sans or gothic or cursive. Whatever you want to spend your day staring at, it's entirely up to you.
On a related note, by default Atom draws a vertical marker down its screen as a guide for people writing code. Programmers often don't want to write long lines of code, so this vertical line is a reminder to them to simplify things. The vertical line is meaningless to writers, though, and you can remove it by disabling the **wrap-guide** package.
To disable the **wrap-guide** package, select the **Packages** category in the **Settings** tab and search for **wrap-guide**. When you've found the package, click its **Disable** button.
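If you'd rather script these steps, Atom ships with the `apm` command-line package manager, which can handle the same package operations shown above. The theme name below is only an example:

```
# Install a theme and disable the wrap-guide package from a shell.
apm install atom-material-ui   # example theme; browse atom.io/themes for more
apm disable wrap-guide
```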
#### Dynamic structure
When creating a long document, I find that writing one chapter per file makes more sense than writing an entire book in a single file. Furthermore, I don't name my chapters in the obvious syntax **chapter-1.md** or **1.example.md**, but by chapter titles or keywords, such as **example.md**. To provide myself guidance in the future about how the book is meant to be assembled, I maintain a file called **toc.md** (for "Table of Contents") where I list the (current) order of my chapters.
I do this because, no matter how convinced I am that chapter 6 just couldn't possibly happen before chapter 1, there's rarely a time that I don't swap the order of one or two chapters or sections before I'm finished with a book. I find that keeping it dynamic from the start helps me avoid renaming confusion, and it also helps me treat the material less rigidly.
### Git in Atom
Two things every writer has in common is that they're writing for keeps and their writing is a journey. You don't sit down to write and finish with a final draft; by definition, you have a first draft. And that draft goes through revisions, each of which you carefully save in duplicate and triplicate just in case one of your files turns up corrupted. Eventually, you get to what you call a final draft, but more than likely you'll be going back to it one day, either to resurrect the good parts or to fix the bad.
The most exciting feature in Atom is its strong Git integration. Without ever leaving Atom, you can interact with all of the major features of Git, tracking and updating your project, rolling back changes you don't like, integrating changes from a collaborator, and more. The best way to learn it is to step through it, so here's how to use Git within the Atom interface from the beginning to the end of a writing project.
First things first: reveal the Git panel by selecting **View > Toggle Git Tab**. This causes a new tab to open on the right side of Atom's interface. There's not much to see yet, so just keep it open for now.
#### Starting a Git project
You can think of Git as being bound to a folder. Any folder outside a Git directory doesn't know about Git, and Git doesn't know about it. Folders and files within a Git directory are ignored until you grant Git permission to keep track of them.
You can create a Git project by creating a new project folder in Atom. Select **File > Add Project Folder** and create a new folder on your system. The folder you create appears in the left **Project Panel** of your Atom window.
#### Git add
Right-click on your new project folder and select **New File** to create a new file in your project folder. If you have files you want to import into your new project, right-click on the folder and select **Show in File Manager** to open the folder in your system's file viewer (Dolphin or Nautilus on Linux, Finder on Mac, Explorer on Windows), and then drag-and-drop your files.
With a project file (either the empty one you created or one you've imported) open in Atom, click the **Create Repository** button in the **Git** tab. In the pop-up dialog box, click **Init** to initialize your project directory as a local Git repository. Git adds a **.git** directory (invisible in your system's file manager, but visible to you in Atom) to your project folder. Don't be fooled by this: The **.git** directory is for Git to manage, not you, so you'll generally stay out of it. But seeing it in Atom is a good reminder that you're working in a project actively managed by Git; in other words, revision history is available when you see a **.git** directory.
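If you're curious what happens behind the button, Atom's **Init** step corresponds roughly to this terminal command (the folder name is an example):

```
# turn an ordinary folder into a local Git repository
cd myproject
git init
```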
In your empty file, write some stuff. You're a writer, so type some words. It can be any set of words you please, but remember the writing tips above.
Press **Ctrl+S** to save your file and it will appear in the **Unstaged Changes** section of the **Git** tab. That means the file exists in your project folder but has not yet been committed over to Git's purview. Allow Git to keep track of your file by clicking on the **Stage All** button in the top-right of the **Git** tab. If you've used a word processor with revision history, you can think of this step as permitting Git to record changes.
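The **Stage All** button likewise maps onto plain Git commands; a sketch of the terminal equivalent:

```
# review what changed, then stage every modified or new file
git status
git add --all
```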
#### Git commit
Your file is now staged. All that means is that Git knows the file exists and knows it has changed since the last time Git recorded it.
A Git commit sends your file into Git's internal and eternal archives. If you're used to word processors, this is similar to naming a revision. To create a commit, enter some descriptive text in the **Commit** message box at the bottom of the **Git** tab. You can be vague or cheeky, but it's more helpful to enter useful information for your future self so that you know why the revision was made.
The first time you make a commit, you must create a branch. Git branches are a little like alternate realities, allowing you to switch from one timeline to another to make changes that you may or may not want to keep forever. If you end up liking the changes, you can merge one experimental branch into another, thereby unifying different versions of your project. It's an advanced process that's not worth learning upfront, but you still need an active branch, so you have to create one for your first commit.
Click on the **Branch** icon at the very bottom of the **Git** tab to create a new branch.
![Creating a branch][14]
It's customary to name your first branch **master**. You don't have to; you can name it **firstdraft** or whatever you like, but adhering to the local customs can sometimes make talking about Git (and looking up answers to questions) a little easier, because you'll know that when someone mentions **master**, they really mean **master** and not **firstdraft** or whatever you called your branch.
On some versions of Atom, the UI may not update to reflect that you've created a new branch. Don't worry; the branch will be created (and the UI updated) once you make your commit. Press the **Commit** button, whether it reads **Create detached commit** or **Commit to master**.
Once you've made a commit, the state of your file is preserved forever in Git's memory.
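In terminal terms, the branch-and-commit step looks roughly like this (the branch name and the message are examples):

```
# optionally start a branch with a name of your choosing, then record the commit
git checkout -b firstdraft
git commit -m "first draft of chapter one"
```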
#### History and Git diff
A natural question is how often you should make a commit. There's no one right answer to that. Saving a file with **Ctrl+S** and committing to Git are two separate processes, so you will continue to do both. You'll probably want to make commits whenever you feel like you've done something significant or are about to try out a crazy new idea that you may want to back out of.
To get a feel for what impact a commit has on your workflow, remove some text from your test document and add some text to the top and bottom. Make another commit. Do this a few times until you have a small history at the bottom of your **Git** tab, then click on a commit to view it in Atom (a terminal sketch of the same inspection follows the list below).
![Viewing differences][15]
When viewing a past commit, you see three elements:
1. Text in green was added to a document when the commit was made.
2. Text in red was removed from the document when the commit was made.
3. All other text was untouched.
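A sketch of the same inspection from a terminal, if you ever need it outside Atom:

```
# list recent commits, then show what the latest one changed
git log --oneline
git diff HEAD~1 HEAD
```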
#### Remote backup
One of the advantages of using Git is that, by design, it is distributed, meaning you can commit your work to your local repository and push your changes out to any number of servers for backup. You can also pull changes in from those servers so that whatever device you happen to be working on always has the latest changes.
For this to work, you must have an account on a Git server. There are several free hosting services out there, including GitHub (the company that produces Atom, though the service itself, oddly, is not open source) and GitLab, which is open source. Preferring open source to proprietary, I'll use GitLab in this example.
If you don't already have a GitLab account, sign up for one and start a new project. The project name doesn't have to match your project folder in Atom, but it probably makes sense if it does. You can leave your project private, in which case only you and anyone you give explicit permissions to can access it, or you can make it public if you want it to be available to anyone on the internet who stumbles upon it.
Do not add a README to the project.
Once the project is created, it provides you with instructions on how to set up the repository. This is great information if you decide to use Git in a terminal or with a separate GUI, but Atom's workflow is different.
Click the **Clone** button in the top-right of the GitLab interface. This reveals the address you must use to access the Git repository. Copy the **SSH** address (not the **https** address).
In Atom, click on your project's **.git** directory and open the **config** file. Add these configuration lines to the file, adjusting the **seth/example.git** part of the **url** value to match your unique address.
```
[remote "origin"]
url = git@gitlab.com:seth/example.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master
```
At the bottom of the **Git** tab, a new button has appeared, labeled **Fetch**. Since your server is brand new and therefore has no data for you to fetch, right-click on the button and select **Push**. This pushes your changes to your GitLab account, and now your project is backed up on a Git server.
Pushing changes to a server is something you can do after each commit. It provides immediate offsite backup and, since the amount of data is usually minimal, it's practically as fast as a local save.
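Should you ever need to wire up the same remote from a terminal instead of editing **config** by hand, the equivalent is roughly (using the example address from above):

```
# register the remote once, then push the current branch and set it as the default upstream
git remote add origin git@gitlab.com:seth/example.git
git push -u origin master
```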
### Writing and Git
Git is a complex system, useful for more than just revision tracking and backups. It enables asynchronous collaboration and encourages experimentation. This article has covered the basics, but there are many more articles—and entire books—on Git and how to use it to make your work more efficient, more resilient, and more dynamic. It all starts with using Git for small tasks. The more you use it, the more questions you'll find yourself asking, and eventually the more tricks you'll learn.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/write-git
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03 (Writing Hand)
[2]: https://git-scm.com/
[3]: https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages
[4]: http://atom.io
[5]: https://git-scm.com/download/mac
[6]: https://git-scm.com/download/win
[7]: http://gnu.org/software/emacs
[8]: https://commonmark.org/help/
[9]: https://opensource.com/sites/default/files/uploads/atom-preview.jpg (Atom's preview screen)
[10]: https://opensource.com/sites/default/files/uploads/atom-para.jpg (Writing in Atom)
[11]: https://opensource.com/sites/default/files/uploads/atom-linebreak.jpg (Writing in Atom)
[12]: https://atom.io/themes
[13]: https://opensource.com/sites/default/files/uploads/atom-theme.jpg (Atom's themes)
[14]: https://opensource.com/sites/default/files/uploads/atom-branch.jpg (Creating a branch)
[15]: https://opensource.com/sites/default/files/uploads/git-diff.jpg (Viewing differences)

View File

@ -1,158 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is a Java constructor?)
[#]: via: (https://opensource.com/article/19/6/what-java-constructor)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/ashleykoree)
What is a Java constructor?
======
Constructors are powerful components of programming. Use them to unlock
the full potential of Java.
![][1]
Java is (disputably) the undisputed heavyweight in open source, cross-platform programming. While there are many [great][2] [cross-platform][2] [frameworks][3], few are as unified and direct as [Java][4].
Of course, Java is also a pretty complex language with subtleties and conventions all its own. One of the most common questions about Java relates to **constructors**: What are they and what are they used for?
Put succinctly: a constructor is an action performed upon the creation of a new **object** in Java. When your Java application creates an instance of a class you have written, it checks for a constructor. If a constructor exists, Java runs the code in the constructor while creating the instance. That's a lot of technical terms crammed into a few sentences, but it becomes clearer when you see it in action, so make sure you have [Java installed][5] and get ready for a demo.
### Life without constructors
If you're writing Java code, you're already using constructors, even though you may not know it. All classes in Java have a constructor because even if you haven't created one, Java does it for you when the code is compiled. For the sake of demonstration, though, ignore the hidden constructor that Java provides (because a default constructor adds no extra features), and take a look at life without an explicit constructor.
Suppose you're writing a simple Java dice-roller application because you want to produce a pseudo-random number for a game.
First, you might create your dice class to represent a physical die. Knowing that you play a lot of [Dungeons and Dragons][6], you decide to create a 20-sided die. In this sample code, the variable **dice** is the integer 20, representing the maximum possible die roll (a 20-sided die cannot roll more than 20). The variable **roll** is a placeholder for what will eventually be a random number, and **rand** serves as the random seed.
```
import java.util.Random;

public class DiceRoller {
    private int dice = 20;
    private int roll;
    private Random rand = new Random();
```
Next, create a function in the **DiceRoller** class to execute the steps the computer must take to emulate a die roll: Take an integer from **rand** and assign it to the **roll** variable, add 1 to account for the fact that Java starts counting at 0 but a 20-sided die has no 0 value, then print the results.
```
    public void Roller() {
        roll = rand.nextInt(dice);
        roll += 1;
        System.out.println(roll);
    }
```
Finally, spawn an instance of the **DiceRoller** class and invoke its primary function, **Roller**:
```
    // main loop
    public static void main(String[] args) {
        System.out.printf("You rolled a ");
        DiceRoller App = new DiceRoller();
        App.Roller();
    }
}
```
As long as you have a Java development environment installed (such as [OpenJDK][10]), you can run your application from a terminal:
```
$ java dice.java
You rolled a 12
```
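One caveat: running a **.java** file directly like this relies on the single-file source launcher introduced in Java 11. On an older JDK, you would compile first; a sketch, assuming the source file is named **DiceRoller.java** to match the class:

```
# compile to bytecode, then run the resulting class
javac DiceRoller.java
java DiceRoller
```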
In this example, there is no explicit constructor. It's a perfectly valid and legal Java application, but it's a little limited. For instance, if you set your game of Dungeons and Dragons aside for the evening to play some Yahtzee, you would need 6-sided dice. In this simple example, it wouldn't be that much trouble to change the code, but that's not a realistic option in complex code. One way you could solve this problem is with a constructor.
### Constructors in action
The **DiceRoller** class in this example project represents a virtual dice factory: When it's called, it creates a virtual die that is then "rolled." However, by writing a custom constructor, you can make your Dice Roller application ask what kind of die you'd like to emulate.
Most of the code is the same, with the exception of a constructor that accepts some number of sides. This number doesn't exist yet; it will be supplied later, when the class is instantiated.
```
import java.util.Random;

public class DiceRoller {
    private int dice;
    private int roll;
    private Random rand = new Random();

    // constructor
    public DiceRoller(int sides) {
        dice = sides;
    }
```
The function emulating a roll remains unchanged:
```
    public void Roller() {
        roll = rand.nextInt(dice);
        roll += 1;
        System.out.println(roll);
    }
```
The main block of code feeds whatever arguments you provide when running the application. Were this a complex application, you would parse the arguments carefully and check for unexpected results, but for this sample, the only precaution taken is converting the argument string to an integer type:
```
    public static void main(String[] args) {
        System.out.printf("You rolled a ");
        DiceRoller App = new DiceRoller(Integer.parseInt(args[0]));
        App.Roller();
    }
}
```
Launch the application and provide the number of sides you want your die to have:
```
$ java dice.java 20
You rolled a 10
$ java dice.java 6
You rolled a 2
$ java dice.java 100
You rolled a 44
```
The constructor has accepted your input, so when the class instance is created, it is created with the **sides** variable set to whatever number the user dictates.
Constructors are powerful components of programming. Practice using them to unlock the full potential of Java.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/what-java-constructor
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth/users/ashleykoree
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/17/4/pyqt-versus-wxpython
[4]: https://opensource.com/resources/java
[5]: https://openjdk.java.net/install/index.html
[6]: https://opensource.com/article/19/5/free-rpg-day
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: https://openjdk.java.net/
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer

View File

@ -1,267 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure PostgreSQL on Ubuntu)
[#]: via: (https://itsfoss.com/install-postgresql-ubuntu/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Install and Configure PostgreSQL on Ubuntu
======
_**In this tutorial, you'll learn how to install and use the open source database PostgreSQL on Ubuntu Linux.**_
[PostgreSQL][1] (or Postgres) is a powerful, free and open-source relational database management system ([RDBMS][2]) that has a strong reputation for reliability, feature robustness, and performance. It is designed to handle various tasks, of any size. It is cross-platform, and the default database for [macOS Server][3].
PostgreSQL might just be the right tool for you if you're a fan of a simple-to-use SQL database manager. It supports SQL standards and offers additional features, and it is also heavily extensible: users can add data types, functions, and much more.
Earlier I discussed [installing MySQL on Ubuntu][4]. In this article, I'll show you how to install and configure PostgreSQL, so that you are ready to use it to suit whatever your needs may be.
![][5]
### Installing PostgreSQL on Ubuntu
PostgreSQL is available in Ubuntu's main repository. However, like many other development tools, it may not be the latest version.
First check the PostgreSQL version available in [Ubuntu repositories][6] using this [apt command][7] in the terminal:
```
apt show postgresql
```
On my Ubuntu 18.04 system, it showed that the available version of PostgreSQL is version 10 (10+190 means version 10), whereas PostgreSQL version 11 has already been released.
```
Package: postgresql
Version: 10+190
Priority: optional
Section: database
Source: postgresql-common (190)
Origin: Ubuntu
```
Based on this information, you can make up your mind whether you want to install the version available from Ubuntu or get the latest released version of PostgreSQL.
I'll show you both methods.
#### Method 1: Install PostgreSQL from Ubuntu repositories
In the terminal, use the following command to install PostgreSQL:
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
Enter your password when asked, and it should be installed in a few seconds or minutes, depending on your internet speed. Speaking of that, feel free to check out various [network bandwidth monitoring tools for Ubuntu][8].
What is postgresql-contrib?
The postgresql-contrib (or contrib) package consists of some additional utilities and functionality that are not part of the core PostgreSQL package. In most cases, it's good to have the contrib package installed along with the PostgreSQL core.
#### Method 2: Installing the latest version 11 of PostgreSQL in Ubuntu
To install PostgreSQL 11, you need to add the official PostgreSQL repository in your sources.list, add its certificate and then install it from there.
Don't worry, it's not complicated. Just follow these steps.
Add the GPG key first:
```
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
```
Now add the repository with the command below. If you are using Linux Mint, you'll have to manually replace `lsb_release -cs` with the Ubuntu version your Mint release is based on.
```
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
```
Everything is ready now. Install PostgreSQL with the following commands:
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
PostgreSQL GUI application
You may also install a GUI application (pgAdmin) for managing PostgreSQL databases:
```
sudo apt install pgadmin4
```
### Configuring PostgreSQL
You can check if **PostgreSQL** is running by executing:
```
service postgresql status
```
Via the **service** command, you can also **start**, **stop** or **restart** **postgresql**. Typing **service postgresql** and pressing **Enter** should output all options. Now, onto the users.
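On distributions using systemd (which includes recent Ubuntu releases), the equivalent **systemctl** commands work as well:

```
# check, then restart, the PostgreSQL service via systemd
sudo systemctl status postgresql
sudo systemctl restart postgresql
```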
By default, PostgreSQL creates a special user postgres that has all rights. To actually use PostgreSQL, you must first log in to that account:
```
sudo su postgres
```
Your prompt should change to something similar to:
```
postgres@ubuntu:/home/ubuntu$
```
Now, run the **PostgreSQL Shell** with the utility **psql**:
```
psql
```
You should be prompted with:
```
postgres=#
```
You can type in **\q** to **quit** and **\?** for **help**.
To see all existing databases, enter:
```
\l
```
The output will look similar to this (Hit the key **q** to exit this view):
![PostgreSQL Tables][10]
With **\du** you can display the **PostgreSQL users**:
![PostgreSQLUsers][11]
You can change the password of any user (including **postgres**) with:
```
ALTER USER postgres WITH PASSWORD 'my_password';
```
**Note:** _Replace **postgres** with the name of the user and **my_password** with the desired password._ Also, don't forget the **;** (semicolon) after every statement.
It is recommended that you create another user, since it is bad practice to use the default **postgres** user. To do so, use the command:
```
CREATE USER my_user WITH PASSWORD 'my_password';
```
If you run **\du**, you will see, however, that **my_user** has no attributes yet. Let's add **Superuser** to it:
```
ALTER USER my_user WITH SUPERUSER;
```
You can **remove users** with:
```
DROP USER my_user;
```
To **log in** as another user, quit the prompt (**\q**) and then use the command:
```
psql -U my_user
```
You can connect directly to a database with the **-d** flag:
```
psql -U my_user -d my_db
```
It helps to name your PostgreSQL user the same as an existing system user, because the default peer authentication matches the two. For example, my system user is **ubuntu**. To log in, from the terminal I use:
```
psql -U ubuntu -d postgres
```
**Note:** _You must specify a database (by default it will try connecting you to the database named the same as the user you are logged in as)._
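PostgreSQL also ships small command-line wrappers for these administrative tasks, which save a trip into **psql**. A sketch; the role and database names are examples:

```
# create a role matching your system user, then a database owned by it, then connect
sudo -u postgres createuser --interactive my_user
sudo -u postgres createdb -O my_user my_db
psql -U my_user -d my_db
```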
If you get the error:
```
psql: FATAL: Peer authentication failed for user "my_user"
```
Make sure you are logging in as the correct user, and edit **/etc/postgresql/11/main/pg_hba.conf** with administrator rights:
```
sudo vim /etc/postgresql/11/main/pg_hba.conf
```
**Note:** _Replace **11** with your version (e.g. **10**)._
Here, replace the line:
```
local all postgres peer
```
With:
```
local all postgres md5
```
Then restart **PostgreSQL**:
```
sudo service postgresql restart
```
Using **PostgreSQL** is the same as using any other **SQL**-type database. I won't go into the specific commands, since this article is about getting you started with a working setup. However, here is a [very useful gist][12] to reference! Also, the man page (**man psql**) and the [documentation][13] are very helpful.
**Wrapping Up**
Reading this article has hopefully guided you through the process of installing and preparing PostgreSQL on an Ubuntu system. If you are new to SQL, you should read this article to learn the [basic SQL commands][15]:
[Basic SQL Commands][15]
If you have any issues or questions, please feel free to ask in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-postgresql-ubuntu/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://www.postgresql.org/
[2]: https://www.codecademy.com/articles/what-is-rdbms-sql
[3]: https://www.apple.com/in/macos/server/
[4]: https://itsfoss.com/install-mysql-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-postgresql-ubuntu.png?resize=800%2C450&ssl=1
[6]: https://itsfoss.com/ubuntu-repositories/
[7]: https://itsfoss.com/apt-command-guide/
[8]: https://itsfoss.com/network-speed-monitor-linux/
[9]: https://itsfoss.com/fix-gvfsd-smb-high-cpu-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_tables.png?fit=800%2C303&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_users.png?fit=800%2C244&ssl=1
[12]: https://gist.github.com/Kartones/dd3ff5ec5ea238d4c546
[13]: https://www.postgresql.org/docs/manuals/
[14]: https://itsfoss.com/sync-any-folder-with-dropbox/
[15]: https://itsfoss.com/basic-sql-commands/

View File

@ -1,191 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install Linux on Intel NUC)
[#]: via: (https://itsfoss.com/install-linux-on-intel-nuc/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Install Linux on Intel NUC
======
Last week, I got myself an [Intel NUC][1]. Though it is a tiny device, it is equivalent to a full-fledged desktop CPU. Most of the [Linux-based mini PCs][2] are actually built on top of Intel NUC devices.
I got the barebone NUC with an 8th-generation Core i3 processor. Barebone means that the device has no RAM, no hard disk and, obviously, no operating system. I added [8GB RAM from Crucial][3] (around $33) and a [240 GB Western Digital SSD][4] (around $45).
Altogether, I had a desktop PC ready for under $400. I already have a screen and a keyboard-mouse pair, so I am not counting them in the expense.
![A brand new Intel NUC NUC8i3BEH at my desk with Raspberry Pi 4 lurking behind][5]
The main reason why I got the Intel NUC is that I want to test and review various Linux distributions on real hardware. I have a [Raspberry Pi 4][6] which works as an entry-level desktop, but it's an [ARM][7] device, and thus there are only a handful of Linux distributions available for the Raspberry Pi.
_The Amazon links in the article are affiliate links. Please read our [affiliate policy][8]._
### Installing Linux on Intel NUC
I started with the Ubuntu 18.04 LTS version because that's what I had available at the moment. You can follow this tutorial for other distributions as well. The steps should remain the same at least until the partition step, which is the most important one in the entire procedure.
#### Step 1: Create a live Linux USB
Download Ubuntu 18.04 from its website. Use another computer to [create a live Ubuntu USB][9]. You can use a tool like [Rufus][10] or [Etcher][11]. On Ubuntu, you can use the default Startup Disk Creator tool.
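If you prefer the terminal over a GUI tool, `dd` can write the image too. A hedged sketch: **/dev/sdX** is a placeholder for your USB stick and the ISO filename is an example, so verify both with care before running, because `dd` overwrites the target device:

```
# identify the USB device first, then write the ISO image to it (destroys existing data on the stick)
lsblk
sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```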
#### Step 2: Make sure the boot order is correct
Insert your USB drive and power on the NUC. As soon as you see "Intel NUC" written on the screen, press F2 to go to the BIOS settings.
![BIOS Settings in Intel NUC][12]
In here, just make sure that boot order is set to boot from USB first. If not, change the boot order.
If you had to make any changes, press F10 to save and exit. Else, use Esc to exit the BIOS.
#### Step 3: Making the correct partition to install Linux
Now when it boots again, you'll see the familiar GRUB screen that allows you to try Ubuntu live or install it. Choose to install it.
The first few installation steps are simple. You choose the keyboard layout, the network connection (if any) and work through other simple steps.
![Choose the keyboard layout while installing Ubuntu Linux][14]
You may go with the normal installation that has a handful of useful applications installed by default.
![][15]
The interesting screen comes next. You have two options:
* **Erase disk and install Ubuntu**: Simplest option that will install Ubuntu on the entire disk. If you want to use only one operating system on the Intel NUC, choose this option and Ubuntu will take care of the rest.
* **Something Else**: This is the advanced option if you want to take control of things. In my case, I want to install multiple Linux distributions on the same SSD, so I am opting for this advanced option.
![][16]
_**If you opt for “Erase disk and install Ubuntu”, click continue and go to the step 4.**_
If you are going with the advanced option, follow the rest of the step 3.
Select the SSD disk and click on New Partition Table.
![][17]
It will show you a warning. Just hit Continue.
![][18]
Now you'll see free space the size of your SSD disk. My idea is to create an EFI System Partition for the EFI boot loader, a root partition and a home partition. I am not creating a [swap partition][19]: Ubuntu creates a swap file on its own, and if need be, I can extend the swap by creating additional swap files.
I'll leave almost 200 GB of free space on the disk so that I can install other Linux distributions here. You can utilize all of it for your home partition. Keeping separate root and home partitions helps when you want to reinstall the system while keeping your data.
Select the free space and click on the plus sign to add a partition.
![][20]
Usually 100 MB is sufficient for the EFI partition, but some distributions may need more space, so I am going with 500 MB.
![][21]
Next, I am using 20 GB of root space. If you are going to use only one distribution, you can increase it to 40 GB easily.
Root is where the system files are kept. Your program cache and installed applications keep some files under the root directory. I recommend [reading about the Linux filesystem hierarchy][22] to get more knowledge on this topic.
Provide the size, choose Ext4 file system and use / as the mount point.
![][24]
Next is creating the home partition. Again, if you want to use only one Linux distribution, use the remaining free space. Otherwise, choose a suitable disk space for the home partition.
Home is where your personal documents, pictures, music, download and other files are stored.
![][25]
Now that you have created EFI, root and home partitions, you are ready to install Ubuntu Linux. Hit the Install Now button.
![][26]
It will give you a warning about the new changes being written to the disk. Hit continue.
![][27]
#### Step 4: Installing Ubuntu Linux
Things are pretty straightforward from here onward. Choose your time zone right now or change it later.
![][28]
On the next screen, choose a username, hostname and password.
![][29]
It's a wait-and-watch game for the next 7-8 minutes.
![][30]
Once the installation is over, you'll be prompted for a restart.
![][31]
When you restart, you should remove the live USB; otherwise, you'll boot into the installation media again.
That's all you need to do to install Linux on an Intel NUC device. Quite frankly, you can use the same procedure on any other system.
**Intel NUC and Linux: how do you use it?**
I am loving the Intel NUC. It doesn't take up space on the desk, and yet it is powerful enough to replace the regular bulky desktop CPU. You can easily upgrade it to 32GB of RAM, and you can install two SSDs on it. Altogether, it provides some scope for configuration and upgrades.
If you are looking to buy a desktop computer, I highly recommend [Intel NUC][1] mini PC. If you are not comfortable installing the OS on your own, you can [buy one of the Linux-based mini PCs][2].
Do you own an Intel NUC? How's your experience with it? Do you have any tips to share with us? Do leave a comment below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-linux-on-intel-nuc/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (Intel NUC)
[2]: https://itsfoss.com/linux-based-mini-pc/
[3]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB RAM from Crucial)
[4]: https://www.amazon.com/Western-Digital-240GB-Internal-WDS240G1G0B/dp/B01M9B2VB7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M9B2VB7 (240 GB Western Digital SSD)
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/intel-nuc.jpg?resize=800%2C600&ssl=1
[6]: https://itsfoss.com/raspberry-pi-4/
[7]: https://en.wikipedia.org/wiki/ARM_architecture
[8]: https://itsfoss.com/affiliate-policy/
[9]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/
[10]: https://rufus.ie/
[11]: https://www.balena.io/etcher/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/boot-screen-nuc.jpg?ssl=1
[13]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-1_tutorial.jpg?ssl=1
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-2_tutorial.jpg?ssl=1
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-3_tutorial.jpg?ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-4_tutorial.jpg?ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-5_tutorial.jpg?ssl=1
[19]: https://itsfoss.com/swap-size/
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-6_tutorial.jpg?ssl=1
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-7_tutorial.jpg?ssl=1
[22]: https://linuxhandbook.com/linux-directory-structure/
[23]: https://itsfoss.com/share-folders-local-network-ubuntu-windows/
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-8_tutorial.jpg?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-9_tutorial.jpg?ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-10_tutorial.jpg?ssl=1
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-11_tutorial.jpg?ssl=1
[28]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-12_tutorial.jpg?ssl=1
[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-13_tutorial.jpg?ssl=1
[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-14_tutorial.jpg?ssl=1
[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-15_tutorial.jpg?ssl=1

View File

@ -1,222 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
[#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots
======
Within a year of releasing **Manjaro 18.0** (**Illyria**), the team has come out with its next big release, **Manjaro 18.1**, codenamed “**Juhraya**”. The team has also published an official announcement saying that Juhraya comes packed with a lot of improvements and bug fixes.
### New Features in Manjaro 18.1
Some of the new features and enhancements in Manjaro 18.1 are listed below:
* Option to choose between LibreOffice or Free Office
* New Matcha theme for Xfce edition
* Redesigned messaging system in KDE edition
* Support for Snap and FlatPak packages using “bhau” tool
### Minimum System Requirements for Manjaro 18.1
* 1 GB RAM
* 1 GHz processor
* Around 30 GB Hard disk space
* Internet Connection
* Bootable Media (USB/DVD)
### Step by Step Guide to Install Manjaro 18.1 (KDE Edition)
To start installing Manjaro 18.1 (KDE Edition) on your system, please follow the steps outlined below:
### Step 1) Download Manjaro 18.1 ISO
Before installing, you need to download the latest copy of Manjaro 18.1 from its official download page, located **[here][1]**. Since this guide covers the KDE edition, we chose the KDE version, but the installation process is the same for all desktop environments, including the Xfce, KDE and Gnome editions.
### Step 2) Create a USB Bootable Disk
Once you have successfully downloaded the ISO file from the Manjaro downloads page, it is time to create a bootable USB disk: write the downloaded ISO image to a USB drive with an imaging tool (such as Etcher). Make sure to change your boot settings to boot from USB, and restart your system.
### Step 3) Manjaro Live Installation Environment
When the system restarts, it will automatically detect the USB drive and boot into the Manjaro live installation screen.
[![Boot-Manjaro-18-1-kde-installation][2]][3]
Next, use the arrow keys to choose “**Boot: Manjaro x86_64 kde**” and hit Enter to launch the Manjaro installer.
### Step 4) Choose Launch Installer
Next, the Manjaro installer will be launched, and if you are connected to the internet, Manjaro will automatically detect your location and time zone. Click “**Launch Installer**” to start installing Manjaro 18.1 KDE edition on your system.
[![Choose-Launch-Installaer-Manjaro18-1-kde][2]][4]
### Step 5) Choose Your Language
Next, the installer will ask you to choose your preferred language.
[![Choose-Language-Manjaro18-1-Kde-Installation][2]][5]
Select your desired language and click “Next”
### Step 6) Choose Your time zone and region
In the next screen, select your desired time zone and region and click “Next” to continue
[![Select-Location-During-Manjaro18-1-KDE-Installation][2]][6]
### Step 7) Choose Keyboard layout
In the next screen, select your preferred keyboard layout and click “Next” to continue.
[![Select-Keyboard-Layout-Manjaro18-1-kde-installation][2]][7]
### Step 8) Choose Partition Type
This is a very critical step in the installation process. It will allow you to choose between:
* Erase Disk
* Manual Partitioning
* Install Alongside
* Replace a Partition
If you are installing Manjaro 18.1 in a VM (virtual machine), you won't be able to see the last two options.
If you are new to Manjaro Linux, I suggest you go with the first option (**Erase Disk**); it will automatically create the required partitions for you. If you want to create custom partitions, choose the second option, “**Manual Partitioning**”; as its name suggests, it allows you to create your own custom partitions.
In this tutorial, I will be creating custom partitions by selecting the “Manual Partitioning” option.
[![Manual-Partition-Manjaro18-1-KDE][2]][8]
Choose the second option and click “Next” to continue.
As you can see, I have a 40 GB hard disk, so I will create the following partitions on it:
* /boot: 2 GB (ext4 file system)
* /: 10 GB (ext4 file system)
* /home: 22 GB (ext4 file system)
* /opt: 4 GB (ext4 file system)
* Swap: 2 GB
When we click on Next in the above window, we get the following screen; choose to create a **new partition table**.
[![Create-Partition-Table-Manjaro18-1-Installation][2]][9]
Click on OK.
Now choose the free space and then click on **Create** to set up the first partition, /boot, of size 2 GB.
[![boot-partition-manjaro-18-1-installation][2]][10]
Click on OK to proceed further. In the next window, again choose the free space and then click on **Create** to set up the second partition, /, of size 10 GB.
[![slash-root-partition-manjaro18-1-installation][2]][11]
Similarly, create the next partition, /home, of size 22 GB.
[![home-partition-manjaro18-1-installation][2]][12]
So far we have created three primary partitions; now create the next partition as an extended partition.
[![Extended-Partition-Manjaro18-1-installation][2]][13]
Click on OK to proceed further.
Create the /opt and swap partitions, of size 4 GB and 2 GB respectively, as logical partitions.
[![opt-partition-manjaro-18-1-installation][2]][14]
[![swap-partition-manjaro18-1-installation][2]][15]
Once you are done creating all the partitions, click on Next.
[![choose-next-after-partition-creation][2]][16]
### Step 9) Provide User Information
On the next screen, you need to provide user information, including your name, username, password, computer name, etc.
[![User-creation-details-manjaro18-1-installation][2]][17]
Click “Next” to continue with the installation after providing all the information.
On the next screen, you will be prompted to choose an office suite, so make a choice that suits your installation.
[![Office-Suite-Selection-Manjaro18-1][2]][18]
Click on Next to proceed further.
### Step 10) Summary Information
Before the actual installation starts, the installer will show you all the details you've chosen, including the language, time zone, keyboard layout and partitioning information. Click “**Install**” to proceed with the installation process.
[![Summary-manjaro18-1-installation][2]][19]
### Step 11) Install Manjaro 18.1 KDE Edition
Now the actual installation process begins, and once it completes, restart the system to log in to Manjaro 18.1 KDE edition.
[![Manjaro18-1-Installation-Progress][2]][20]
[![Restart-Manjaro-18-1-after-installation][2]][21]
### Step 12) Log in after successful installation
After the restart, we will get the following login screen; use the user credentials that we created during the installation.
[![Login-screen-after-manjaro-18-1-installation][2]][22]
Click on Login,
[![KDE-Desktop-Screen-Manjaro-18-1][2]][23]
That's it! You've successfully installed Manjaro 18.1 KDE edition on your system; explore all its exciting features. Please post your feedback and suggestions in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://manjaro.org/download/official/kde/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Boot-Manjaro-18-1-kde-installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Launch-Installaer-Manjaro18-1-kde.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Language-Manjaro18-1-Kde-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Location-During-Manjaro18-1-KDE-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Keyboard-Layout-Manjaro18-1-kde-installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manual-Partition-Manjaro18-1-KDE.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Create-Partition-Table-Manjaro18-1-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-manjaro-18-1-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-manjaro18-1-installation.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-manjaro18-1-installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Extended-Partition-Manjaro18-1-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/opt-partition-manjaro-18-1-installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/swap-partition-manjaro18-1-installation.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/choose-next-after-partition-creation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/User-creation-details-manjaro18-1-installation.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Office-Suite-Selection-Manjaro18-1.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Summary-manjaro18-1-installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manjaro18-1-Installation-Progress.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Restart-Manjaro-18-1-after-installation.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-after-manjaro-18-1-installation.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/KDE-Desktop-Screen-Manjaro-18-1.jpg

View File

@ -1,118 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Essential Accessories for Intel NUC Mini PC)
[#]: via: (https://itsfoss.com/intel-nuc-essential-accessories/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Essential Accessories for Intel NUC Mini PC
======
I bought a [barebone Intel NUC mini PC][1] a few weeks back. I [installed Linux on it][2] and I am totally enjoying it. This tiny fanless gadget replaces that bulky CPU of the desktop computer.
Intel NUC mostly comes in barebone format, which means it doesn't have any RAM, hard disk and, obviously, no operating system. Many [Linux-based mini PCs][3] customize the Intel NUC and sell it to end users by adding disk, RAM and an operating system.
Needless to say, it doesn't come with a keyboard, mouse or screen, just like most other desktop computers out there.
[Intel NUC][4] is an excellent device, and if you are looking to buy a desktop computer, I highly recommend it. And if you are considering getting an Intel NUC, here are a few accessories you should have in order to start using the NUC as your computer.
### Essential Intel NUC accessories
![][5]
_The Amazon links in the article are affiliate links. Please read our [affiliate policy][6]._
#### The peripheral devices: monitor, keyboard and mouse
This is a no-brainer. You need a screen, keyboard and mouse to use a computer. You'll need a monitor with an HDMI connection and a USB or wireless keyboard-mouse pair. If you have these things already, you are good to go.
If you are looking for recommendations, I suggest an LG IPS LED monitor. I have two of them in the 22-inch model, and I am happy with the sharp visuals they provide.
These monitors have a simple stand that doesn't move. If you want a monitor that can move up and down and rotate into portrait mode, try [HP EliteDisplay monitors][7].
![HP EliteDisplay Monitor][8]
I connect all three monitors at the same time in a multi-monitor setup. One monitor connects to the built-in HDMI port. Two monitors connect to the Thunderbolt port via a [Thunderbolt-to-HDMI splitter from Club 3D][9].
You may also opt for an ultrawide monitor. I don't have personal experience with them.
#### A/C power cord
This one may surprise you: when you get your NUC, you'll notice that though it has a power adapter, it doesn't come with the plug-ended cord.
![][10]
Since different countries have different plug points, Intel decided to simply drop it from the NUC kit. I am using the power cord of an old dead laptop, but if you don't have one, chances are you'll have to get one for yourself.
#### RAM
Intel NUC has two RAM slots, and it can support up to 32 GB of RAM. Since I have the Core i3 processor, I opted for [8GB DDR4 RAM from Crucial][11] that costs around $33.
![][12]
8 GB of RAM is fine for most cases, but if you have a Core i7 processor, you may opt for [16 GB RAM][13] that costs almost $67. You can double that up and get the maximum 32 GB. The choice is all yours.
#### Hard disk [Important]
Intel NUC supports both a 2.5″ drive and an M.2 SSD, and you can use both at the same time to get more storage.
The 2.5-inch slot can hold both an SSD and an HDD. I strongly recommend opting for an SSD because it's way faster than an HDD. A [480 GB 2.5″ SSD][14] costs $60, which is a fair price in my opinion.
![][15]
The 2.5″ drive is limited to the standard SATA interface speed of 6 Gb/sec. The M.2 slot can be faster, depending on whether you choose an NVMe SSD. NVMe (non-volatile memory express) SSDs are up to 4 times faster than normal SSDs (also called SATA SSDs), but they may also be slightly more expensive than SATA M.2 SSDs.
While buying an M.2 SSD, check the product image. It should be mentioned on the image of the disk itself whether it's an NVMe or a SATA SSD. [Samsung EVO is a cost-effective NVMe M.2 SSD][16] that you may consider.
![Make sure that your are buying the faster NVMe M2 SSD][17]
A SATA SSD has the same speed in both the M.2 slot and the 2.5″ slot. This is why, if you don't want to opt for the expensive NVMe SSD, I suggest you go for the 2.5″ SATA SSD and keep the M.2 slot free for future upgrades.
#### Other supporting accessories
You'll need an HDMI cable to connect your monitor. If you are buying a new monitor, you should usually get a cable with it.
You may need a screwdriver if you are going to use the M.2 slot. Intel NUC is an excellent device, and you can unscrew the bottom panel just by rotating the four pods by hand. You'll have to open the device in order to place the RAM and disk.
![Intel NUC with Security Cable | Image Credit Intel][18]
The NUC also has an anti-theft lock hole that you can use with security cables. Keeping computers secured with cables is a recommended security practice in business environments. Investing a [few dollars in a security cable][19] could save you hundreds of dollars.
**What accessories do you use?**
Those are the Intel NUC accessories I use and suggest. How about you? If you own a NUC, which accessories do you use and recommend to other NUC users?
--------------------------------------------------------------------------------
via: https://itsfoss.com/intel-nuc-essential-accessories/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (barebone Intel NUC mini PC)
[2]: https://itsfoss.com/install-linux-on-intel-nuc/
[3]: https://itsfoss.com/linux-based-mini-pc/
[4]: https://www.intel.in/content/www/in/en/products/boards-kits/nuc.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-accessories.png?ssl=1
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://www.amazon.com/HP-EliteDisplay-21-5-Inch-1FH45AA-ABA/dp/B075L4VKQF?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B075L4VKQF (HP EliteDisplay monitors)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/hp-elitedisplay-monitor.png?ssl=1
[9]: https://www.amazon.com/Club3D-CSV-1546-USB-C-Multi-Monitor-Splitter/dp/B06Y2FX13G?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B06Y2FX13G (thunderbolt to HDMI splitter from Club 3D)
[10]: https://itsfoss.com/wp-content/uploads/2019/09/ac-power-cord-3-pongs.webp
[11]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB DDR4 RAM from Crucial)
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/crucial-ram.jpg?ssl=1
[13]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B019FRBHZ0?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B019FRBHZ0 (16 GB RAM)
[14]: https://www.amazon.com/Green-480GB-Internal-SSD-WDS480G2G0A/dp/B01M3POPK3?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M3POPK3 (480 GB 2.5)
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/wd-green-ssd.png?ssl=1
[16]: https://www.amazon.com/Samsung-970-EVO-500GB-MZ-V7E500BW/dp/B07BN4NJ2J?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BN4NJ2J (Samsung EVO is a cost effective NVMe M.2 SSD)
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/samsung-evo-nvme.jpg?ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-security-cable.jpg?ssl=1
[19]: https://www.amazon.com/Kensington-Combination-Laptops-Devices-K64673AM/dp/B005J7Y99W?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B005J7Y99W (few dollars in the security cable)

View File

@ -1,236 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Bash history shortcuts you will actually use)
[#]: via: (https://opensource.com/article/19/10/bash-history-shortcuts)
[#]: author: (Ian Miell https://opensource.com/users/ianmiell)
7 Bash history shortcuts you will actually use
======
Save time on the command line with these essential Bash shortcuts.
![Command line prompt][1]
Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut once, then glaze over as I tried out all the possibilities. Then I'd move on to my working day and completely forget them, retaining only the well-known [**!!** trick][2] I learned when I first started using Bash.
So most of them were never committed to memory.
This article outlines the shortcuts I _actually use_ every day. It is based on some of the contents of my book, [_Learn Bash the hard way_][3] (you can read a [preview][4] of it to learn more).
When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required, but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to get them under your fingers, as the time you save will be significant in the long run.
### 1. The "last argument" one: !$
If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command into your line.
Consider this scenario:
```
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
```
Ach, I put the **wrongfile** filename in my command. I should have put **rightfile** instead.
You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:
```
$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place
```
and the command will work.
There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last command is one I use the most.
### 2. The "_n_th argument" one: !:2
Ever done anything like this?
```
$ tar -cvf afolder afolder.tar
tar: failed to open
```
Like many others, I get the arguments to **tar** (and **ln**) wrong more often than I would like to admit.
[![xkcd comic][5]][6]
When you mix up arguments like that, you can run:
```
$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder
```
and your reputation will be saved.
The last command's items are zero-indexed and can be substituted in with the number after the **!:**.
Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.
### 3. The "all the arguments" one: !:1-$
Imagine I run a command like:
```
$ grep '(ping|pong)' afile
```
The arguments are correct; however, I want to match **ping** or **pong** in a file, but I used **grep** rather than **egrep**.
I start typing **egrep**, but I don't want to retype the other arguments. So I can use the **!:1-$** shortcut to ask for all the arguments to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the **$** sign).
```
$ egrep !:1-$
egrep '(ping|pong)' afile
ping
```
You don't need to pick **1-$**; you can pick a subset like **1-2** or **3-9** (if you had that many arguments in the previous command).
### 4. The "last but _n_" one: !-2:$
The shortcuts above are great when I know immediately how to correct my last command, but often I run commands _after_ the original one, which means that the last command is no longer the one I want to reference.
For example, using the **mv** example from before, if I follow up my mistake with an **ls** check of the folder's contents:
```
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile
```
I can no longer use the **!$** shortcut.
In these cases, I can insert a **-_n_:** (where _**n**_ is the number of commands to go back in the history) after the **!** to grab the last argument from an older command:
```
$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place
```
Again, once you learn it, you may be surprised at how often you need it.
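For example, reaching back three commands in a contrived history:
```
$ touch scratchfile
$ pwd
$ ls
$ rm !-3:$
rm scratchfile
```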
### 5\. The "get me the folder" one: !$:h
This one looks less promising on the face of it, but I use it dozens of times daily.
Imagine I run a command like this:
```
$ tar -cvf system.tar /etc/system
tar: /etc/system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.
```
The first thing I might want to do is go to the **/etc** folder to see what's in there and work out what I've done wrong.
I can do this at a stroke with:
```
$ cd !$:h
cd /etc
```
This one says: "Get the last argument to the last command (**/etc/system**) and take off its last filename component, leaving only the **/etc**."
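It works anywhere the last argument was a path. For example (assuming the file exists on your system):
```
$ ls -l /var/log/syslog
$ cd !$:h
cd /var/log
```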
### 6\. The "current line" one: !#:1
For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning it. I wish I'd done so a long time ago. I most commonly use it to make backup files:
```
$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak
```
but once it's under your fingers, it can be a very quick alternative to typing the same long path twice.
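It is just as handy for renames; here is a sketch with a made-up path:
```
$ mv /etc/myapp/config.yaml !#:1.old
mv /etc/myapp/config.yaml /etc/myapp/config.yaml.old
```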
### 7\. The "search and replace" one: !!:gs
This one searches across the referenced command and replaces whatever is between the first and second **/** characters with whatever is between the second and third.
Say I want to tell the world that my **s** key does not work and outputs **f** instead:
```
$ echo my f key doef not work
my f key doef not work
```
Then I realize that I was just hitting the **f** key by accident. To replace all the **f**s with **s**es, I can type:
```
$ !!:gs/f /s /
echo my s key does not work
my s key does not work
```
It doesn't work only on single characters; I can replace words or sentences, too:
```
$ !!:gs/does/did/
echo my s key did not work
my s key did not work
```
### Test them out
Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?
```
$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:h
$ touch !$!-3:$ !! !$.txt
$ cat !:1-$
```
### Conclusion
Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks to learn, these are my favorites that I frequently put to use.
If you want to dive even deeper into all that Bash can teach you, pick up my book, [_Learn Bash the hard way_][3] or check out my online course, [Master the Bash shell][7].
* * *
_This article was originally posted on Ian's blog, [Zwischenzugs.com][8], and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/bash-history-shortcuts
作者:[Ian Miell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ianmiell
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://opensource.com/article/18/5/bash-tricks
[3]: https://leanpub.com/learnbashthehardway
[4]: https://leanpub.com/learnbashthehardway/read_sample
[5]: https://opensource.com/sites/default/files/uploads/tar_2x.png (xkcd comic)
[6]: https://xkcd.com/1168/
[7]: https://www.educative.io/courses/master-the-bash-shell
[8]: https://zwischenzugs.com/2019/08/25/seven-god-like-bash-history-shortcuts-you-will-actually-use/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,192 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure VNC Server on Centos 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Install and Configure VNC Server on Centos 8 / RHEL 8
======
A **VNC** (Virtual Network Computing) server is a GUI-based desktop sharing platform that allows you to access remote desktop machines. In **CentOS 8** and **RHEL 8** systems, a VNC server is not installed by default and needs to be installed manually. In this article, we'll look at how to install VNC Server on CentOS 8 / RHEL 8 systems with a simple step-by-step installation guide.
### Prerequisites to Install VNC Server on Centos 8 / RHEL 8
To install VNC Server in your system, make sure you have the following requirements readily available on your system:
* CentOS 8 / RHEL 8
* GNOME Desktop Environment
* Root access
* DNF / YUM Package repositories
### Step by Step Guide to Install VNC Server on Centos 8 / RHEL 8
### Step 1) Install GNOME Desktop environment
Before installing VNC Server on your CentOS 8 / RHEL 8 system, make sure you have a desktop environment (DE) installed. If the GNOME desktop is already installed, or you installed your server with the GUI option, you can skip this step.
In CentOS 8 / RHEL 8, GNOME is the default desktop environment. If you don't have it on your system, install it using one of the following commands:
```
[root@linuxtechi ~]# dnf groupinstall "workstation"
Or
[root@linuxtechi ~]# dnf groupinstall "Server with GUI
```
Once the above packages are installed successfully, run the following command to enable graphical mode:
```
[root@linuxtechi ~]# systemctl set-default graphical
```
Now reboot the system so that you get the GNOME login screen.
```
[root@linuxtechi ~]# reboot
```
Once the system has rebooted successfully, uncomment the line “**WaylandEnable=false**” in the file “**/etc/gdm/custom.conf**” so that remote desktop session requests via VNC are handled by Xorg of the GNOME desktop in place of the Wayland display server.
**Note:** Wayland is the default display server protocol used by GDM in GNOME, and it is not configured to handle remote rendering APIs like X.org.
### Step 2) Install VNC Server (tigervnc-server)
Next we'll install the VNC server. There are a lot of VNC servers available, and for installation purposes, we'll be installing **TigerVNC Server**. It is one of the most popular VNC servers: a high-performance, platform-independent implementation that allows users to interact with remote machines easily.
Now install TigerVNC Server using the following command:
```
[root@linuxtechi ~]# dnf install tigervnc-server tigervnc-server-module -y
```
### Step 3) Set VNC Password for Local User
Let's assume we want the pkumar user to use VNC for remote desktop sessions. Switch to that user and set its VNC password using the vncpasswd command:
```
[root@linuxtechi ~]# su - pkumar
[pkumar@linuxtechi ~]$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
A view-only password is not used
[pkumar@linuxtechi ~]$
[pkumar@linuxtechi ~]$ exit
logout
[root@linuxtechi ~]#
```
### Step 4) Setup VNC Server Configuration File
The next step is to configure the VNC server configuration file. Create a file "**/etc/systemd/system/vncserver@.service**" with the following content so that the tigervnc-server service is started for the local user "pkumar".
```
[root@linuxtechi ~]# vim /etc/systemd/system/vncserver@.service
[Unit]
Description=Remote Desktop VNC Service
After=syslog.target network.target
[Service]
Type=forking
WorkingDirectory=/home/pkumar
User=pkumar
Group=pkumar
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver -autokill %i
ExecStop=/usr/bin/vncserver -kill %i
[Install]
WantedBy=multi-user.target
```
Save and exit the file.
**Note:** Replace the user name in the above file with one that suits your setup.
By default, the VNC server listens on TCP port 5900+n, where n is the display number; if the display number is "1", then the VNC server will listen for its requests on TCP port 5901.
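For example, if you later created a similar unit for a second user on display "2" (a hypothetical setup), it would listen on TCP port 5902 and be started with:
```
[root@linuxtechi ~]# systemctl start vncserver@:2.service
```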
### Step 5) Start VNC Service and allow port in firewall
I am using display number "1", so use the following commands to start and enable the VNC service on display number "1":
```
[root@linuxtechi ~]# systemctl daemon-reload
[root@linuxtechi ~]# systemctl start vncserver@:1.service
[root@linuxtechi ~]# systemctl enable vncserver@:1.service
Created symlink /etc/systemd/system/multi-user.target.wants/vncserver@:1.service → /etc/systemd/system/vncserver@.service.
[root@linuxtechi ~]#
```
Use the **netstat** or **ss** command below to verify whether the VNC server has started listening for requests on 5901:
```
[root@linuxtechi ~]# netstat -tunlp | grep 5901
tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN 8169/Xvnc
tcp6 0 0 :::5901 :::* LISTEN 8169/Xvnc
[root@linuxtechi ~]# ss -tunlp | grep -i 5901
tcp LISTEN 0 5 0.0.0.0:5901 0.0.0.0:* users:(("Xvnc",pid=8169,fd=6))
tcp LISTEN 0 5 [::]:5901 [::]:* users:(("Xvnc",pid=8169,fd=7))
[root@linuxtechi ~]#
```
Use the systemctl command below to verify the status of the VNC server:
```
[root@linuxtechi ~]# systemctl status vncserver@:1.service
```
![vncserver-status-centos8-rhel8][2]
The output of the above command confirms that VNC started successfully on TCP port 5901. Use the following commands to allow VNC Server port "5901" through the OS firewall:
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5901/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```
### Step 6) Connect to Remote Desktop Session
Now we are all set to see if the remote desktop connection is working. To access the remote desktop, start the VNC Viewer from your Windows / Linux workstation, enter your **VNC server IP address** and **port number**, and then hit Enter.
[![VNC-Viewer-Windows10][2]][3]
Next, it will ask for your VNC password. Enter the password that you created earlier for your local user and click OK to continue.
[![VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server][2]][4]
Now you can see the remote desktop,
[![VNC-Desktop-Screen-CentOS8][2]][5]
That's it, you've successfully installed VNC Server on CentOS 8 / RHEL 8.
**Conclusion**
Hope the step-by-step guide to install VNC Server on CentOS 8 / RHEL 8 has provided you with all the information to easily set up a VNC server and access remote desktops. Please provide your comments and suggestions in the feedback section below. See you in the next article… Until then, a big THANK YOU and BYE for now!!!
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-configure-vnc-server-centos8-rhel8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Windows10.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Viewer-Connect-CentOS8-RHEL8-VNC-Server.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/10/VNC-Desktop-Screen-CentOS8.jpg

View File

@ -1,100 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Command line quick tips: Locate and process files with find and xargs)
[#]: via: (https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Command line quick tips: Locate and process files with find and xargs
======
![][1]
**find** is one of the more powerful and flexible command-line programs in the daily toolbox. It does what the name suggests: it finds files and directories that match the conditions you specify. And with arguments like **-exec** or **-delete**, you can have find take action on what it… finds.
In this installment of the [Command Line Quick Tips][2] series, youll get an introduction to the **find** command and learn how to use it to process files with built-in commands or the **xargs** command.
### Finding files
At a minimum, **find** takes a path to find things in. For example, this command will find (and print) every file on the system:
```
find /
```
And since everything is a file, you will get a lot of output to sort through. This probably doesnt help you locate what youre looking for. You can change the path argument to narrow things down a bit, but its still not really any more helpful than using the **ls** command. So you need to think about what youre trying to locate.
Perhaps you want to find all the JPEG files in your home directory. The **-name** argument allows you to restrict your results to files that match the given pattern.
```
find ~ -name '*jpg'
```
But wait! What if some of them have an uppercase extension? **-iname** is like **-name**, but it is case-insensitive:
```
find ~ -iname '*jpg'
```
Great! But the 8.3 name scheme is so 1985. Some of the pictures might have a .jpeg extension. Fortunately, we can combine patterns with an “or,” represented by **-o**. The parentheses are escaped so that the shell doesnt try to interpret them instead of the **find** command.
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
```
Were getting closer. But what if you have some directories that end in jpg? (Why you named a directory **bucketofjpg** instead of **pictures** is beyond me.) We can modify our command with the **-type** argument to look only for files:
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
```
Or maybe youd like to find those oddly named directories so you can rename them later:
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
```
It turns out youve been taking a lot of pictures lately, so narrow this down to files that have changed in the last week with **-mtime** (modification time). The **-7** means all files modified in 7 days or fewer.
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
```
### Taking action with xargs
The **xargs** command takes arguments from the standard input stream and executes a command based on them. Sticking with the example in the previous section, lets say you want to copy all of the JPEG files in your home directory that have been modified in the last week to a thumb drive that youll attach to a digital photo display. Assume you already have the thumb drive mounted as _/media/photo_display_.
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -print0 | xargs -0 cp -t /media/photo_display
```
The **find** command is slightly modified from the previous version. The **-print0** command makes a subtle change on how the output is written: instead of using a newline, it adds a null character. The **-0** (zero) option to **xargs** adjusts the parsing to expect this. This is important because otherwise actions on file names that contain spaces, quotes, or other special characters may not work as expected. You should use these options whenever youre taking action on files.
The **-t** argument to **cp** is important because **cp** normally expects the destination to come last. You can do this without **xargs** using **find**s **-exec** command, but the **xargs** method will be faster, especially with a large number of files, because it will run as a single invocation of **cp**.
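For comparison, a rough equivalent using **find**'s own **-exec** action is sketched below; the **+** terminator batches file names into as few **cp** invocations as possible, much like **xargs** does:
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -exec cp -t /media/photo_display {} +
```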
### Find out more
This post only scratches the surface of what **find** can do. **find** supports testing based on permissions, ownership, access time, and much more. It can even compare the files in the search path to other files. Combining tests with Boolean logic can give you incredible flexibility to find exactly the files you’re looking for. With built-in commands or piping to **xargs**, you can quickly process a large set of files.
_Portions of this article were previously published on [Opensource.com][3]._ _Photo by [Warren Wong][4] on [Unsplash][5]._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/command-line-quick-tips-locate-and-process-files-with-find-and-xargs/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg
[2]: https://fedoramagazine.org/?s=command+line+quick+tips
[3]: https://opensource.com/article/18/4/how-use-find-linux
[4]: https://unsplash.com/@wflwong?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[5]: https://unsplash.com/s/photos/search?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,163 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server)
[#]: via: (https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Enable EPEL Repository on CentOS 8 and RHEL 8 Server
======
**EPEL** stands for Extra Packages for Enterprise Linux. It is a free and open source additional package repository available for **CentOS** and **RHEL** servers. As the name suggests, the EPEL repository provides extra packages which are not available in the default package repositories of [CentOS 8][1] and [RHEL 8][2].
In this article we will demonstrate how to enable and use the EPEL repository on CentOS 8 and RHEL 8 servers.
[![EPEL-Repo-CentOS8-RHEL8][3]][4]
### Prerequisites of EPEL Repository
* Minimal CentOS 8 and RHEL 8 Server
* Root or sudo admin privileges
* Internet Connection
### Install and Enable EPEL Repository on RHEL 8.x Server
Log in or SSH to your RHEL 8.x server and execute the following dnf command to install the EPEL rpm package:
```
[root@linuxtechi ~]# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
```
The output of the above command will look something like below:
![dnf-install-epel-repo-rehl8][3]
Once the epel rpm package is installed successfully, it will automatically enable and configure its yum / dnf repository. Run the following dnf or yum command to verify whether the EPEL repository is enabled or not:
```
[root@linuxtechi ~]# dnf repolist epel
Or
[root@linuxtechi ~]# dnf repolist epel -v
```
![epel-repolist-rhel8][3]
### Install and Enable EPEL Repository on CentOS 8.x Server
Log in or SSH to your CentOS 8 server and execute the following dnf or yum command to install the **epel-release** rpm package. On a CentOS 8 server, the epel rpm package is available in the default package repository.
```
[root@linuxtechi ~]# dnf install epel-release -y
Or
[root@linuxtechi ~]# yum install epel-release -y
```
Execute the following commands to verify the status of the epel repository on the CentOS 8 server:
```
[root@linuxtechi ~]# dnf repolist epel
Last metadata expiration check: 0:00:03 ago on Sun 13 Oct 2019 04:18:05 AM BST.
repo id repo name status
*epel Extra Packages for Enterprise Linux 8 - x86_64 1,977
[root@linuxtechi ~]#
[root@linuxtechi ~]# dnf repolist epel -v
……………………
Repo-id : epel
Repo-name : Extra Packages for Enterprise Linux 8 - x86_64
Repo-status : enabled
Repo-revision: 1570844166
Repo-updated : Sat 12 Oct 2019 02:36:32 AM BST
Repo-pkgs : 1,977
Repo-size : 2.1 G
Repo-metalink: https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=stock&content=centos
Updated : Sun 13 Oct 2019 04:28:24 AM BST
Repo-baseurl : rsync://repos.del.extreme-ix.org/epel/8/Everything/x86_64/ (34 more)
Repo-expire : 172,800 second(s) (last: Sun 13 Oct 2019 04:28:24 AM BST)
Repo-filename: /etc/yum.repos.d/epel.repo
Total packages: 1,977
[root@linuxtechi ~]#
```
The output of the above commands confirms that we have successfully enabled the epel repo. Let’s perform some basic operations on the EPEL repo.
### List all available packages from epel repository
If you want to list all the packages from the epel repository, run the following dnf command:
```
[root@linuxtechi ~]# dnf repository-packages epel list
……………
Last metadata expiration check: 0:38:18 ago on Sun 13 Oct 2019 04:28:24 AM BST.
Installed Packages
epel-release.noarch 8-6.el8 @epel
Available Packages
BackupPC.x86_64 4.3.1-2.el8 epel
BackupPC-XS.x86_64 0.59-3.el8 epel
CGSI-gSOAP.x86_64 1.3.11-7.el8 epel
CGSI-gSOAP-devel.x86_64 1.3.11-7.el8 epel
Field3D.x86_64 1.7.2-16.el8 epel
Field3D-devel.x86_64 1.7.2-16.el8 epel
GraphicsMagick.x86_64 1.3.33-1.el8 epel
GraphicsMagick-c++.x86_64 1.3.33-1.el8 epel
…………………………
zabbix40-web-mysql.noarch 4.0.12-1.el8 epel
zabbix40-web-pgsql.noarch 4.0.12-1.el8 epel
zerofree.x86_64 1.1.1-3.el8 epel
zimg.x86_64 2.8-4.el8 epel
zimg-devel.x86_64 2.8-4.el8 epel
zstd.x86_64 1.4.2-1.el8 epel
zvbi.x86_64 0.2.35-9.el8 epel
zvbi-devel.x86_64 0.2.35-9.el8 epel
zvbi-fonts.noarch 0.2.35-9.el8 epel
[root@linuxtechi ~]#
```
### Search a package from epel repository
Let’s assume we want to search for the Zabbix package in the epel repository; execute the following dnf command:
```
[root@linuxtechi ~]# dnf repository-packages epel list | grep -i zabbix
```
The output of the above command will look something like below:
![epel-repo-search-package-centos8][3]
### Install a package from epel repository
Let’s assume we want to install the htop package from the epel repo; issue the following dnf command.
Syntax:
# dnf --enablerepo="epel" install <pkg_name>
```
[root@linuxtechi ~]# dnf --enablerepo="epel" install htop -y
```
**Note:** If we don’t specify “**\--enablerepo=epel**” in the above command, it will look for the htop package in all available package repositories.
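As a quick sanity check, you can also confirm which repository an installed package came from. For example, after installing htop (the exact output format may vary by dnf version):
```
[root@linuxtechi ~]# dnf info htop | grep -i "from repo"
From repo    : epel
```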
That’s all from this article. I hope the above steps help you to enable and configure the EPEL repository on CentOS 8 and RHEL 8 servers. Please don’t hesitate to share your comments and feedback in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/enable-epel-repo-centos8-rhel8-server/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/centos-8-installation-guide-screenshots/
[2]: https://www.linuxtechi.com/install-configure-kvm-on-rhel-8/
[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/10/EPEL-Repo-CentOS8-RHEL8.jpg

View File

@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sugarizer: The Taste of Sugar on Any Device)
[#]: via: (https://opensourceforu.com/2019/10/sugarizer-the-taste-of-sugar-on-any-device/)
[#]: author: (Dr Anil Seth https://opensourceforu.com/author/anil-seth/)
Sugarizer: The Taste of Sugar on Any Device
======
[![][1]][2]
_Sugar is a learning platform that was initially developed for the OLPC project. The Sugar Learning Environment can be downloaded and installed on any Linux-compatible hardware. Sugarizer mimics the UI of Sugar using HTML5 and CSS3._
The One Laptop Per Child (OLPC) project was launched less than 12 years ago. The goal of bringing down the cost of a laptop to US$ 100 was never really achieved. The project also did not turn out to be as much of a success as anticipated. However, the goal was not really about the laptop, but to educate as many children as possible.
The interactive learning environment of the OLPC project was equally critical. This became a separate project under Sugar Labs, [_https://wiki.sugarlabs.org/_][3], and continues to be active. The Sugar Learning Environment is available as a Fedora spin, and can be downloaded and installed on any Linux-compatible hardware. It would be a good option to install it on an old system, which could then be donated. The US$ 90 Pinebook, [_https://www.pine64.org/_][4], with Sugar installed on it would also make a memorable and useful gift.
The Sugar Environment can happily coexist with other desktop environments on Linux. So, the computer does not have to be dedicated to Sugar. On Fedora, you may add it to your existing desktop as follows:
```
$ sudo dnf group install "Sugar Desktop Environment"
```
I have not tried it on Ubuntu. However, the following command should work:
```
$ sudo apt install sucrose
```
However, Sugar remains, by and large, an unknown entity. This is especially disappointing considering that the need to _learn to learn_ has never been greater.
Hence, the release of Sugarizer is a pleasant surprise. It allows you to use the Sugar environment on any device, with the help of Web technologies. Sugarizer mimics the UI of Sugar using HTML5 and CSS3. It runs activities that have been written in HTML5/JavaScript. The current release includes a number of Sugar activities written initially in Python, which have been ported to HTML5/JavaScript.
You may try the new release at _sugarizer.org_. Better still, install it from Google Play on your Android tablet or from App Store on an Apple device. It works well even on a two-year-old, low-end tablet. Hence, you may easily put your old tablet to good use by gifting it to a child after installing Sugarizer on it. In this way, you could even rationalise your desire to buy the replacement tablet you have been eyeing.
**Does it work?**
My children are too old and grandchildren too young. Reason tells me that it should work. Experience also tells me that it will most likely NOT improve school grades. I did not like school. I was bored most of the time. If I was studying in todays schools, I would have had ulcers or a nervous breakdown!
When I think of schools, I recall the frustration of a child long ago (just 20 years) who got an answer wrong. The book and the teacher said that a mouse has two buttons. The mouse he used at home had three!
So, can you risk leaving the education of children you care about to the schools? Think about the skills you may be using today. Could these have been taught at schools a mere five years ago?
I never took JavaScript seriously and never made an effort to learn it. Today, I see Sugarizer and Snap! (a clone of Scratch in JavaScript) and am acutely aware of my foolishness. However, having learnt programming outside the classroom, I am confident that I can learn to program in JavaScript, should the need arise.
The intention at the start was to write about the activities in Sugarizer and, maybe, explore the source code. My favourite activities include TamTam, Turtle Blocks, Maze, etc. From the food chain activity, I discovered that some animals that I had believed to be carnivores, were not. I have also seen children get excited by the Speak activity.
However, once I started writing after the heading Does it work?, my mind took a radical turn. Now, I am convinced that Sugarizer will work only if you try it out.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/sugarizer-the-taste-of-sugar-on-any-device/
作者:[Dr Anil Seth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/anil-seth/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/05/Technology-Development-in-Computers-Innovation-eLearning-1.jpg?resize=696%2C696&ssl=1 (Technology Development in Computers (Innovation), eLearning)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/05/Technology-Development-in-Computers-Innovation-eLearning-1.jpg?fit=900%2C900&ssl=1
[3]: https://wiki.sugarlabs.org/
[4]: https://www.pine64.org/

View File

@ -0,0 +1,188 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to make a Halloween lantern with Inkscape)
[#]: via: (https://opensource.com/article/19/10/how-make-halloween-lantern-inkscape)
[#]: author: (Jess Weichler https://opensource.com/users/cyanide-cupcake)
How to make a Halloween lantern with Inkscape
======
Use open source tools to make a spooky and fun decoration for your
favorite Halloween haunt.
![Halloween - backlit bat flying][1]
The spooky season is almost here! This year, decorate your haunt with a unique Halloween lantern made with open source!
Typically, a portion of a lantern's structure is opaque to block the light from within. What makes a lantern a lantern are the parts that are missing: windows cut from the structure so that light can escape. While it's impractical for lighting, a lantern with windows in spooky shapes and lurking silhouettes can be atmospheric and a lot of fun to create.
This article demonstrates how to create your own lantern using [Inkscape][2]. If you don't have Inkscape, you can install it from your software repository on Linux or download it from the [Inkscape website][3] on MacOS and Windows.
### Supplies
* Template ([A4][4] or [Letter][5] size)
* Cardstock (black is traditional)
* Tracing paper (optional)
* Craft knife, ruler, and cutting mat (a craft cutting machine/laser cutter can be used instead)
* Craft glue
* LED tea-light "candle"
_Safety note:_ Only use battery-operated candles for this project.
### Understanding the template
To begin, download the correct template for your region (A4 or Letter) from the links above and open it in Inkscape.
**![Lantern template screen][6]**
The gray-and-white checkerboard background is see-through (in technical terms, it's an _alpha channel_).
The black base forms the lantern. Right now, there are no windows for light to shine through; the lantern is a solid black base. You will use the **Union** and **Difference** options in Inkscape to design the windows digitally.
The dotted blue lines represent fold scorelines. The solid orange lines represent guides. Windows for light should not be placed outside the orange boxes.
To the left of the template are a few pre-made objects you can use in your design.
### To create a window or shape
1. Create an object that looks like the window style you want. Objects can be created using any of the shape tools in Inkscape's left toolbar. Alternately, you can download Creative Commons- or Public Domain-licensed clipart and import the PNG file into your project.
2. When you are happy with the shape of the object, turn it into a **Path** (rather than a **Shape**, which Inkscape sees as two different kinds of objects) by selecting **Object &gt; Object to Path** in the top menu.
![Object to path menu][7]
3. Place the object on top of the base shape.
4. Select both the object and the black base by clicking one, pressing and holding the Shift key, then selecting the other.
5. Select **Object &gt; Difference** from the top menu to remove the shape of the object from the base. This creates what will become a window in your lantern.
![Object > Difference menu][8]
### To add an object to a window
After making a window, you can add objects to it to create a scene.
**Tips:**
* All objects, including text, must be connected to the base of the lantern. If not, they will fall out after cutting and leave a blank space.
* Avoid small, intricate details. These are difficult to cut, even when using a machine like a laser cutter or a craft plotter.
1. Create or import an object.
2. Place the object inside the window so that it is touching at least two sides of the base.
3. With the object selected, choose **Object &gt; Object to Path** from the top menu.
![Object to path menu][9]
4. Select the object and the black base by clicking on each one while holding the Shift key.
5. Select **Object &gt; Union** to join the object and the base.
### Add text
Text can either be cut out from the base to create a window (as I did with the stars) or added to a window (which blocks the light from within the lantern). If you're creating a window, only follow steps 1 and 2 below, then use **Difference** to remove the text from the base layer.
1. Select the Text tool from the left sidebar to create text. Thick, bold fonts work best.
![Text tool][10]
2. Select your text, then choose **Path &gt; Object to Path** from the top menu. This converts the text object to a path. Note that this step means you can no longer edit the text, so perform this step _only after_ you're sure you have the word or words you want.
3. After you have converted the text, you can press **F2** on your keyboard to activate the **Node Editor** tool to clearly show the nodes of the text when it is selected with this tool.
![Text selected with Node editor][11]
4. Ungroup the text.
5. Adjust each letter so that it slightly overlaps its neighboring letter or the base.
![Overlapping the text][12]
6. To connect all of the letters to one another and to the base, re-select all the text and the base, then select **Path &gt; Union**.
![Connecting letters and base with Path > Union][13]
### Prepare for printing
The following instructions are for hand-cutting your lantern. If you're using a laser cutter or craft plotter, follow the techniques required by your hardware to prepare your files.
1. In the **Layer** panel, click the **Eye** icon beside the **Safety** layer to hide the safety lines. If you don't see the Layer panel, reveal it by selecting **Layer &gt; Layers** from the top menu.
2. Select the black base. In the **Fill and Stroke** panel, set the fill to **X** (meaning _no fill_) and the **Stroke** to solid black (that's #000000ff to fans of hexes).
![Setting fill and stroke][14]
3. Print your pattern with **File &gt; Print**.
4. Using a craft knife and ruler, carefully cut around each black line. Lightly score the dotted blue lines, then fold.
![Cutting out the lantern][15]
5. To finish off the windows, cut tracing paper to the size of each window and glue it to the inside of the lantern.
![Adding tracing paper][16]
6. Glue the lantern together at the tabs.
7. Turn on a battery-powered LED candle and place it inside your lantern.
![Completed lantern][17]
Now your lantern is complete and ready to light up your haunt. Happy Halloween!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/how-make-halloween-lantern-inkscape
作者:[Jess Weichler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/cyanide-cupcake
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/halloween_bag_bat_diy.jpg?itok=24M0lX25 (Halloween - backlit bat flying)
[2]: https://opensource.com/article/18/1/inkscape-absolute-beginners
[3]: http://inkscape.org
[4]: https://www.dropbox.com/s/75qzjilg5ak2oj1/papercraft_lantern_A4_template.svg?dl=0
[5]: https://www.dropbox.com/s/8fswdge49jwx91n/papercraft_lantern_letter_template%20.svg?dl=0
[6]: https://opensource.com/sites/default/files/uploads/lanterntemplate_screen.png (Lantern template screen)
[7]: https://opensource.com/sites/default/files/uploads/lantern1.png (Object to path menu)
[8]: https://opensource.com/sites/default/files/uploads/lantern2.png (Object > Difference menu)
[9]: https://opensource.com/sites/default/files/uploads/lantern3.png (Object to path menu)
[10]: https://opensource.com/sites/default/files/uploads/lantern4.png (Text tool)
[11]: https://opensource.com/sites/default/files/uploads/lantern5.png (Text selected with Node editor)
[12]: https://opensource.com/sites/default/files/uploads/lantern6.png (Overlapping the text)
[13]: https://opensource.com/sites/default/files/uploads/lantern7.png (Connecting letters and base with Path > Union)
[14]: https://opensource.com/sites/default/files/uploads/lantern8.png (Setting fill and stroke)
[15]: https://opensource.com/sites/default/files/uploads/lantern9.jpg (Cutting out the lantern)
[16]: https://opensource.com/sites/default/files/uploads/lantern10.jpg (Adding tracing paper)
[17]: https://opensource.com/sites/default/files/uploads/lantern11.jpg (Completed lantern)

View File

@ -0,0 +1,48 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My Linux story: I grew up on PC Magazine not candy)
[#]: via: (https://opensource.com/article/19/10/linux-journey-newb-ninja)
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
My Linux story: I grew up on PC Magazine not candy
======
This Linux story begins with a kid reading about Linux in issues of PC
Magazine from his childhood home in Costa Rica. Today, he's a passionate
member of the global Linux community.
![The back of a kid head][1]
In 1998, the movie _Titanic_ was released, mobile phones were just a luxury, and pagers were still in use. This was also the year I got my first computer. I can remember the details as if it were yesterday: Pentium 133MHz and just 16MB of memory. Back in that time (while running nothing less than Windows 95), this was a good machine. I can still hear in my mind the old spinning hard drive noise when I powered that computer on, and see the Windows 95 flag. It never crossed my mind, though (especially as an 8-year-old kid), that I would dedicate every minute of my life to Linux and open source.
Being just a kid, I always asked my mom to buy me every issue of PC Magazine instead of candies. I never skipped a single issue, and all of those dusty old magazines are still there in Costa Rica. It was in these magazines that I discovered the essential technology that changed my life. An issue in the year 2000 talked extensively about Linux and the advantages of free and open-source software. That issue also included a review of one of the most popular Linux distributions back then: Corel Linux. Unfortunately, the disc was not included. Without internet at home, I was out of luck, but that issue still lit a spark within me.
In 2003, I asked my mom to take me to a Richard Stallman talk. I couldnt believe he was in the country. I was the only kid in that room, and I was laser-focused on everything he was saying, though I didnt understand anything about patents, licenses, or the jokes about him with an old hard drive over his head.
Despite my attempts, I couldnt make Linux work on my computer. One rainy afternoon in the year 2003, with the heavy smell of recently brewed coffee, my best friend and I were able to get a local magazine with a two-disk bundle: Mandrake Linux 7.1 (if my memory doesnt fail) on one and StarOffice on the other. My friend poured more coffee into our mugs while I inserted the Mandrake disk into the computer with my shaking, excited hands. Linux was finally running—the same Linux I had been obsessed with since I read about it 3 years earlier.
We were lucky enough to get broadband internet in 2006 (at the lightning speed of 128/64Kbps), so I was able to use an old Pentium II computer under my bed and run it 24x7 with Debian, Apache, and my own mail server (my personal server, I told myself). This old machine was my playground to experiment on and put into practice all of the knowledge and reading I had been doing (and also to make the electricity bill more expensive).
As soon as I discovered there were open source communities in the country, I started attending their meetings. Eventually, I was helping in their events, and not long after I was organizing and giving talks. We used to host two annual events for many years: Festival Latinoamericano de Software Libre (Latin American Free Software Installation Fest) and Software Freedom Day.
Thanks to what I learned from my reading, but more importantly from the people in these local communities that guided and mentored me, I was able to land my first Linux job in 2011, even without college. I kept growing from there, working for many companies and learning more about open source and Linux at each one. Eventually, I felt that I had an obligation (or a social debt) to give back to the community so that other people like the younger me could also learn. Not long after, I started teaching classes and meeting wonderful and passionate people, many of whom are now as devoted to Linux and open source as I am. I can definitely say: Mission accomplished!
Eventually, what I learned about open source, Linux, OpenStack, Docker, and every other technology I played with sent me overseas, allowing me to work (doesnt feel like it) for the most amazing company Ive ever worked for, doing what I love. Because of open source and Linux, I became a part of something bigger than me. I was a member of a community, and I experienced what I consider the most significant impact on my life: Meeting and learning from so many masterminds and amazing people that today I can call friends. Without them and these communities, I wouldnt be the person I am today.
How could I know when I was 10 years old and reading a magazine that Linux and open source would connect me to the greatest people, and change my life forever?
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/linux-journey-newb-ninja
作者:[Michael Zamot][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mzamot
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa (The back of a kid head)

View File

@ -0,0 +1,167 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool)
[#]: via: (https://itsfoss.com/gnome-tweak-tool/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
10 Ways to Customize Your Linux Desktop With GNOME Tweaks Tool
======
![GNOME Tweak Tool Icon][1]
There are several ways you can tweak Ubuntu to customize its looks and behavior. The easiest way I find is by using the [GNOME Tweak tool][2]. It is also known as GNOME Tweaks or simply Tweaks.
I have mentioned it numerous times in my tutorials in the past. Here, I list all the major tweaks you can perform with this tool.
I have used Ubuntu here but the steps should be applicable to any Linux distribution using the GNOME desktop environment.
### Install GNOME Tweak tool in Ubuntu 18.04 and other versions
Gnome Tweak tool is available in the [Universe repository in Ubuntu][3] so make sure that you have it enabled in your Software & Updates tool:
![Enable Universe Repository in Ubuntu][4]
After that, you can install GNOME Tweak tool from the software center. Just open the Software Center and search for GNOME Tweaks and install it from there:
![Install GNOME Tweaks Tool from Software Center][5]
Alternatively, you may also use command line to install software with [apt command][6]:
```
sudo apt install gnome-tweaks
```
### Customizing GNOME desktop with Tweaks tool
![][7]
GNOME Tweak tool enables you to make a number of settings changes. Some of these changes, like wallpaper and startup applications, are also available in the official System Settings tool. I am going to focus on tweaks that are not available in the Settings by default.
#### 1\. Change themes
You can [install new themes in Ubuntu][8] in various ways. But if you want to change to the newly installed theme, youll have to install GNOME Tweaks tool.
You can find the theme and icon settings in Appearance section. You can browse through the available themes and icons and set the ones you like. The changes take into effect immediately.
![Change Themes With GNOME Tweaks][9]
#### 2\. Disable animation to speed up your desktop
There are subtle animations for application window opening, closing, maximizing etc. You can disable these animations to speed up your system a bit, as it will use slightly fewer resources.
![Disable Animations For Slightly Faster Desktop Experience][10]
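If you prefer the terminal, the same animations switch is exposed as a GSettings key, so a sketch like the following (run as your desktop user) should have the same effect:
```
# turn animations off
gsettings set org.gnome.desktop.interface enable-animations false
# restore the default
gsettings set org.gnome.desktop.interface enable-animations true
```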
#### 3\. Control desktop icons
At least in Ubuntu, youll see the Home and Trash icons on the desktop. If you dont like, you can choose to disable it. You can also choose which icons will be displayed on the desktop.
![Control Desktop Icons in Ubuntu][11]
#### 4\. Manage GNOME extensions
I hope you are aware of [GNOME Extensions][12]. These are small plugins for your desktop that extends the functionalities of the GNOME desktop. There are [plenty of GNOME extensions][13] that you can use to get CPU consumption in the top panel, get clipboard history etc.
I have written in detail about [installing and using GNOME extensions][14]. Here, I assume that you are already using them and if thats the case, you can manage them from within GNOME Tweaks.
![Manage GNOME Extensions][15]
#### 5\. Change fonts and scaling factor
You can [install new fonts in Ubuntu][16] and apply the system wide font change using Tweaks tool. You can also change the scaling factor if you think the icons, text are way too small on your desktop.
![Change Fonts and Scaling Factor][17]
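As an aside, the text scaling factor also has a GSettings counterpart, which can be handy in scripts; the key name here is assumed from the standard GNOME schema:
```
# scale all text up by 25%
gsettings set org.gnome.desktop.interface text-scaling-factor 1.25
```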
#### 6\. Control touchpad behavior: disable the touchpad while typing and make right-click work
The GNOME Tweaks also allows you to disable touchpad while typing. This is useful if you type fast on a laptop. The bottom of your palm may touch the touchpad and the cursor moves away to an undesired location on the screen.
Automatically disabling touchpad while typing fixes this problem.
![Disable Touchpad While Typing][18]
You’ll also notice that [when you press the bottom right corner of your touchpad for right click, nothing happens][19]. There is nothing wrong with your touchpad. It’s a system setting that disables right-clicking this way for any touchpad that doesn’t have a real right-click button (like the old Thinkpad laptops). A two-finger click gives you the right click.
You can also get this back by choosing Area under Mouse Click Simulation instead of Fingers.
![Fix Right Click Issue][20]
You may have to [restart Ubuntu][21] for the changes to take effect. If you are an Emacs lover, you can also force keybindings from Emacs.
#### 7\. Change power settings
There is only one power settings here. It allows you to put your laptop in suspend mode when the lid is closed.
![Power Settings in GNOME Tweaks Tool][22]
#### 8\. Decide whats displayed in the top panel
The top panel in your desktop shows a few important things. You have the calendar, network icon, system settings and the Activities option.
You can also [display battery percentage][23], add date along with day and time and show week numbers. You can also enable hot corners so that if you take your mouse to the top left corner of the screen, youll get the activities view with all the running applications.
![Top Panel Settings in GNOME Tweaks Tool][24]
If you have the mouse focus on an application window, youll notice that its menu is displayed in the top panel. If you dont like it, you may toggle it off and then the application menu will be available on the application itself.
#### 9\. Configure application window
You can decide if the maximize and minimize options (the buttons on the top right corner) will be shown in the application window. You may also change their positioning between left and right.
![Application Window Configuration][25]
There are some other configuration options as well. I dont use them but feel free to explore them on your own.
#### 10\. Configure workspaces
GNOME Tweaks tool also allows you to configure a couple of things around workspaces.
![Configure Workspaces in Ubuntu][26]
**In the end…**
GNOME Tweaks tool is a must-have utility for any GNOME user. It helps you configure the looks and functionality of the desktop. I find it surprising that this tool is not even in the Main repository of Ubuntu. In my opinion, it should be installed by default. Till then, you’ll have to install GNOME Tweak tool in Ubuntu manually.
If you find some hidden gem in GNOME Tweaks that hasnt been discussed here, why not share it with the rest of us?
--------------------------------------------------------------------------------
via: https://itsfoss.com/gnome-tweak-tool/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/gnome-tweak-tool-icon.png?ssl=1
[2]: https://wiki.gnome.org/action/show/Apps/Tweaks?action=show&redirect=Apps%2FGnomeTweakTool
[3]: https://itsfoss.com/ubuntu-repositories/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/enable-repositories-ubuntu.png?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/install-gnome-tweaks-tool.jpg?ssl=1
[6]: https://itsfoss.com/apt-command-guide/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/customize-gnome-with-tweak-tool.jpg?ssl=1
[8]: https://itsfoss.com/install-themes-ubuntu/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/change-theme-ubuntu-gnome.jpg?ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/disable-animation-ubuntu-gnome.jpg?ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/desktop-icons-ubuntu.jpg?ssl=1
[12]: https://extensions.gnome.org/
[13]: https://itsfoss.com/best-gnome-extensions/
[14]: https://itsfoss.com/gnome-shell-extensions/
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/manage-gnome-extension-tweaks-tool.jpg?ssl=1
[16]: https://itsfoss.com/install-fonts-ubuntu/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/change-fonts-ubuntu-gnome.jpg?ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/disable-touchpad-while-typing-ubuntu.jpg?ssl=1
[19]: https://itsfoss.com/fix-right-click-touchpad-ubuntu/
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/enable-right-click-ubuntu.jpg?ssl=1
[21]: https://itsfoss.com/schedule-shutdown-ubuntu/
[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/power-settings-gnome-tweaks-tool.jpg?ssl=1
[23]: https://itsfoss.com/display-battery-ubuntu/
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/top-panel-settings-gnome-tweaks-tool.jpg?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/windows-configuration-ubuntu-gnome-tweaks.jpg?ssl=1
[26]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/configure-workspaces-ubuntu.jpg?ssl=1

View File

@ -0,0 +1,235 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Formatting NFL data for doing data science with Python)
[#]: via: (https://opensource.com/article/19/10/formatting-nfl-data-python)
[#]: author: (Christa Hayes https://opensource.com/users/cdhayes2)
Formatting NFL data for doing data science with Python
======
In part 1 of this series on machine learning with Python, learn how to
prepare a National Football League dataset for training.
![A football field.][1]
No matter what medium of content you consume these days (podcasts, articles, tweets, etc.), you'll probably come across some reference to data. Whether it's to back up a talking point or put a meta-view on how data is everywhere, data and its analysis are in high demand.
As a programmer, I've found data science to be more comparable to wizardry than an exact science. I've coveted the ability to get ahold of raw data and glean something useful and concrete from it. What a useful talent!
This got me thinking about the difference between data scientists and programmers. Aren't data scientists just statisticians who can code? Look around and you'll see any number of tools aimed at helping developers become data scientists. AWS has a full-on [machine learning course][2] geared specifically towards turning developers into experts. [Visual Studio][3] has built-in Python projects that—with the click of a button—will create an entire template for classification problems. And scores of programmers are writing tools designed to make data science easier for anyone to pick up.
I thought I'd lean into the clear message of recruiting programmers to the data (or dark) side and give it a shot with a fun project: training a machine learning model to predict plays using a National Football League (NFL) dataset.
### Set up the environment
Before I can dig into the data, I need to set up my [virtual environment][4]. This is important because, without an environment, I'll have nowhere to work. Fortunately, Opensource.com has [some great resources][5] for installing and configuring the setup.
Any of the code you see here, I was able to look up through existing documentation. If there is one thing programmers are familiar with, it's navigating foreign (and sometimes very sparse) documentation.
### Get the data
As with any modern problem, the first step is to make sure you have quality data. Luckily, I came across a set of [NFL tracking data][6] from 2017 that was used for the NFL Big Data Bowl. Even the NFL is trying its best to attract the brightest stars in the data realm.
Everything I need to know about the schema is in the README. This exercise will train a machine learning model to predict run (in which the ball carrier keeps the football and runs downfield) and pass (in which the ball is passed to a receiving player) plays using the plays.csv [data file][7]. I won't use player tracking data in this exercise, but it could be fun to explore later.
First things first, I need to get access to my data by importing it into a dataframe. The [Pandas][8] library is an open source Python library that provides algorithms for easy analysis of data structures. The structure in the sample NFL data happens to be a two-dimensional array (or in simpler terms, a table), which data scientists often refer to as a dataframe. The Pandas function dealing with dataframes is [pandas.DataFrame][9]. I'll also import several other libraries that I will use later.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn import metrics
df = pd.read_csv('data/plays.csv')
print(len(df))
print(df.head())
```
### Format the data
The NFL data dump does not explicitly indicate which plays are runs (also called rushes) and which are passes. Therefore, I have to classify the offensive play types through some football savvy and reasoning.
Right away, I can get rid of special teams plays using the **isSTPlay** column. Special teams are neither offense nor defense, so they are irrelevant to my objective.
```
#drop st plays
df = df[~df['isSTPlay']]
print(len(df))
```
Skimming the **playDescription** column, I see some plays where the quarterback kneels, which effectively ends a play. This is usually called a "victory formation" because the intent is to run out the clock. These are significantly different than normal running plays, so I can drop them as well.
```
#drop kneels
df = df[~df['playDescription'].str.contains("kneels")]
print (len(df))
```
The data reports time in terms of the quarters in which a game is normally played (as well as the time on the game clock in each quarter). Is this the most intuitive in terms of trying to predict a sequence? One way to answer this is to consider how gameplay differs between time splits.
When a team has the ball with a minute left in the first quarter, will it act the same as if it has the ball with a minute left in the second quarter? Probably not. Will it act the same with a minute to go at the end of both halves? All else remaining equal, the answer is likely yes in most scenarios.
I'll convert the **quarter** and **GameClock** columns from quarters to halves, denoted in seconds rather than minutes. I'll also create a **half** column from the **quarter** values. There are some fifth quarter values, which I take to be overtime. Since overtime rules are different than normal gameplay, I can drop them.
```
#drop overtime
df = df[~(df['quarter'] == 5)]
print(len(df))
#convert time/quarters
def translate_game_clock(row):
    raw_game_clock = row['GameClock']
    quarter = row['quarter']
    minutes, seconds_raw = raw_game_clock.partition(':')[::2]
    seconds = seconds_raw.partition(':')[0]
    total_seconds_left_in_quarter = int(seconds) + (int(minutes) * 60)
    # Quarters 1 and 3 open a half, so add the 900 seconds (15 minutes)
    # of that half's remaining quarter.
    if quarter == 1 or quarter == 3:
        return total_seconds_left_in_quarter + 900
    elif quarter == 2 or quarter == 4:
        return total_seconds_left_in_quarter
if 'GameClock' in list(df.columns):
    df['secondsLeftInHalf'] = df.apply(translate_game_clock, axis=1)
if 'quarter' in list(df.columns):
    df['half'] = df['quarter'].map(lambda q: 2 if q > 2 else 1)
```
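As a quick optional sanity check (my addition, not part of the original walkthrough), the new columns should behave as expected: **secondsLeftInHalf** can only range from 0 to 1800, since each half is 30 minutes long.

```
# Each half is 30 minutes, so valid values run from 0 to 1800 seconds.
print(df['secondsLeftInHalf'].describe())
print(df['half'].value_counts())
```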
The **yardlineNumber** column also needs to be transformed. The data currently lists the yard line as a value from one to 50. Again, this is unhelpful because a team would not act the same on its own 20-yard line vs. its opponent's 20-yard line. I will convert it to represent a value from one to 99, where the one-yard line is nearest the possession team's end zone, and the 99-yard line is nearest the opponent's end zone.
```
def yards_to_endzone(row):
    # On the possession team's side of the field, the distance to the
    # opposing end zone is 100 minus the yard line; on the opponent's
    # side, the yard line already is that distance.
    if row['possessionTeam'] == row['yardlineSide']:
        return 100 - row['yardlineNumber']
    else:
        return row['yardlineNumber']
df['yardsToEndzone'] = df.apply(yards_to_endzone, axis=1)
```
The personnel data would be extremely useful if I could get it into a format for the machine learning algorithm to take in. Personnel identifies the different types of skill positions on the field at a given time. The string value currently shown in **personnel.offense** is not conducive to input, so I'll convert each personnel position to its own column to indicate the number present on the field during the play. Defense personnel might be interesting to include later to see if it has any effect on the prediction. For now, I'll just stick with offense.
```
def transform_off_personnel(row):
    # Each entry looks like "2 RB": the count is the first character
    # and the two-letter position code sits at string indexes 2-3.
    rb_count = 0
    te_count = 0
    wr_count = 0
    ol_count = 0
    dl_count = 0
    db_count = 0
    if not pd.isna(row['personnel.offense']):
        personnel = row['personnel.offense'].split(', ')
        for p in personnel:
            if p[2:4] == 'RB':
                rb_count = int(p[0])
            elif p[2:4] == 'TE':
                te_count = int(p[0])
            elif p[2:4] == 'WR':
                wr_count = int(p[0])
            elif p[2:4] == 'OL':
                ol_count = int(p[0])
            elif p[2:4] == 'DL':
                dl_count = int(p[0])
            elif p[2:4] == 'DB':
                db_count = int(p[0])
    return pd.Series([rb_count, te_count, wr_count, ol_count, dl_count, db_count])
df[['rb_count', 'te_count', 'wr_count', 'ol_count', 'dl_count', 'db_count']] = df.apply(transform_off_personnel, axis=1)
```
Now the offense personnel values are represented by individual columns.
![Result of reformatting offense personnel][10]
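If you prefer to verify from the terminal rather than the screenshot, a quick optional peek at the derived counts next to the raw string shows the same thing:

```
# Spot-check the derived personnel counts against the raw string column.
print(df[['personnel.offense', 'rb_count', 'te_count', 'wr_count',
          'ol_count', 'dl_count', 'db_count']].head())
```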
Formations describe how players are positioned on the field, and this is also something that would seemingly have value in predicting play outcomes. Once again, I'll convert the string values into integers.
```
df['offenseFormation'] = df['offenseFormation'].map(lambda f : 'EMPTY' if pd.isna(f) else f)
def formation(row):
    form = row['offenseFormation'].strip()
    if form == 'SHOTGUN':
        return 0
    elif form == 'SINGLEBACK':
        return 1
    elif form == 'EMPTY':
        return 2
    elif form == 'I_FORM':
        return 3
    elif form == 'PISTOL':
        return 4
    elif form == 'JUMBO':
        return 5
    elif form == 'WILDCAT':
        return 6
    elif form == 'ACE':
        return 7
    else:
        return -1
df['numericFormation'] = df.apply(formation, axis=1)
# Check the distinct numeric formation codes that were assigned.
print(df.numericFormation.unique())
```
Finally, it's time to classify the play types. The **PassResult** column has four distinct values: I, C, S, and null, which represent Incomplete passing plays, Complete passing plays, Sacks (classified as passing plays), and a null value. Since I've already eliminated all special teams plays, I can assume the null values are running plays. So I'll convert the play outcome into a single column called **play_type** represented by either a 0 for running or a 1 for passing. This will be the column (or _label_, as the data scientists say) I want my algorithm to predict.
```
def play_type(row):
    if row['PassResult'] in ('I', 'C', 'S'):
        return 'Passing'
    else:
        return 'Rushing'
df['play_type'] = df.apply(play_type, axis=1)
df['numericPlayType'] = df['play_type'].map(lambda p: 1 if p == 'Passing' else 0)
```
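One last optional sanity check (my addition) is how balanced the new label is, since a heavily skewed split between runs and passes would change how an accuracy score should be read later:

```
# Count the plays in each class of the label.
print(df['play_type'].value_counts())
```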
### Take a break
Is it time to start predicting things yet? Most of my work so far has been trying to understand the data and what format it needs to be in—before I even get started on predicting anything. Anyone else need a minute?
In part two, I'll do some analysis and visualization of the data before feeding it into a machine learning algorithm, and then I'll score the model's results to see how accurate they are. Stay tuned!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/formatting-nfl-data-python
作者:[Christa Hayes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/cdhayes2
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_LIFE_football__520x292.png?itok=5hPbxQF8 (A football field.)
[2]: https://aws.amazon.com/training/learning-paths/machine-learning/developer/
[3]: https://docs.microsoft.com/en-us/visualstudio/python/overview-of-python-tools-for-visual-studio?view=vs-2019
[4]: https://opensource.com/article/19/9/get-started-data-science-python
[5]: https://opensource.com/article/17/10/python-101
[6]: https://github.com/nfl-football-ops/Big-Data-Bowl
[7]: https://github.com/nfl-football-ops/Big-Data-Bowl/tree/master/Data
[8]: https://pandas.pydata.org/
[9]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html
[10]: https://opensource.com/sites/default/files/uploads/nfl-python-7_personneloffense.png (Result of reformatting offense personnel)

View File

@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux sudo flaw can lead to unauthorized privileges)
[#]: via: (https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux sudo flaw can lead to unauthorized privileges
======
Exploiting a newly discovered sudo flaw in Linux can enable certain users to run commands as root despite restrictions against it.
A newly discovered and serious flaw in the [**sudo**][1] command can, if exploited, enable users to run commands as root even though the **/etc/sudoers** file specifically disallows them from doing so.
Updating **sudo** to version 1.8.28 should address the problem, and Linux admins are encouraged to do so as soon as possible. 
How the flaw might be exploited depends on specific privileges granted in the **/etc/sudoers** file. A rule that allows a user to edit files as any user except root, for example, would actually allow that user to edit files as root as well. In this case, the flaw could lead to very serious problems.
For a user to exploit the flaw, the user needs to be assigned privileges in the **/etc/sudoers** file that allow them to run commands as some other user, and the flaw is limited to the command privileges that are assigned in this way.
This problem affects versions prior to 1.8.28. To check your sudo version, use this command:
```
$ sudo -V
Sudo version 1.8.27 <===
Sudoers policy plugin version 1.8.27
Sudoers file grammar version 46
Sudoers I/O plugin version 1.8.27
```
The vulnerability has been assigned [CVE-2019-14287][4] in the **Common Vulnerabilities and Exposures** database. The risk is that any user who has been given the ability to run even a single command as an arbitrary user may be able to escape the restrictions and run that command as root even if the specified privilege is written to disallow running the command as root.
The lines below are meant to give the user "jdoe" the ability to edit files with **vi** as any user except root (**!root** means "not root") and nemo the right to run the **id** command as any user except root:
```
# affected entries on host "dragonfly"
jdoe dragonfly = (ALL, !root) /usr/bin/vi
nemo dragonfly = (ALL, !root) /usr/bin/id
```
However, given the flaw, either of these users would be able to circumvent the restriction and edit files or run the **id** command as root as well.
The flaw can be exploited by an attacker to run commands as root by specifying the user ID "-1" or "4294967295."  
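For example, using the **nemo** entry above, a session demonstrating the flaw might look like this (**-u#-1** tells **sudo** to run the command as user ID -1, which vulnerable versions resolve to root):

```
nemo$ sudo -u#-1 id -u
0
```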
The response of "1" demonstrates that the command is being run as root (showing root's user ID).
Joe Vennix from Apple Information Security both found and analyzed the problem.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236499/some-tricks-for-using-sudo.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14287
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,142 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source interior design with Sweet Home 3D)
[#]: via: (https://opensource.com/article/19/10/interior-design-sweet-home-3d)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Open source interior design with Sweet Home 3D
======
Try out furniture layouts, color schemes, and more in virtual reality
before you go shopping in the real world.
![Several houses][1]
There are three schools of thought on how to go about decorating a room:
1. Buy a bunch of furniture and cram it into the room
2. Take careful measurements of each item of furniture, calculate the theoretical capacity of the room, then cram it all in, ignoring the fact that you've placed a bookshelf on top of your bed
3. Use a computer for pre-visualization
Historically, I practiced the little-known fourth principle: don't have furniture. However, since I became a remote worker, I've found that a home office needs conveniences like a desk and a chair, a bookshelf for reference books and tech manuals, and so on. Therefore, I have been formulating a plan to populate my living and working space with actual furniture, made of actual wood rather than milk crates (or glue and sawdust, for that matter), with an emphasis on _plan_. The last thing I want is to bring home a great find from a garage sale to discover that it doesn't fit through the door or that it's oversized compared to another item of furniture.
It was time to do what the professionals do. It was time to pre-viz.
### Open source interior design
[Sweet Home 3D][2] is an open source (GPLv2) interior design application that helps you draw your home's floor plan and then define, resize, and arrange furniture. You can do all of this with precise measurements, down to fractions of a centimeter, without having to do any math and with the ease of basic drag-and-drop operations. And when you're done, you can view the results in 3D. If you can create a basic table (not the furniture kind) in a word processor, you can plan the interior design of your home in Sweet Home 3D.
### Installing
Sweet Home 3D is a [Java][3] application, so it's universal. It runs on any operating system that can run Java, which includes Linux, Windows, MacOS, and BSD. Regardless of your OS, you can [download][4] the application from the website.
* On Linux, [untar][5] the archive. Right-click on the SweetHome3D file and select **Properties**. In the **Permission** tab, grant the file executable permission.
* On MacOS and Windows, expand the archive and launch the application. You must grant it permission to run on your system when prompted.
![Sweet Home 3D permissions][6]
On Linux, you can also install Sweet Home 3D as a Snap package, provided you have **snapd** installed and enabled.
### Measures of success
First things first: Break out your measuring tape. To get the most out of Sweet Home 3D, you must know the actual dimensions of the living space you're planning for. You may or may not need to measure down to the millimeter or 16th of an inch; you know your own tolerance for variance. But you must get the basic dimensions, including walls, windows, and doors.
Use common sense and your best judgment. For instance, when measuring doors, include the door frame; while it's not technically part of the _door_ itself, it is part of the wall space that you probably don't want to cover with furniture.
![Measure twice, execute once][7]
CC BY-SA opensource.com
### Creating a room
When you first launch Sweet Home 3D, it opens a blank canvas in its default viewing mode, a blueprint view in the top panel, and a 3D rendering in the bottom panel. On my [Slackware][8] desktop computer, this works famously, but my desktop is also my video editing and gaming computer, so it's got a great graphics card for 3D rendering. On my laptop, this view was a lot slower. For best performance (especially on a computer not dedicated to 3D rendering), go to the **3D View** menu at the top of the window and select **Virtual Visit**. This view mode renders your work from a ground-level point of view based on the position of a virtual visitor. That means you get to control what is rendered and when.
It makes sense to switch to this view regardless of your computer's power because an aerial 3D rendering doesn't provide you with much more detail than what you have in your blueprint plan. Once you have changed the view mode, you can start designing.
The first step is to define the walls of your home. This is done with the **Create Walls** tool, found to the right of the **Hand** icon in the top toolbar. Drawing walls is simple: Click where you want a wall to begin, click to anchor it, and continue until your room is complete.
![Drawing walls in Sweet Home 3D][9]
Once you close the walls, press **Esc** to exit the tool.
#### Defining a room
Sweet Home 3D is flexible on how you create walls. You can draw the outer boundary of your house first, and then subdivide the interior, or you can draw each room as conjoined "containers" that ultimately form the footprint of your house. This flexibility is possible because, in real life and in Sweet Home 3D, walls don't always define a room. To define a room, use the **Create Rooms** button to the right of the **Create Walls** button in the top toolbar.
If the room's floor space is defined by four walls, then all you need to do to define that enclosure as a room is double-click within the four walls. Sweet Home 3D defines the space as a room and provides you with its area in feet or meters, depending on your preference.
For irregular rooms, you must manually define each corner of the room with a click. Depending on the complexity of the room shape, you may have to experiment to find whether you need to work clockwise or counterclockwise from your origin point to avoid quirky Möbius-strip flooring. Generally, however, defining the floor space of a room is straightforward.
![Defining rooms in Sweet Home 3D][10]
After you give the room a floor, you can change to the **Arrow** tool and double-click on the room to give it a name. You can also set the color and texture of the flooring, walls, ceiling, and baseboards.
![Modifying room floors, ceilings, etc. in Sweet Home 3D][11]
None of this is rendered in your blueprint view by default. To enable room rendering in your blueprint panel, go to the **File** menu and select **Preferences**. In the **Preferences** panel, set **Room rendering in plan** to **Floor color or texture**.
### Doors and windows
Once you've finished the basic floor plan, you can switch permanently to the **Arrow** tool.
You can find doors and windows in the left column of Sweet Home 3D, in the **Doors and Windows** category. You have many choices, so choose whatever is closest to what you have in your home.
![Moving a door in Sweet Home 3D][12]
To place a door or window into your plan, drag-and-drop it on the appropriate wall in your blueprint panel. To adjust its position and size, double-click the door or window.
### Adding furniture
With the base plan complete, the part of the job that feels like _work_ is over! From this point onward, you can play with furniture arrangements and other décor.
You can find furniture in the left column, organized by the room for which each is intended. You can drag-and-drop any item into your blueprint plan and control orientation and size with the tools visible when you hover your mouse over the item's corners. Double-click on any item to adjust its color and finish.
### Visiting and exporting
To see what your future home will look like, drag the "person" icon in your blueprint view into a room.
![Sweet Home 3D rendering][13]
You can strike your own balance between realism and just getting a feel for space, but your imagination is your only limit. You can get additional assets to add to your home from the Sweet Home 3D [download page][4]. You can even create your own furniture and textures with the **Library Editor** applications, which are optional downloads from the project site.
Sweet Home 3D can export your blueprint plan to SVG format for use in [Inkscape][14], and it can export your 3D model to OBJ format for use in [Blender][15]. To export your blueprint, go to the **Plan** menu and select **Export to SVG format**. To export a 3D model, go to the **3D View** menu and select **Export to OBJ format**.
You can also take "snapshots" of your home so that you can refer to your ideas without opening Sweet Home 3D. To create a snapshot, go to the **3D View** menu and select **Create Photo**. The snapshot is rendered from the perspective of the person icon in the blueprint view, so adjust as required, then click the **Create** button in the **Create Photo** window. If you're happy with the photo, click **Save**.
### Home sweet home
There are many more features in Sweet Home 3D. You can add a sky and a lawn, position lights for your photos, set ceiling height, add another level to your house, and much more. Whether you're planning for a flat you're renting or a house you're buying—or a house that doesn't even exist (yet), Sweet Home 3D is an engaging and easy application that can entertain and help you make better purchasing choices when scurrying around for furniture, so you can finally stop eating breakfast at the kitchen counter and working while crouched on the floor.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/interior-design-sweet-home-3d
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_housing.png?itok=s7i6pQL1 (Several houses)
[2]: http://www.sweethome3d.com/
[3]: https://opensource.com/resources/java
[4]: http://www.sweethome3d.com/download.jsp
[5]: https://opensource.com/article/17/7/how-unzip-targz-file
[6]: https://opensource.com/sites/default/files/uploads/sweethome3d-permissions.png (Sweet Home 3D permissions)
[7]: https://opensource.com/sites/default/files/images/life/sweethome3d-measure.jpg (Measure twice, execute once)
[8]: http://www.slackware.com/
[9]: https://opensource.com/sites/default/files/uploads/sweethome3d-walls.jpg (Drawing walls in Sweet Home 3D)
[10]: https://opensource.com/sites/default/files/uploads/sweethome3d-rooms.jpg (Defining rooms in Sweet Home 3D)
[11]: https://opensource.com/sites/default/files/uploads/sweethome3d-rooms-modify.jpg (Modifying room floors, ceilings, etc. in Sweet Home 3D)
[12]: https://opensource.com/sites/default/files/uploads/sweethome3d-move.jpg (Moving a door in Sweet Home 3D)
[13]: https://opensource.com/sites/default/files/uploads/sweethome3d-view.jpg (Sweet Home 3D rendering)
[14]: http://inkscape.org
[15]: http://blender.org

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to type emoji on Linux)
[#]: via: (https://opensource.com/article/19/10/how-type-emoji-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to type emoji on Linux
======
The GNOME desktop makes it easy to use emoji in your communications.
![A cat under a keyboard.][1]
Emoji are those fanciful pictograms that snuck into the Unicode character space. They're all the rage online, and people use them for all kinds of surprising things, from signifying reactions on social media to serving as visual labels for important file names. There are many ways to enter Unicode characters on Linux, but the GNOME desktop makes it easy to find and type an emoji.
![Emoji in Emacs][2]
### Requirements
For this easy method, you must be running Linux with the [GNOME][3] desktop.
You must also have an emoji font installed. There are many to choose from, so do a search for _emoji_ using your favorite software installer application or package manager.
For example, on Fedora:
```
$ sudo dnf search emoji
emoji-picker.noarch : An emoji selection tool
unicode-emoji.noarch : Unicode Emoji Data Files
eosrei-emojione-fonts.noarch : A color emoji font
twitter-twemoji-fonts.noarch : Twitter Emoji for everyone
google-android-emoji-fonts.noarch : Android Emoji font released by Google
google-noto-emoji-fonts.noarch : Google “Noto Emoji” Black-and-White emoji font
google-noto-emoji-color-fonts.noarch : Google “Noto Color Emoji” colored emoji font
[...]
```
On Ubuntu or Debian, use **apt search** instead.
I'm using [Google Noto Color Emoji][4] in this article.
### Get set up
To get set up, launch GNOME's Settings application.
1. In Settings, click the **Region & Language** category in the left column.
2. Click the plus symbol (**+**) under the **Input Sources** heading to bring up the **Add an Input Source** panel.
![Add a new input source][5]
3. In the **Add an Input Source** panel, click the hamburger menu at the bottom of the input list.
![Add an Input Source panel][6]
4. Scroll to the bottom of the list and select **Other**.
5. In the **Other** list, find **Other (Typing Booster)**. (You can type **boost** in the search field at the bottom to filter the list.)
![Find Other \(Typing Booster\) in inputs][7]
6. Click the **Add** button in the top-right corner of the panel to add the input source to GNOME.
Once you've done that, you can close the Settings window.
#### Switch to Typing Booster
You now have a new icon in the top-right of your GNOME desktop. By default, it's set to the two-letter abbreviation of your language (**en** for English, **eo** for Esperanto, **es** for Español, and so on). If you press the **Super** key (the key with a Linux penguin, Windows logo, or Mac Command symbol) and the **Spacebar** together on your keyboard, you will switch input sources from your default source to the next on your input list. In this example, you only have two input sources: your default language and Typing Booster.
Try pressing **Super**+**Spacebar** together and watch the input name and icon change.
#### Configure Typing Booster
With the Typing Booster input method active, click the input sources icon in the top-right of your screen, select **Unicode symbols and emoji predictions**, and set it to **On**.
![Set Unicode symbols and emoji predictions to On][8]
This makes Typing Booster dedicated to typing emoji, which isn't all Typing Booster is good for, but in the context of this article it's exactly what is needed.
### Type emoji
With Typing Booster still active, open a text editor like Gedit, a web browser, or anything that you know understands Unicode characters, and type "_thumbs up_." As you type, Typing Booster searches for matching emoji names.
![Typing Booster searching for emojis][9]
To leave emoji mode, press **Super**+**Spacebar** again, and your input source goes back to your default language.
### Switch the switcher
If the **Super**+**Spacebar** keyboard shortcut is not natural for you, then you can change it to a different combination. In GNOME Settings, navigate to **Devices** and select **Keyboard**.
In the top bar of the **Keyboard** window, search for **Input** to filter the list. Set **Switch to next input source** to a key combination of your choice.
![Changing keystroke combination in GNOME settings][10]
### Unicode input
The fact is, keyboards were designed for a 26-letter (or thereabouts) alphabet along with a handful of numerals and symbols. ASCII has more characters than you find on a typical keyboard, to say nothing of the millions of characters within Unicode. If you want to type Unicode characters into a modern Linux application but don't want to switch to Typing Booster, then you can use the Unicode input shortcut.
1. With your default language active, open a text editor like Gedit, a web browser, or any application you know accepts Unicode.
2. Press **Ctrl**+**Shift**+**U** on your keyboard to enter Unicode entry mode. Release the keys.
3. You are currently in Unicode entry mode, so type the number of a Unicode symbol. For instance, try **1F44D** for a 👍 symbol, or **2620** for a ☠ symbol. To get the number code of a Unicode symbol, you can search the internet or refer to the [Unicode specification][11].
### Pragmatic emoji-ism
Emoji are fun and expressive. They can make your text unique to you. They can also be utilitarian. Because emoji are Unicode characters, they can be used anywhere a font can be used, and they can be used the same way any alphabetic character can be used. For instance, if you want to mark a series of files with a special symbol, you can add an emoji to the name, and you can filter by that emoji in Search.
![Labeling a file with emoji][12]
Use emoji all you want because Linux is a Unicode-friendly environment, and it's getting friendlier with every release.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/how-type-emoji-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead_cat-keyboard.png?itok=fuNmiGV- (A cat under a keyboard.)
[2]: https://opensource.com/sites/default/files/uploads/emacs-emoji.jpg (Emoji in Emacs)
[3]: https://www.gnome.org/
[4]: https://www.google.com/get/noto/help/emoji/
[5]: https://opensource.com/sites/default/files/uploads/gnome-setting-region-add.png (Add a new input source)
[6]: https://opensource.com/sites/default/files/uploads/gnome-setting-input-list.png (Add an Input Source panel)
[7]: https://opensource.com/sites/default/files/uploads/gnome-setting-input-other-typing-booster.png (Find Other (Typing Booster) in inputs)
[8]: https://opensource.com/sites/default/files/uploads/emoji-input-on.jpg (Set Unicode symbols and emoji predictions to On)
[9]: https://opensource.com/sites/default/files/uploads/emoji-input.jpg (Typing Booster searching for emojis)
[10]: https://opensource.com/sites/default/files/uploads/gnome-setting-keyboard-switch-input.jpg (Changing keystroke combination in GNOME settings)
[11]: http://unicode.org/emoji/charts/full-emoji-list.html
[12]: https://opensource.com/sites/default/files/uploads/file-label.png (Labeling a file with emoji)

View File

@ -0,0 +1,218 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to the Linux useradd command)
[#]: via: (https://opensource.com/article/19/10/linux-useradd-command)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Intro to the Linux useradd command
======
Add users (and customize their accounts as needed) with the useradd
command.
![people in different locations who are part of the same team][1]
Adding a user is one of the most fundamental exercises on any computer system; this article focuses on how to do it on a Linux system.
Before getting started, I want to mention three fundamentals to keep in mind. First, like with most operating systems, Linux users need an account to be able to log in. This article specifically covers local accounts, not network accounts such as LDAP. Second, accounts have both a name (called a username) and a number (called a user ID). Third, users are typically placed into a group. Groups also have a name and group ID.
As you'd expect, Linux includes a command-line utility for adding users; it's called **useradd**. You may also find the command **adduser**. Many distributions have added this symbolic link to the **useradd** command as a matter of convenience.
```
$ file `which adduser`
/usr/sbin/adduser: symbolic link to useradd
```
Let's take a look at **useradd**.
> Note: The defaults described in this article reflect those in Red Hat Enterprise Linux 8.0. You may find subtle differences in these files and certain defaults on other Linux distributions or other Unix operating systems such as FreeBSD or Solaris.
### Default behavior
The basic usage of **useradd** is quite simple: A user can be added just by providing their username.
```
$ sudo useradd sonny
```
In this example, the **useradd** command creates an account called _sonny_. A group with the same name is also created, and _sonny_ is placed in it to be used as the primary group. There are other parameters, such as language and shell, that are applied according to defaults and values set in the configuration files **/etc/default/useradd** and **/etc/login.defs**. This is generally sufficient for a single, personal system or a small, one-server business environment.
While the two files above govern the behavior of **useradd**, user information is stored in other files found in the **/etc** directory, which I will refer to throughout this article.
File | Description | Fields (bold—set by useradd)
---|---|---
passwd | Stores user account details | **username**:unused:**uid**:**gid**:**comment**:**homedir**:**shell**
shadow | Stores user account security details | **username**:password:lastchange:minimum:maximum:warn:**inactive**:**expire**:unused
group | Stores group details | **groupname**:unused:**gid**:**members**
### Customizable behavior
The command line allows customization for times when an administrator needs finer control, such as to specify a user's ID number.
#### User and group ID numbers
By default, **useradd** tries to use the same number for the user ID (UID) and primary group ID (GID), but there are no guarantees. Although it's not necessary for the UID and GID to match, it's easier for administrators to manage them when they do.
I have just the scenario to explain. Suppose I add another account, this time for Timmy. Comparing the two users, _sonny_ and _timmy_, with the **getent** command shows that both users and their respective primary groups were created.
```
$ getent passwd sonny timmy
sonny:x:1001:1002:Sonny:/home/sonny:/bin/bash
timmy:x:1002:1003::/home/timmy:/bin/bash
$ getent group sonny timmy
sonny:x:1002:
timmy:x:1003:
```
Unfortunately, neither user's UID matches their primary GID. This is because the default behavior is to assign the next available UID to the user and then attempt to assign the same number to the primary group. However, if that number is already used, the next available GID is assigned to the group. To explain what happened, I hypothesize that a group with GID 1001 already exists and enter a command to confirm.
```
$ getent group 1001
book:x:1001:alan
```
The group _book_ with the ID _1001_ has caused the GIDs to be off by one. This is an example where a system administrator would need to take more control of the user-creation process. To resolve this issue, I must first determine the next available user and group ID that will match. The commands **getent group** and **getent passwd** will be helpful in determining the next available number. This number can be passed with the **-u** argument.
```
$ sudo useradd -u 1004 bobby
$ getent passwd bobby; getent group bobby
bobby:x:1004:1004::/home/bobby:/bin/bash
bobby:x:1004:
```
Another good reason to specify the ID is for users that will be accessing files on a remote system using the Network File System (NFS). NFS is easier to administer when all client and server systems have the same ID configured for a given user. I cover this in a bit more detail in my article on [using autofs to mount NFS shares][2].
### More customization
Very often though, other account parameters need to be specified for a user. Here are brief examples of the most common customizations you may need to use.
#### Comment
The comment option is a plain-text field for providing a short description or other information using the **-c** argument.
```
$ sudo useradd -c "Bailey is cool" bailey
$ getent passwd bailey
bailey:x:1011:1011:Bailey is cool:/home/bailey:/bin/bash
```
#### Groups
A user can be assigned one primary group and multiple secondary groups. The **-g** argument specifies the name or GID of the primary group. If it's not specified, **useradd** creates a primary group with the user's same name (as demonstrated above). The **-G** (uppercase) argument is used to pass a comma-separated list of groups that the user will be placed into; these are known as secondary groups.
```
$ sudo useradd -G tgroup,fgroup,libvirt milly
$ id milly
uid=1012(milly) gid=1012(milly) groups=1012(milly),981(libvirt),4000(fgroup),3000(tgroup)
```
#### Home directory
The default behavior of **useradd** is to create the user's home directory in **/home**. However, different aspects of the home directory can be overridden with the following arguments. The **-b** argument sets a different base directory where user homes are placed, for example, **/home2** instead of the default **/home**.
```
$ sudo useradd -b /home2 vicky
$ getent passwd vicky
vicky:x:1013:1013::/home2/vicky:/bin/bash
```
The **-d** argument lets you specify a home directory with a different name from the username.
```
$ sudo useradd -d /home/ben jerry
$ getent passwd jerry
jerry:x:1014:1014::/home/ben:/bin/bash
```
#### The skeleton directory
The **-k** argument specifies an alternative skeleton directory whose contents are copied into the new user's home directory (the default is **/etc/skel**). These are usually shell configuration files, but they can be anything that a system administrator would like to make available to all new users.
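For instance (a hypothetical example; the **/etc/skel_dev** directory and its contents are made up for illustration), **-k** is used together with **-m**, which tells **useradd** to create the home directory:

```
$ ls /etc/skel_dev
.bashrc  .gitconfig  .vimrc
$ sudo useradd -m -k /etc/skel_dev dev1
$ ls -a /home/dev1
.  ..  .bashrc  .gitconfig  .vimrc
```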
#### Shell
The **-s** argument can be used to specify the shell. The default is used if nothing else is specified. For example, in the following, shell **bash** is defined in the default configuration file, but Wally has requested **zsh**.
```
$ grep SHELL /etc/default/useradd
SHELL=/bin/bash
$ sudo useradd -s /usr/bin/zsh wally
$ getent passwd wally
wally:x:1004:1004::/home/wally:/usr/bin/zsh
```
#### Security
Security is an essential part of user management, so there are several options available with the **useradd** command. A user account can be given an expiration date, in the form YYYY-MM-DD, using the **-e** argument.
```
$ sudo useradd -e 20191231 sammy
$ sudo getent shadow sammy
sammy:!!:18171:0:99999:7::20191231:
```
An account can also be disabled automatically if the password expires. The **-f** argument will set the number of days after the password expires before the account is disabled. Zero is immediate.
```
$ sudo useradd -f 30 willy
$ sudo getent shadow willy
willy:!!:18171:0:99999:7:30::
```
### A real-world example
In practice, several of these arguments may be used when creating a new user account. For example, if I need to create an account for Perry, I might use the following command:
```
$ sudo useradd -u 1020 -c "Perry Example" \
-G tgroup -b /home2 \
-s /usr/bin/zsh \
-e 20201201 -f 5 perry
```
Refer to the sections above to understand each option. Verify the results with:
```
$ getent passwd perry; getent group perry; getent shadow perry; id perry
perry:x:1020:1020:Perry Example:/home2/perry:/usr/bin/zsh
perry:x:1020:
perry:!!:18171:0:99999:7:5:20201201:
uid=1020(perry) gid=1020(perry) groups=1020(perry),3000(tgroup)
```
### Some final advice
The **useradd** command is a "must-know" for any Unix (not just Linux) administrator. It is important to understand all of its options since user creation is something that you want to get right the first time. This means having a well-thought-out naming convention that includes a dedicated UID/GID range reserved for your users across your enterprise, not just on a single system—particularly when you're working in a growing organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/linux-useradd-command
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connection_people_team_collaboration.png?itok=0_vQT8xV (people in different locations who are part of the same team)
[2]: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares

View File

@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using multitail on Linux)
[#]: via: (https://www.networkworld.com/article/3445228/using-multitail-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Using multitail on Linux
======
[Glen Bowman][1] [(CC BY-SA 2.0)][2]
The **multitail** command can be very helpful whenever you want to watch activity on a number of files at the same time, especially log files. It works like a multi-windowed **tail -f** command. That is, it displays the bottoms of files and new lines as they are being added. While easy to use in general, **multitail** does provide some command-line and interactive options that you should be aware of before you start to use it routinely.
### Basic multitail-ing
The simplest use of **multitail** is to list the names of the files that you wish to watch on the command line. This command splits the screen horizontally (i.e., top and bottom), displaying the bottom of each of the files along with updates.
```
$ multitail /var/log/syslog /var/log/dmesg
```
The display will be split like this:
```
+-----------------------+
| |
| |
+-----------------------+
| |
| |
+-----------------------+
```
The lines displayed from each of the files would be followed by a single line per file that includes the assigned file number (starting with 00), the file name, the file size, and the date and time the most recent content was added. Each of the files will be allotted half the space available regardless of its size or activity. For example:
```
content lines from my1.log
more content
more lines
00] my1.log 59KB - 2019/10/14 12:12:09
content lines from my2.log
more content
more lines
01] my2.log 120KB - 2019/10/14 14:22:29
```
Note that **multitail** will not complain if you ask it to display non-text files or files that you have no permission to view; you just won't see the contents.
You can also use wild cards to specify the files that you want to watch:
```
$ multitail my*.log
```
One thing to keep in mind is that **multitail** is going to split the screen evenly. If you specify too many files, you will see only a few lines from each, and you will see only the first seven or so of the requested files unless you take extra steps to view the later ones (see the scrolling option described below). The exact result depends on how many lines are available in your terminal window.
Press **q** to quit **multitail** and return to your normal screen view.
### Dividing the screen
**Multitail** will split your terminal window vertically (i.e., left and right) if you prefer. For this, use the **-s** option. If you specify three files, the right side of your screen will be divided horizontally as well. With four, you'll have four equal-sized windows.
```
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| | | | | | | | |
| | | | | | | | |
| | | | +-----------+ +-----------+-----------+
| | | | | | | | |
| | | | | | | | |
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
2 files 3 files 4 files
```
Use **multitail -s 3 file1 file2 file3** if you want to split the screen into three columns.
```
+-------+-------+-------+
| | | |
| | | |
| | | |
| | | |
| | | |
+-------+-------+-------+
3 files with -s 3
```
### Scrolling
You can scroll up and down through displayed files, but you need to press **b** to bring up a selection menu and then use the up and down arrow keys to select the file you wish to scroll through. Then press the **enter** key. You can then scroll through the lines in an enlarged area, again using the up and down arrows. Press **q** when you're done to go back to the normal view.
### Getting Help
Pressing **h** in **multitail** will open a help menu describing some of the basic operations, though the man page provides quite a bit more information and is worth perusing if you want to learn even more about using this tool.
**Multitail** will not likely be installed on your system by default, but using **apt-get** or **yum** should get you to an easy install. The tool provides a lot of functionality, but with its character-based display, window borders will just be strings of **q**'s and **x**'s. It's a very handy tool when you need to keep an eye on file updates.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3445228/using-multitail-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/glenbowman/7992498919/in/photolist-dbgDtv-gHfRRz-5uRM4v-gHgFnz-6sPqTZ-5uaP7H-USFPqD-pbtRUe-fiKiYn-nmgWL2-pQNepR-q68p8d-dDsUxw-dbgFKG-nmgE6m-DHyqM-nCKA4L-2d7uFqH-Kbqzk-8EwKg-8Vy72g-2X3NSN-78Bv84-buKWXF-aeM4ok-yhweWf-4vwpyX-9hu8nq-9zCoti-v5nzP5-23fL48r-24y6pGS-JhWDof-6zF75k-24y6nHS-9hr19c-Gueh6G-Guei7u-GuegFy-24y6oX5-26qu5iX-wKrnMW-Gueikf-24y6oYh-27y4wwA-x4z19F-x57yP4-24BY6gc-24y6nPo-QGwbkf
[2]: https://creativecommons.org/licenses/by-sa/2.0/legalcode
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,210 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to Configure Rsyslog Server in CentOS 8 / RHEL 8
======
**Rsyslog** is a free and open source logging utility that exists by default on **CentOS** 8 and **RHEL** 8 systems. It provides an easy and effective way of **centralizing logs** from client nodes to a single central server. Centralizing logs is beneficial in two ways. First, it simplifies viewing of logs, since the systems administrator can view all the logs of remote servers from a central point without logging into every client system; this is especially valuable when several servers need to be monitored. Second, if a remote client suffers a crash, you need not worry about losing its logs, because they will all be saved on the **central rsyslog server**. Rsyslog has replaced syslog, which supported only the **UDP** protocol. It extends the basic syslog protocol with superior features such as support for both **UDP** and **TCP** protocols in transporting logs, augmented filtering abilities, and flexible configuration options. That said, let's explore how to configure the Rsyslog server in CentOS 8 / RHEL 8 systems.
[![configure-rsyslog-centos8-rhel8][1]][2]
### Prerequisites
We are going to have the following lab setup to test the centralized logging process:
* **Rsyslog server**: CentOS 8 Minimal, IP address: 10.128.0.47
* **Client system**: RHEL 8 Minimal, IP address: 10.128.0.48
From the setup above, we will demonstrate how you can set up the Rsyslog server and later configure the client system to ship logs to the Rsyslog server for monitoring.
Let's get started!
### Configuring the Rsyslog Server on CentOS 8
By default, Rsyslog comes installed on CentOS 8 / RHEL 8 servers. To verify the status of Rsyslog, log in via SSH and issue the command:
```
$ systemctl status rsyslog
```
Sample Output
![rsyslog-service-status-centos8][1]
If rsyslog is not present for whatever reason, you can install it using the command:
```
$ sudo yum install rsyslog
```
Next, you need to modify a few settings in the Rsyslog configuration file. Open the configuration file.
```
$ sudo vim /etc/rsyslog.conf
```
Scroll down and uncomment the lines shown below to allow reception of logs via the UDP protocol:
```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```
![rsyslog-conf-centos8-rhel8][1]
Similarly, if you prefer to enable TCP rsyslog reception, uncomment these lines:
```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```
![rsyslog-conf-tcp-centos8-rhel8][1]
Save and exit the configuration file.
To receive the logs from the client system, we need to open Rsyslog's default port 514 on the firewall. To achieve this, run:
```
# sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```
Next, reload the firewall to save the changes
```
# sudo firewall-cmd --reload
```
Sample Output
![firewall-ports-rsyslog-centos8][1]
Next, restart the Rsyslog server:
```
$ sudo systemctl restart rsyslog
```
To enable Rsyslog on boot, run the command below:
```
$ sudo systemctl enable rsyslog
```
To confirm that the Rsyslog server is listening on port 514, use the netstat command as follows:
```
$ sudo netstat -pnltu
```
Sample Output
![netstat-rsyslog-port-centos8][1]
Perfect! We have successfully configured our Rsyslog server to receive logs from the client system.
To view log messages in real time, run the command:
```
$ tail -f /var/log/messages
```
Let's now configure the client system.
### Configuring the client system on RHEL 8
As on the Rsyslog server, log in and check whether the rsyslog daemon is running by issuing the command:
```
$ sudo systemctl status rsyslog
```
Sample Output
![client-rsyslog-service-rhel8][1]
Next, proceed to open the rsyslog configuration file
```
$ sudo vim /etc/rsyslog.conf
```
At the end of the file, append one of the following lines, matching the protocol you enabled on the server (using both would send each log twice):
```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```
Save and exit the configuration file. Just like on the Rsyslog server, open port 514, the default Rsyslog port, on the firewall:
```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```
Next, reload the firewall to save the changes
```
$ sudo firewall-cmd --reload
```
Next,  restart the rsyslog service
```
$ sudo systemctl restart rsyslog
```
To enable Rsyslog on boot, run the following command:
```
$ sudo systemctl enable rsyslog
```
### Testing the logging operation
Having successfully set up and configured the Rsyslog server and the client system, it's time to verify that your configuration is working as intended.
On the client system issue the command:
```
# logger "Hello guys! This is our first log"
```
Now head over to the Rsyslog server and run the command below to check the log messages in real time:
```
# tail -f /var/log/messages
```
The message logged on the client system should appear in the Rsyslog server's log, confirming that the Rsyslog server is now receiving logs from the client system.
![centralize-logs-rsyslogs-centos8][1]
And that's it! We have successfully set up the Rsyslog server to receive log messages from a client system.
Read Also: **[How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8][3]**
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
[3]: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/

View File

@ -0,0 +1,516 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use Protobuf for data interchange)
[#]: via: (https://opensource.com/article/19/10/protobuf-data-interchange)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
How to use Protobuf for data interchange
======
Protobuf encoding increases efficiency when exchanging data between
applications written in different languages and running on different
platforms.
![metrics and data shown on a computer screen][1]
Protocol buffers ([Protobufs][2]), like XML and JSON, allow applications, which may be written in different languages and running on different platforms, to exchange data. For example, a sending application written in Go could encode a Go-specific sales order in Protobuf, which a receiver written in Java then could decode to get a Java-specific representation of the received order. Here is a sketch of the architecture over a network connection:
```
Go sales order--->Pbuf-encode--->network--->Pbuf-decode--->Java sales order
```
Protobuf encoding, in contrast to its XML and JSON counterparts, is binary rather than text, which can complicate debugging. However, as the code examples in this article confirm, the Protobuf encoding is significantly more efficient in size than either XML or JSON encoding.
Protobuf is efficient in another way. At the implementation level, Protobuf and other encoding systems serialize and deserialize structured data. Serialization transforms a language-specific data structure into a bytestream, and deserialization is the inverse operation that transforms a bytestream back into a language-specific data structure. Serialization and deserialization may become the bottleneck in data interchange because these operations are CPU-intensive. Efficient serialization and deserialization is another Protobuf design goal.
Recent encoding technologies, such as Protobuf and FlatBuffers, derive from the [DCE/RPC][3] (Distributed Computing Environment/Remote Procedure Call) initiative of the early 1990s. Like DCE/RPC, Protobuf contributes to both the [IDL][4] (interface definition language) and the encoding layer in data interchange.
This article will look at these two layers, then provide code examples in Go and Java to flesh out Protobuf details and show that Protobuf is easy to use.
### Protobuf as an IDL and encoding layer
DCE/RPC, like Protobuf, is designed to be language- and platform-neutral. The appropriate libraries and utilities allow any language and platform to play in the DCE/RPC arena. Furthermore, the DCE/RPC architecture is elegant. An IDL document is the contract between the remote procedure on the one side and callers on the other side. Protobuf, too, centers on an IDL document.
An IDL document is text and, in DCE/RPC, uses basic C syntax along with syntactic extensions for metadata (square brackets) and a few new keywords such as **interface**. Here is an example:
```
[uuid (2d6ead46-05e3-11ca-7dd1-426909beabcd), version(1.0)]
interface echo {
   const long int ECHO_SIZE = 512;
   void echo(
      [in]          handle_t h,
      [in, string]  idl_char from_client[ ],
      [out, string] idl_char from_service[ECHO_SIZE]
   );
}
```
This IDL document declares a procedure named **echo**, which takes three arguments: the **[in]** arguments of type **handle_t** (implementation pointer) and **idl_char** (array of ASCII characters) are passed to the remote procedure, whereas the **[out]** argument (also a string) is passed back from the procedure. In this example, the **echo** procedure does not explicitly return a value (the **void** to the left of **echo**) but could do so. A return value, together with one or more **[out]** arguments, allows the remote procedure to return arbitrarily many values. The next section introduces a Protobuf IDL, which differs in syntax but likewise serves as a contract in data interchange.
The IDL document, in both DCE/RPC and Protobuf, is the input to utilities that create the infrastructure code for exchanging data:
```
IDL document--->DCE/RPC or Protobuf utilities--->support code for data interchange
```
As relatively straightforward text, the IDL is likewise human-readable documentation about the specifics of the data interchange—in particular, the number of data items exchanged and the data type of each item.
Protobuf can be used in a modern RPC system such as [gRPC][5], but Protobuf on its own provides only the IDL layer and the encoding layer for messages passed from a sender to a receiver. Protobuf encoding, like the DCE/RPC original, is binary but more efficient.
At present, XML and JSON encodings still dominate in data interchange through technologies such as web services, which make use of in-place infrastructure such as web servers, transport protocols (e.g., TCP, HTTP), and standard libraries and utilities for processing XML and JSON documents. Moreover, database systems of various flavors can store XML and JSON documents, and even legacy relational systems readily generate XML encodings of query results. Every general-purpose programming language now has libraries that support XML and JSON. What, then, recommends a return to a _binary_ encoding system such as Protobuf?
Consider the negative decimal value **-128**. In the 2's complement binary representation, which dominates across systems and languages, this value can be stored in a single 8-bit byte: 10000000. The text encoding of this integer value in XML or JSON requires multiple bytes. For example, UTF-8 encoding requires four bytes for the string, literally **-128**, which is one byte per character (in hex, the values are 0x2d, 0x31, 0x32, and 0x38). XML and JSON also add markup characters, such as angle brackets and braces, to the mix. Details about Protobuf encoding are forthcoming, but the point of interest now is a general one: Text encodings tend to be significantly less compact than binary ones.
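A quick way to see this difference for yourself is the short Go sketch below; it is illustrative only and not part of the article's programs. It contrasts a fixed one-byte two's complement encoding with the UTF-8 text encoding of the same value:
```
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"strconv"
)

func main() {
	// Two's complement binary: -128 fits in a single 8-bit byte (0x80).
	var buf bytes.Buffer
	binary.Write(&buf, binary.LittleEndian, int8(-128))
	fmt.Println("binary encoding:", buf.Len(), "byte(s)") // 1

	// UTF-8 text: the string "-128" takes one byte per character.
	text := strconv.Itoa(-128)
	fmt.Println("text encoding:  ", len(text), "byte(s)") // 4
}
```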
### A code example in Go using Protobuf
My code examples focus on Protobuf rather than RPC. Here is an overview of the first example:
  * The IDL file named _dataitem.proto_ defines a Protobuf **message** with eight fields of different types: integer values with different ranges, floating-point values of a fixed size, and strings of two different lengths.
* The Protobuf compiler uses the IDL file to generate a Go-specific version (and, later, a Java-specific version) of the Protobuf **message** together with supporting functions.
* A Go app populates the native Go data structure with randomly generated values and then serializes the result to a local file. For comparison, XML and JSON encodings also are serialized to local files.
* As a test, the Go application reconstructs an instance of its native data structure by deserializing the contents of the Protobuf file.
* As a language-neutrality test, the Java application also deserializes the contents of the Protobuf file to get an instance of a native data structure.
The IDL file, together with two Go source files and one Java source file, is available as a ZIP file on [my website][6].
The all-important Protobuf IDL document is shown below. The document is stored in the file _dataitem.proto_, with the customary _.proto_ extension.
#### Example 1. Protobuf IDL document
```
syntax = "proto3";
package main;
message DataItem {
  int64  oddA  = 1;
  int64  evenA = 2;
  int32  oddB  = 3;
  int32  evenB = 4;
  float  small = 5;
  float  big   = 6;
  string short = 7;
  string long  = 8;
}
```
The IDL uses the current proto3 rather than the earlier proto2 syntax. The package name (in this case, **main**) is optional but customary; it is used to avoid name conflicts. The structured **message** contains eight fields, each of which has a Protobuf data type (e.g., **int64**, **string**), a name (e.g., **oddA**, **short**), and a numeric tag (aka key) after the equals sign **=**. The tags, which are 1 through 8 in this example, are unique integer identifiers that determine the order in which the fields are serialized.
Protobuf messages can be nested to arbitrary levels, and one message can be the field type in another. Here's an example that uses the **DataItem** message as a field type:
```
message DataItems {
  repeated DataItem item = 1;
}
```
A single **DataItems** message consists of repeated (zero or more) **DataItem** messages.
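In the generated Go code, a repeated field becomes a slice, so building up a **DataItems** value is ordinary Go. Here is a sketch; the **Item** field name assumes the generator's usual capitalization convention, which matches the **Pair** field shown later in the _numPairs_ program:
```
items := &DataItems{}                               // empty collection
items.Item = append(items.Item, &DataItem{OddA: 1}) // add one element
items.Item = append(items.Item, &DataItem{OddA: 3}) // add another
```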
Protobuf also supports enumerated types for clarity:
```
enum PartnershipStatus {
  FREE = 0;
  CONSTRAINED = 1;
  OTHER = 2;
}
```
In proto3, the first enumerated value must be zero, and it serves as the default. If a symbolic name or its numeric value is later retired, the **reserved** qualifier (for example, **reserved "OTHER";**) ensures that it cannot be reused, which guards against clashes with older serialized data.
To generate a language-specific version of one or more declared Protobuf **message** structures, the IDL file containing these is passed to the _protoc_ compiler (available in the [Protobuf GitHub repository][7]). For the Go code, the supporting Protobuf library can be installed in the usual way (with **%** as the command-line prompt):
```
% go get github.com/golang/protobuf/proto
```
The command to compile the Protobuf IDL file _dataitem.proto_ into Go source code is:
```
% protoc --go_out=. dataitem.proto
```
The flag **--go_out** directs the compiler to generate Go source code; there are similar flags for other languages. The result, in this case, is a file named _dataitem.pb.go_, which is small enough that the essentials can be copied into a Go application. Here are the essentials from the generated code:
```
var _ = proto.Marshal
type DataItem struct {
   OddA  int64   `protobuf:"varint,1,opt,name=oddA" json:"oddA,omitempty"`
   EvenA int64   `protobuf:"varint,2,opt,name=evenA" json:"evenA,omitempty"`
   OddB  int32   `protobuf:"varint,3,opt,name=oddB" json:"oddB,omitempty"`
   EvenB int32   `protobuf:"varint,4,opt,name=evenB" json:"evenB,omitempty"`
   Small float32 `protobuf:"fixed32,5,opt,name=small" json:"small,omitempty"`
   Big   float32 `protobuf:"fixed32,6,opt,name=big" json:"big,omitempty"`
   Short string  `protobuf:"bytes,7,opt,name=short" json:"short,omitempty"`
   Long  string  `protobuf:"bytes,8,opt,name=long" json:"long,omitempty"`
}
func (m *DataItem) Reset()         { *m = DataItem{} }
func (m *DataItem) String() string { return proto.CompactTextString(m) }
func (*DataItem) ProtoMessage()    {}
func init() {}
```
The compiler-generated code has a Go structure **DataItem**, which exports the Go fields—the names are now capitalized—that match the names declared in the Protobuf IDL. The structure fields have standard Go data types: **int32**, **int64**, **float32**, and **string**. At the end of each field line, as a string, is metadata that describes the Protobuf types, gives the numeric tags from the Protobuf IDL document, and provides information about JSON, which is discussed later.
There are also functions; the most important is **proto.Marshal** for serializing an instance of the **DataItem** structure into Protobuf format. The helper functions include **Reset**, which clears a **DataItem** structure, and **String**, which produces a one-line string representation of a **DataItem**.
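As a quick sketch of how these generated pieces fit together (it assumes the generated **DataItem** code shown above is in the same package; the field values are arbitrary):
```
package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
)

func main() {
	item := &DataItem{OddA: 9, EvenA: 2, Short: "hi"} // unset fields keep zero values
	fmt.Println(item.String()) // one-line text form via the String helper

	encoded, err := proto.Marshal(item) // serialize to the Protobuf wire format
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d bytes: % x\n", len(encoded), encoded)
}
```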
The metadata that describes Protobuf encoding deserves a closer look before analyzing the Go program in more detail.
### Protobuf encoding
A Protobuf message is structured as a collection of key/value pairs, with the numeric tag as the key and the corresponding field as the value. The field names, such as **oddA** and **small**, are for human readability, but the _protoc_ compiler does use the field names in generating language-specific counterparts. For example, the **oddA** and **small** names in the Protobuf IDL become the fields **OddA** and **Small**, respectively, in the Go structure.
The keys and their values both get encoded, but with an important difference: some numeric values have a fixed-size encoding of 32 or 64 bits, whereas others (including the **message** tags) are _varint_ encoded—the number of bytes depends on the integer's value. For example, field tags 1 through 15 require only 8 bits to encode, whereas tags 16 through 2047 require 16 bits; a key byte also carries 3 bits of wire-type information, which is why the one-byte range for tags ends at 15. The _varint_ encoding, similar in spirit (but not in detail) to UTF-8 encoding, favors small integer values over large ones. (For a detailed analysis, see the Protobuf [encoding guide][8].) The upshot is that a Protobuf **message** should have small integer values in fields, if possible, and as few keys as possible, but one key per field is unavoidable.
Table 1 below gives the gist of Protobuf encoding:
**Table 1. Protobuf data types**
Encoding | Sample types | Length
---|---|---
varint | int32, uint32, int64 | Variable length
fixed | fixed32, float, double | Fixed 32-bit or 64-bit length
byte sequence | string, bytes | Sequence length
Integer types that are not explicitly **fixed** are _varint_ encoded; hence, in a _varint_ type such as **uint32** (**u** for unsigned), the number 32 describes the integer's range (in this case, 0 to 2^32 - 1) rather than its bit size, which differs depending on the value. For fixed types such as **fixed32** or **double**, by contrast, the Protobuf encoding requires 32 and 64 bits, respectively. Strings in Protobuf are byte sequences; hence, the size of the field encoding is the length of the byte sequence.
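The Go standard library happens to use the same base-128 varint scheme in its **encoding/binary** package, so the length boundaries are easy to check for yourself. The sketch below is illustrative and not part of the article's programs; note that it shows plain varint values, whose one-byte range ends at 127 (keys lose 3 bits to the wire type, which is why their one-byte range ends at 15):
```
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	for _, v := range []uint64{1, 127, 128, 2047, 16384} {
		n := binary.PutUvarint(buf, v) // writes v as a varint and returns the byte count
		fmt.Printf("%6d -> %d byte(s): % x\n", v, n, buf[:n])
	}
}
```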
Another efficiency deserves mention. Recall the earlier example in which a **DataItems** message consists of repeated **DataItem** instances:
```
message DataItems {
  repeated DataItem item = 1;
}
```
The **repeated** qualifier means that the **DataItem** instances are _packed_: the collection has a single tag, in this case, 1. A **DataItems** message with repeated **DataItem** instances is thus more efficient than a message with multiple but separate **DataItem** fields, each of which would require a tag of its own.
With this background in mind, let's return to the Go program.
### The dataItem program in detail
The _dataItem_ program creates a **DataItem** instance and populates the fields with randomly generated values of the appropriate types. Go has a **rand** package with functions for generating pseudo-random integer and floating-point values, and my **randString** function generates pseudo-random strings of specified lengths from a character set. The design goal is to have a **DataItem** instance with field values of different types and bit sizes. For example, the **OddA** and **EvenA** values are 64-bit non-negative integer values of odd and even parity, respectively; but the **OddB** and **EvenB** variants are 32 bits in size and hold small integer values between 0 and 2047. The random floating-point values are 32 bits in size, and the strings are 16 (**Short**) and 32 (**Long**) characters in length. Here is the code segment that populates the **DataItem** structure with random values:
```
// variable-length integers
n1 := rand.Int63()        // bigger integer
if (n1 & 1) == 0 { n1++ } // ensure it's odd
...
n3 := rand.Int31() % UpperBound // smaller integer
if (n3 & 1) == 0 { n3++ }       // ensure it's odd
// fixed-length floats
...
t1 := rand.Float32()
t2 := rand.Float32()
...
// strings
str1 := randString(StrShort)
str2 := randString(StrLong)
// the message
dataItem := &DataItem {
   OddA:  n1,
   EvenA: n2,
   OddB:  n3,
   EvenB: n4,
   Big:   f1,
   Small: f2,
   Short: str1,
   Long:  str2,
}
```
Once created and populated with values, the **DataItem** instance is encoded in XML, JSON, and Protobuf, with each encoding written to a local file:
```
func encodeAndserialize(dataItem *DataItem) {
   bytes, _ := xml.MarshalIndent(dataItem, "", " ")  // Xml to dataitem.xml
   ioutil.WriteFile(XmlFile, bytes, 0644)            // 0644 is file access permissions
   bytes, _ = json.MarshalIndent(dataItem, "", " ")  // Json to dataitem.json
   ioutil.WriteFile(JsonFile, bytes, 0644)
   bytes, _ = proto.Marshal(dataItem)                // Protobuf to dataitem.pbuf
   ioutil.WriteFile(PbufFile, bytes, 0644)
}
```
The three serializing functions use the term _marshal_, which is roughly synonymous with _serialize_. As the code indicates, each of the three **Marshal** functions returns an array of bytes, which then are written to a file. (Possible errors are ignored for simplicity.) On a sample run, the file sizes were:
```
dataitem.xml:  262 bytes
dataitem.json: 212 bytes
dataitem.pbuf:  88 bytes
```
The Protobuf encoding is significantly smaller than the other two. The XML and JSON serializations could be reduced slightly in size by eliminating indentation characters, in this case, blanks and newlines.
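For instance, swapping the indenting calls for their compact counterparts is a one-line change apiece. Here is a sketch of the comparison, meant to sit alongside the **encodeAndserialize** function above; **dataItem** is the populated instance from earlier:
```
pretty, _ := json.MarshalIndent(dataItem, "", " ") // with blanks and newlines
compact, _ := json.Marshal(dataItem)               // no indentation characters
fmt.Printf("indented: %d bytes, compact: %d bytes\n", len(pretty), len(compact))
```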
Below is the _dataitem.json_ file that results from the **json.MarshalIndent** call, with added comments starting with **##**:
```
{
 "oddA":  4744002665212642479,                ## 64-bit &gt;= 0
 "evenA": 2395006495604861128,                ## ditto
 "oddB":  57,                                 ## 32-bit &gt;= 0 but &lt; 2048
 "evenB": 468,                                ## ditto
 "small": 0.7562016,                          ## 32-bit floating-point
 "big":   0.85202795,                         ## ditto
 "short": "ClH1oDaTtoX$HBN5",                 ## 16 random chars
 "long":  "xId0rD3Cri%3Wt%^QjcFLJgyXBu9^DZI"  ## 32 random chars
}
```
Although the serialized data goes into local files, the same approach would be used to write the data to the output stream of a network connection.
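For example, here is a hedged sketch of pushing the same Protobuf bytes over TCP instead of into a file; the address is a placeholder, the snippet additionally needs the **net** package, and error handling is trimmed as in the article's other snippets:
```
conn, err := net.Dial("tcp", "localhost:9090") // hypothetical receiver address
if err != nil {
	log.Fatal(err)
}
defer conn.Close()

pbuf, _ := proto.Marshal(dataItem) // the same bytes written to dataitem.pbuf
if _, err := conn.Write(pbuf); err != nil {
	log.Fatal(err)
}
```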
### Testing serialization/deserialization
The Go program next runs an elementary test by deserializing the bytes, which were written earlier to the _dataitem.pbuf_ file, into a **DataItem** instance. Here is the code segment, with the error-checking parts removed:
```
filebytes, err := ioutil.ReadFile(PbufFile) // get the bytes from the file
...
testItem.Reset()                            // clear the DataItem structure
err = proto.Unmarshal(filebytes, testItem)  // deserialize into a DataItem instance
```
The **proto.Unmarshal** function for deserializing Protobuf is the inverse of the **proto.Marshal** function. The original **DataItem** and the deserialized clone are printed to confirm an exact match:
```
Original:
2041519981506242154 3041486079683013705 1192 1879
0.572123 0.326855
boPb#T0O8Xd&Ps5EnSZqDg4Qztvo7IIs 9vH66AiGSQgCDxk&
Deserialized:
2041519981506242154 3041486079683013705 1192 1879
0.572123 0.326855
boPb#T0O8Xd&Ps5EnSZqDg4Qztvo7IIs 9vH66AiGSQgCDxk&
```
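Rather than comparing the printed output by eye, the check can also be done programmatically: the golang/protobuf library provides **proto.Equal**, which compares two messages field by field. A sketch, reusing **dataItem** and **testItem** from the test above:
```
if proto.Equal(dataItem, testItem) {
	fmt.Println("deserialized clone matches the original")
} else {
	fmt.Println("mismatch after the round trip")
}
```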
### A Protobuf client in Java
The example in Java is to confirm Protobuf's language neutrality. The original IDL file could be used to generate the Java support code, which involves nested classes. To suppress warnings, however, a slight addition can be made. Here is the revision, which specifies a **DataMsg** as the name for the outer class, with the inner class automatically named **DataItem** after the Protobuf message:
```
syntax = "proto3";
package main;
option java_outer_classname = "DataMsg";
message DataItem {
...
```
With this change in place, the _protoc_ compilation is the same as before, except the desired output is now Java rather than Go:
```
% protoc --java_out=. dataitem.proto
```
The resulting source file (in a subdirectory named _main_) is _DataMsg.java_ and about 1,120 lines in length: Java is not terse. Compiling and then running the Java code requires a JAR file with the library support for Protobuf. This file is available in the [Maven repository][9].
With the pieces in place, my test code is relatively short (and available in the ZIP file as _Main.java_):
```
package main;
import java.io.FileInputStream;
public class Main {
   public static void main(String[] args) {
      String path = "dataitem.pbuf";  // from the Go program's serialization
      try {
         DataMsg.DataItem deserial =
           DataMsg.DataItem.newBuilder().mergeFrom(new FileInputStream(path)).build();
         System.out.println(deserial.getOddA()); // 64-bit odd
         System.out.println(deserial.getLong()); // 32-character string
      }
      catch(Exception e) { System.err.println(e); }
    }
}
```
Production-grade testing would be far more thorough, of course, but even this preliminary test confirms the language-neutrality of Protobuf: the _dataitem.pbuf_ file results from the Go program's serialization of a Go **DataItem**, and the bytes in this file are deserialized to produce a **DataItem** instance in Java. The output from the Java test is the same as that from the Go test.
### Wrapping up with the numPairs program
Let's end with an example that highlights Protobuf efficiency but also underscores the cost involved in any encoding technology. Consider this Protobuf IDL file:
```
syntax = "proto3";
package main;
message NumPairs {
  repeated NumPair pair = 1;
}
message NumPair {
  int32 odd = 1;
  int32 even = 2;
}
```
A **NumPair** message consists of two **int32** values together with an integer tag for each field. A **NumPairs** message is a sequence of embedded **NumPair** messages.
The _numPairs_ program in Go (below) creates 2 million **NumPair** instances, with each appended to the **NumPairs** message. This message can be serialized and deserialized in the usual way.
#### Example 2. The numPairs program
```
package main
import (
   "math/rand"
   "time"
   "encoding/xml"
   "encoding/json"
   "io/ioutil"
   "github.com/golang/protobuf/proto"
)
// protoc-generated code: start
var _ = proto.Marshal
type NumPairs struct {
   Pair []*NumPair `protobuf:"bytes,1,rep,name=pair" json:"pair,omitempty"`
}
func (m *NumPairs) Reset()         { *m = NumPairs{} }
func (m *NumPairs) String() string { return proto.CompactTextString(m) }
func (*NumPairs) ProtoMessage()    {}
func (m *NumPairs) GetPair() []*NumPair {
   if m != nil { return m.Pair }
   return nil
}
type NumPair struct {
   Odd  int32 `protobuf:"varint,1,opt,name=odd" json:"odd,omitempty"`
   Even int32 `protobuf:"varint,2,opt,name=even" json:"even,omitempty"`
}
func (m *NumPair) Reset()         { *m = NumPair{} }
func (m *NumPair) String() string { return proto.CompactTextString(m) }
func (*NumPair) ProtoMessage()    {}
func init() {}
// protoc-generated code: finish
var numPairsStruct NumPairs
var numPairs = &numPairsStruct
func encodeAndserialize() {
   // XML encoding
   filename := "./pairs.xml"
   bytes, _ := xml.MarshalIndent(numPairs, "", " ")
   ioutil.WriteFile(filename, bytes, 0644)
   // JSON encoding
   filename = "./pairs.json"
   bytes, _ = json.MarshalIndent(numPairs, "", " ")
   ioutil.WriteFile(filename, bytes, 0644)
   // ProtoBuf encoding
   filename = "./pairs.pbuf"
   bytes, _ = proto.Marshal(numPairs)
   ioutil.WriteFile(filename, bytes, 0644)
}
const HowMany = 200 * 100  * 100 // two million
func main() {
   rand.Seed(time.Now().UnixNano())
   // uncomment the modulus operations to get the more efficient version
   for i := 0; i < HowMany; i++ {
      n1 := rand.Int31() // % 2047
      if (n1 & 1) == 0 { n1++ } // ensure it's odd
      n2 := rand.Int31() // % 2047
      if (n2 & 1) == 1 { n2++ } // ensure it's even
      next := &NumPair {
                 Odd:  n1,
                 Even: n2,
              }
      numPairs.Pair = append(numPairs.Pair, next)
   }
   encodeAndserialize()
}
```
The randomly generated odd and even values in each **NumPair** range from zero to 2 billion and change. In terms of raw rather than encoded data, the integers generated in the Go program add up to 16MB: two integers per **NumPair** for a total of 4 million integers in all, and each value is four bytes in size.
For comparison, the table below has entries for the XML, JSON, and Protobuf encodings of the 2 million **NumPair** instances in the sample **NumPairs** message. The raw data is included as well. Because the _numPairs_ program generates random values, output differs across sample runs but is close to the sizes shown in the table.
**Table 2. Encoding overhead for 16MB of integers**
Encoding | File | Byte size | Pbuf/other ratio
---|---|---|---
None | pairs.raw | 16MB | 169%
Protobuf | pairs.pbuf | 27MB | —
JSON | pairs.json | 100MB | 27%
XML | pairs.xml | 126MB | 21%
As expected, Protobuf shines next to XML and JSON. The Protobuf encoding is about a quarter of the JSON one and about a fifth of the XML one. But the raw data make clear that Protobuf incurs the overhead of encoding: the serialized Protobuf message is 11MB larger than the raw data. Any encoding, including Protobuf, involves structuring the data, which unavoidably adds bytes.
Each of the serialized 2 million **NumPair** instances involves _four_ integer values: one apiece for the **Even** and **Odd** fields in the Go structure, and one tag per field in the Protobuf encoding. As raw rather than encoded data, this would come to 16 bytes per instance, and there are 2 million instances in the sample **NumPairs** message. But the Protobuf tags, like the **int32** values in the **NumPair** fields, use _varint_ encoding and, therefore, vary in byte length; in particular, small integer values (which include the tags, in this case) require fewer than four bytes to encode.
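This per-instance accounting can be spot-checked without writing any files: the golang/protobuf library's **proto.Size** reports the encoded length of a message. A sketch, with values chosen small enough to fit in one varint byte each:
```
pair := &NumPair{Odd: 3, Even: 4}
fmt.Println(proto.Size(pair)) // 4 bytes: a 1-byte key plus a 1-byte varint per field
```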
If the _numPairs_ program is revised so that the two **NumPair** fields hold values less than 2048, which have encodings of either one or two bytes, then the Protobuf encoding drops from 27MB to 16MB—the very size of the raw data. The arithmetic is roughly this: each field now costs a one-byte key plus a one- or two-byte varint, and each embedded **NumPair** adds about two bytes of framing, for roughly eight bytes per pair, which matches the eight raw bytes of its two four-byte integers. The table below summarizes the new encoding sizes from a sample run.
**Table 3. Encoding with 16MB of integers < 2048**
Encoding | File | Byte size | Pbuf/other ratio
---|---|---|---
None | pairs.raw | 16MB | 100%
Protobuf | pairs.pbuf | 16MB | —
JSON | pairs.json | 77MB | 21%
XML | pairs.xml | 103MB | 15%
In summary, the modified _numPairs_ program, with field values less than 2048, reduces the four-byte size for each integer value in the raw data. But the Protobuf encoding still requires tags, which add bytes to the Protobuf message. Protobuf encoding does have a cost in message size, but this cost can be reduced by the _varint_ factor if relatively small integer values, whether in fields or keys, are being encoded.
For moderately sized messages consisting of structured data with mixed types—and relatively small integer values—Protobuf has a clear advantage over options such as XML and JSON. In other cases, the data may not be suited for Protobuf encoding. For example, if two applications need to share a huge set of text records or large integer values, then compression rather than encoding technology may be the way to go.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/protobuf-data-interchange
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://developers.google.com/protocol-buffers/
[3]: https://en.wikipedia.org/wiki/DCE/RPC
[4]: https://en.wikipedia.org/wiki/Interface_description_language
[5]: https://grpc.io/
[6]: http://condor.depaul.edu/mkalin
[7]: https://github.com/protocolbuffers/protobuf
[8]: https://developers.google.com/protocol-buffers/docs/encoding
[9]: https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java

View File

@ -0,0 +1,122 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Perceiving Python programming paradigms)
[#]: via: (https://opensource.com/article/19/10/python-programming-paradigms)
[#]: author: (Jigyasa Grover https://opensource.com/users/jigyasa-grover)
Perceiving Python programming paradigms
======
Python supports imperative, functional, procedural, and object-oriented
programming; here are tips on choosing the right one for a specific use
case.
![A python with a package.][1]
Early each year, TIOBE announces its Programming Language of The Year. When its latest annual [TIOBE index][2] report came out, I was not at all surprised to see [Python again winning the title][3], which was based on capturing the most search engine ranking points (especially on Google, Bing, Yahoo, Wikipedia, Amazon, YouTube, and Baidu) in 2018.
![Python data from TIOBE Index][4]
Adding weight to TIOBE's findings, earlier this year, nearly 90,000 developers took Stack Overflow's annual [Developer Survey][5], which is the largest and most comprehensive survey of people who code around the world. The main takeaway from this year's results was:
> "Python, the fastest-growing major programming language, has risen in the ranks of programming languages in our survey yet again, edging out Java this year and standing as the second most loved language (behind Rust)."
Ever since I started programming and exploring different languages, I have seen admiration for Python soaring high. Since 2003, it has consistently been among the top 10 most popular programming languages. As TIOBE's report stated:
> "It is the most frequently taught first language at universities nowadays, it is number one in the statistical domain, number one in AI programming, number one in scripting and number one in writing system tests. Besides this, Python is also leading in web programming and scientific computing (just to name some other domains). In summary, Python is everywhere."
There are several reasons for Python's rapid rise, bloom, and dominance in multiple domains, including web development, scientific computing, testing, data science, machine learning, and more. The reasons include its readable and maintainable code; extensive support for third-party integrations and libraries; modular, dynamic, and portable structure; flexible programming; learning ease and support; user-friendly data structures; productivity and speed; and, most important, community support. The diverse application of Python is a result of its combined features, which give it an edge over other languages.
But in my opinion, the comparative simplicity of its syntax and the staggering flexibility it gives developers coming from many other languages take the cake. Very few languages can match Python's ability to conform to a developer's coding style rather than forcing him or her to code in a particular way. Python lets more advanced developers use the style they feel is best suited to solve a particular problem.
While working with Python, you are like a snake charmer: you get to take advantage of Python's promise of a non-conforming environment, coding in the style best suited to a particular situation while making the code more readable, testable, and coherent.
## Python programming paradigms
Python supports four main [programming paradigms][6]: imperative, functional, procedural, and object-oriented. Whether you agree that they are valid or even useful, Python strives to make all four available and working. Before we dive in to see which programming paradigm is most suitable for specific use cases, it is a good time to do a quick review of them.
### Imperative programming paradigm
The [imperative programming paradigm][7] uses the imperative mood of natural language to express directions. It executes commands in a step-by-step manner, just like a series of verbal commands. Following the "how-to-solve" approach, it makes direct changes to the state of the program; hence it is also called the stateful programming model. Using the imperative programming paradigm, you can quickly write very simple yet elegant code, and it is super-handy for tasks that involve data manipulation. Owing to its comparatively slower and sequential execution strategy, it cannot be used for complex or parallel computations.
[![Linus Torvalds quote][8]][9]
Consider this example task, where the goal is to take a list of characters and concatenate it to form a string. A way to do it in an imperative programming style would be something like:
```
>>> sample_characters = ['p','y','t','h','o','n']
>>> sample_string = ''
>>> sample_string
''
>>> sample_string = sample_string + sample_characters[0]
>>> sample_string
'p'
>>> sample_string = sample_string + sample_characters[1]
>>> sample_string
'py'
>>> sample_string = sample_string + sample_characters[2]
>>> sample_string
'pyt'
>>> sample_string = sample_string + sample_characters[3]
>>> sample_string
'pyth'
>>> sample_string = sample_string + sample_characters[4]
>>> sample_string
'pytho'
>>> sample_string = sample_string + sample_characters[5]
>>> sample_string
'python'
>>>
```
Here, the variable **sample_string** acts like the state of the program; it changes after each command in the series executes and can easily be extracted to track the program's progress. The same can be done using a **for** loop (also considered imperative programming) in a shorter version of the above code:
```
>>> sample_characters = ['p','y','t','h','o','n']
>>> sample_string = ''
>>> sample_string
''
>>> for c in sample_characters:
...    sample_string = sample_string + c
...    print(sample_string)
...
p
py
pyt
pyth
pytho
python
>>>
```
### Functional programming paradigm
The [functional programming paradigm][10] treats program computation as the evaluation of mathematical functions based on [lambda calculus][11]. Lambda calculus is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It follows the "what-to-solve" approach—that is, it expresses logic without describing its control flow—hence it is also classified as the declarative programming model.
The functional programming paradigm promotes stateless functions, but it's important to note that Python's implementation of functional programming deviates from standard implementation. Python is said to be an _impure_ functional language because it is possible to maintain state and create side effects if you are not careful. That said, functional programming is handy for parallel processing and is super-efficient for tasks requiring recursion and concurrent execution.
```
>>> sample_characters = ['p','y','t','h','o','n']
>>> import functools
>>> sample_string = functools.reduce(lambda s,c: s + c, sample_characters)
>>> sample_string
'python'
>>>
```
Using the same example, the functional way of concatenating a list of characters to form a string is shown above. Since the computation happens in a single line, there is no explicit way to obtain the state of the program with **sample_string** and track its progress. The functional programming implementation of this example is fascinating, as it reduces the lines of code and simply does its job in a single line, using the **functools** module and the **reduce** method. The three keywords—**functools**, **reduce**, and **lambda**—are defined as follows:
* **functools** is a module for higher-order functions and provides for functions that act on or return other functions. It encourages writing reusable code, as it is easier to replicate existing functions with some arguments already passed and create a new version of a function in a well-documented manner.
  * **reduce** is a method that applies a function of two arguments cumulatively to the items in a sequence, from left to right, so as to reduce the sequence to a single value. For example:
```
>>> sample_list = [1,2,3,4,5]
>>> import functools
>>> sum = functools.reduce(lambda x,y: x + y, sample_list)
>>> sum
15
>>> ((((1+2)+3)+4)+5)
15
>>>
```
  * **lambda functions** are small, anonymized (i.e., nameless) functions that can take any number of arguments but spit out only one value. They are useful when they are used as an argument to another function, as with **reduce** above.

View File

@ -0,0 +1,241 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (14 SCP Command Examples to Securely Transfer Files in Linux)
[#]: via: (https://www.linuxtechi.com/scp-command-examples-in-linux/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
14 SCP Command Examples to Securely Transfer Files in Linux
======
**SCP** (Secure Copy) is a command-line tool on Linux and Unix-like systems that is used to transfer files and directories across systems securely over the network. When we use the scp command to copy files and directories from our local system to a remote system, it makes an **ssh connection** to the remote system in the background. In other words, scp uses the same **SSH security mechanism** in the background; it needs either a password or keys for authentication.
![scp-command-examples-linux][2]
In this tutorial we will discuss 14 useful Linux scp command examples.
**Syntax of scp command:**
`scp <options> <files_or_directories> user@target_host:/<folder>`
`scp <options> user@target_host:/files <folder_local_system>`
The first form copies files or directories from the local system to the target host under a specified folder.
The second form copies files from the target host to the local system.
Some of the most widely used options of the scp command are listed below:
* -C : enable compression
* -i : identity file or private key
* -l : limit the bandwidth while copying
* -P : ssh port number of the target host
* -p : preserve permissions, modes, and access times of files while copying
* -q : suppress SSH warning messages
* -r : copy files and directories recursively
* -v : verbose output
Let's jump into the examples now!
###### Example:1) Copy a file from local system to remote system using scp
Let's assume we want to copy a JDK RPM package from our local Linux system to a remote system (172.20.10.8) using scp. Use the following command:
```
[root@linuxtechi ~]$ scp jdk-linux-x64_bin.rpm root@linuxtechi:/opt
root@linuxtechi's password:
jdk-linux-x64_bin.rpm 100% 10MB 27.1MB/s 00:00
[root@linuxtechi ~]$
```
The above command will copy the JDK RPM package to the remote system under the /opt folder.
###### Example:2) Copy a file from remote System to local system using scp
Let's suppose we want to copy a file from the remote system to our local system's /tmp folder; execute the following scp command:
```
[root@linuxtechi ~]$ scp root@linuxtechi:/root/Technical-Doc-RHS.odt /tmp
root@linuxtechi's password:
Technical-Doc-RHS.odt 100% 1109KB 31.8MB/s 00:00
[root@linuxtechi ~]$ ls -l /tmp/Technical-Doc-RHS.odt
-rwx------. 1 pkumar pkumar 1135521 Oct 19 11:12 /tmp/Technical-Doc-RHS.odt
[root@linuxtechi ~]$
```
###### Example:3) Verbose output while transferring files using scp (-v)
We can enable verbose output in scp with the -v option. Verbose output makes it easy to find out exactly what is happening in the background, which becomes very useful when **debugging connection**, **authentication**, and **configuration problems**.
```
root@linuxtechi ~]$ scp -v jdk-linux-x64_bin.rpm root@linuxtechi:/opt
Executing: program /usr/bin/ssh host 172.20.10.8, user root, command scp -v -t /opt
OpenSSH_7.8p1, OpenSSL 1.1.1 FIPS 11 Sep 2018
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf
debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config
debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 8: Applying options for *
debug1: Connecting to 172.20.10.8 [172.20.10.8] port 22.
debug1: Connection established.
…………
debug1: Next authentication method: password
root@linuxtechi's password:
```
###### Example:4) Transfer multiple files to remote system
Multiple files can be copied to a remote system with scp in one go; specify the files separated by spaces, as shown below:
```
[root@linuxtechi ~]$ scp install.txt index.html jdk-linux-x64_bin.rpm root@linuxtechi:/mnt
root@linuxtechi's password:
install.txt 100% 0 0.0KB/s 00:00
index.html 100% 85KB 7.2MB/s 00:00
jdk-linux-x64_bin.rpm 100% 10MB 25.3MB/s 00:00
[root@linuxtechi ~]$
```
###### Example:5) Transfer files across two remote hosts
Using scp, we can copy files and directories between two remote hosts. Suppose we have a local Linux system that can connect to two remote Linux systems; from the local system, we can use scp to copy files across these two systems.
Syntax:
`scp user@remote_host1:/<files_to_transfer> user@remote_host2:/<folder>`
Example is shown below,
```
# scp root@linuxtechi:~/backup-Oct.zip root@linuxtechi:/tmp
# ssh root@linuxtechi "ls -l /tmp/backup-Oct.zip"
-rwx------. 1 root root 747438080 Oct 19 12:02 /tmp/backup-Oct.zip
```
###### Example:6) Copy files and directories recursively (-r)
Use the -r option to recursively copy an entire directory from one system to another; an example is shown below:
```
[root@linuxtechi ~]$ scp -r Downloads root@linuxtechi:/opt
```
Use the command below to verify whether the Downloads folder was copied to the remote system:
```
[root@linuxtechi ~]$ ssh root@linuxtechi "ls -ld /opt/Downloads"
drwxr-xr-x. 2 root root 75 Oct 19 12:10 /opt/Downloads
[root@linuxtechi ~]$
```
###### Example:7) Increase transfer speed by enabling compression (-C)
We can increase the transfer speed by enabling compression with the -C option; it automatically enables compression at the source and decompression at the destination host.
```
root@linuxtechi ~]$ scp -r -C Downloads root@linuxtechi:/mnt
```
In the above example, we are transferring the Downloads directory with compression enabled.
###### Example:8) Limit bandwidth while copying ( -l )
Use the -l option to put a limit on bandwidth usage while copying. The bandwidth is specified in Kbit/s; an example is shown below:
```
[root@linuxtechi ~]$ scp -l 500 jdk-linux-x64_bin.rpm root@linuxtechi:/var
```
###### Example:9) Specify different ssh port while scp ( -P)
In some scenarios, the ssh port is changed on the destination host; in that case, we can specify the ssh port number with the -P option.
```
[root@linuxtechi ~]$ scp -P 2022 jdk-linux-x64_bin.rpm root@linuxtechi:/var
```
In the above example, the ssh port for the remote host is “2022”.
###### Example:10) Preserve permissions, modes and access time of files while copying (-p)
Use the “-p” option to preserve permissions, access times, and modes while copying from source to destination:
```
[root@linuxtechi ~]$ scp -p jdk-linux-x64_bin.rpm root@linuxtechi:/var/tmp
jdk-linux-x64_bin.rpm 100% 10MB 13.5MB/s 00:00
[root@linuxtechi ~]$
```
###### Example:11) Transferring files in quiet mode ( -q) in scp
Use the -q option to suppress the transfer progress meter and the warning and diagnostic messages of ssh; an example is shown below:
```
[root@linuxtechi ~]$ scp -q -r Downloads root@linuxtechi:/var/tmp
[root@linuxtechi ~]$
```
###### Example:12) Use an identity file in scp while transferring (-i)
In most Linux environments, key-based authentication is preferred. We specify the identity file or private key file with the -i option; an example is shown below:
```
[root@linuxtechi ~]$ scp -i my_key.pem -r Downloads root@linuxtechi:/root
```
In the above example, “my_key.pem” is the identity file or private key file.
###### Example:13) Use different ssh_config file in scp ( -F)
There are scenarios where you use different networks to connect to Linux systems; perhaps some networks are behind proxy servers. In that case, we need a different **ssh_config** file.
A different ssh_config file is specified in scp with the -F option; an example is shown below:
```
[root@linuxtechi ~]$ scp -F /home/pkumar/new_ssh_config -r Downloads root@linuxtechi:/root
root@linuxtechi's password:
jdk-linux-x64_bin.rpm 100% 10MB 16.6MB/s 00:00
backup-Oct.zip 100% 713MB 41.9MB/s 00:17
index.html 100% 85KB 6.6MB/s 00:00
[root@linuxtechi ~]$
```
###### Example:14) Use Different Cipher in scp command (-c)
By default, scp uses the AES-128 cipher to encrypt files. If you want to use another cipher, use the -c option followed by the cipher name.
Suppose we want to use the 3des-cbc cipher while transferring files; run the following scp command:
```
[root@linuxtechi ~]# scp -c 3des-cbc -r Downloads root@linuxtechi:/root
```
Use the command below to list the ciphers supported by ssh and scp:
```
[root@linuxtechi ~]# ssh -Q cipher localhost | paste -d , -s -
3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
[root@linuxtechi ~]#
```
That's all from this tutorial. To get more details about the scp command, kindly refer to its man page. Please share your feedback and comments in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/scp-command-examples-in-linux/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/scp-command-examples-linux.jpg

View File

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo)
How DevOps professionals can become security champions
======
Breaking down silos and becoming a champion for security will help you, your career, and your organization.
![A lock on the side of a building][1]
Security is a misunderstood element of DevOps. Some see it as outside DevOps' scope, while others consider it important enough (and often overlooked) to suggest using DevSecOps instead. No matter which side you agree with, it is clear that cybersecurity affects every one of us.
Each year, the [statistics on hacking][3] become more alarming. For example, a hack attack occurs every 39 seconds, which could result in the theft of records, identities, and proprietary projects you wrote for your company. It can take months (and possibly forever) for your security team to discover who was behind the hack, what they wanted, where they were, and when they broke in.
What are operations professionals to do about these dire problems? I say it is time for us to become security champions and part of the solution.
### Silos and turf wars
In my years of working side by side with my local IT security (ITSEC) teams, I've noticed many things. A big one is that tension between security teams and DevOps is extremely common. That tension almost always stems from the security team's efforts to protect systems and guard against vulnerabilities (e.g., setting up access controls or disabling certain things), which interrupt DevOps' work and hinder their ability to deploy applications quickly.
You've seen it, I've seen it, and everyone you meet in the field has at least one story about it. A small pile of resentment eventually burns down the bridge of trust, which either takes a while to repair or kicks off a small turf war between the two groups, an outcome that makes DevOps even harder to implement.
### A new way of thinking
To break down these silos and end the turf wars, I picked at least one person on each security team to talk to, in order to learn the ins and outs of our organization's day-to-day security operations. I started doing this out of curiosity, but I've kept doing it because it always gives me valuable new insights. For example, I learned that for every deployment that was halted for failing security, the security team was frantically trying to fix 10 other problems it had spotted. Their brashness and urgency came from having limited time to fix those issues before they turned into big problems.
Consider the sheer amount of knowledge it takes to discover, identify, and undo what has been done, or to figure out what the DevOps team is doing, without any background information, and then replicate and test it. All of this usually has to be done by severely understaffed security teams.
This is your security team's daily life, and your DevOps team doesn't see it. ITSEC's day-to-day work means overtime and overwork to ensure that the company, its teams, and everyone working in those teams can work securely.
### Ways to become a security champion
This is how you can help your security team by becoming its champion. It means that, for everything you do, you have to look carefully and thoroughly at all the ways someone could log into it and what they could get from it.
Helping your security team is helping yourself. Add tools to your workflows that join up what you know needs to be done with what they know needs to be done. Start small, for example by reading up on Common Vulnerabilities and Exposures (CVEs) and adding a scanning module to your CI/CD pipeline. For everything you write, there is an open source scanning tool, and adding small open source tools (such as those listed below) can make a project better in the long run.
**Container scanning tools:**
* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]
**Code scanning tools:**
* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]
**Kubernetes security tools:**
* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
### Keep your DevOps attitude
If your role is DevOps-related, learning new technologies and how to create new things with them is part of your job. Security is no different. Here's my list of ways to stay up to date on security in DevOps.
* Read one security-related article in your field of work each week.
* Check the official [CVE][15] website weekly to see what new vulnerabilities have appeared.
* Try a hackathon. Some companies hold one every month; if you feel that isn't enough and want to learn more, check out the [Beginner Hack 1.0][16] site.
* Join a member of your security team at a security conference at least once a year to see things from their perspective.
### Becoming a champion is about getting better
You should become a security champion, and here are a few reasons why. The first is to grow your knowledge and advance your career. The second is to help other teams, cultivate new relationships, and break down the silos that harm your organization. Building champions throughout your organization has many benefits, including setting an example for communication between teams and encouraging people to work together. You will also promote knowledge sharing across the organization and give everyone a fresh opportunity for better internal cooperation on security.
Overall, becoming a champion for cybersecurity makes you a champion for your entire organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/devops-security-champions
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/

View File

@ -1,141 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Guide to Install VMware Tools on Linux)
[#]: via: (https://itsfoss.com/install-vmware-tools-linux)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Guide to Install VMware Tools on Linux
======
**VMware Tools enhances your virtual machine experience by allowing you to share the clipboard, folders, and more. Learn how to install VMware Tools on Ubuntu and other Linux distributions.**
In an earlier tutorial, you learned how to [install VMware Workstation on Ubuntu][1]. You can further enhance the functionality of your virtual machines by installing VMware Tools.
If you have already installed a guest OS on VMware, you must be aware of the requirements of [VMware Tools][2], even though it isn't entirely clear what those requirements are.
In this article, we will highlight the importance of VMware Tools, the features it provides, and the method to install VMware Tools on Ubuntu or other Linux distributions.
### VMware Tools: Overview and features
![Installing VMware Tools on Ubuntu][3]
For obvious reasons, a virtual machine (your guest system) will not perform exactly like the host. There are certain limitations on its performance and operation. That is why VMware Tools was introduced.
VMware Tools helps manage the guest system in an efficient manner while also improving its performance.
#### What exactly is VMware Tools responsible for?
![How to install VMware Tools on Linux][4]
You have a rough idea of what it does, but let's explore the details:
* Synchronize the time between the guest system and the host system to simplify things
* Unlock the ability to pass messages from the host system to the guest system. For example, you can copy text to the clipboard and easily paste it into the guest system.
* Enable sound in the guest system
* Improve video resolution
* Correct inaccurate network speed data
* Reduce inadequate color depth
There are significant changes when you install VMware Tools on a guest system, but what exactly does it include to unlock or enhance these capabilities? Let's take a look...
#### VMware Tools: Core features in detail
![Sharing the clipboard between the host and guest systems with VMware Tools][5]
If you do not care to know what enables these features, feel free to skip this part. But for the curious readers, let's discuss it briefly:
**VMware device drivers:** This really depends on the operating system. Most major operating systems include the device drivers by default, so you do not have to install them separately. This mainly involves the memory control driver, mouse driver, audio driver, NIC driver, VGA driver, and so on.
**VMware user process:** This is where things get really interesting. With it, you gain the ability to copy-paste and drag-drop between the host and the guest. You can basically copy and paste text from the host to the virtual machine and vice versa.
You get to drag and drop files as well. In addition, it enables pointer release/lock when you do not have the SVGA driver installed.
**VMware Tools lifecycle management:** Well, we will see how to install VMware Tools below, but this feature helps you easily install/upgrade VMware Tools in the virtual machine.
**Shared folders:** In addition to these, VMware Tools also allows you to share folders between the guest and host systems.
![Sharing files between the guest and host systems using VMware Tools][6]
Of course, the effect also depends on the guest system. For example, on Windows, you get Unity mode to run programs on the virtual machine and operate them from the host system.
### How to install VMware Tools on Ubuntu and other Linux distributions
**Note:** For Linux operating systems, you should already have "Open VM Tools" installed, eliminating the need to install VMware Tools separately in most cases.
Most of the time, when you install a guest system, you get a software update or a pop-up telling you to install VMware Tools if the operating system supports [Easy Install][7].
Windows and Ubuntu both support Easy Install. So, if you are using Windows as your host or trying to install VMware Tools on Ubuntu, you should see an option, much like the pop-up message, to easily install VMware Tools. Here's what it should look like:
![Pop-up to install VMware Tools][8]
This is the easiest way to get it done, so make sure you have a working network connection when you set up the virtual machine.
If you do not get any pop-up or option to easily install VMware Tools, you have to install it manually. Here's how to do that:
1\. Launch VMware Workstation Player.
2\. From the menu, navigate to **Virtual Machine -> Install VMware Tools**. If you already have it installed and want to repair the installation, you will see the option **Re-install VMware Tools** instead.
3\. Once you click that, a virtual CD/DVD will be mounted in the guest system.
4\. Open it, then copy and paste the **tar.gz** file to any location of your choice and extract it; here we choose the **Desktop**.
![][9]
5\. After extraction, launch the terminal and navigate to the folder inside by typing the following command:
```
cd Desktop/VMwareTools-10.3.2-9925305/vmware-tools-distrib
```
You need to check the folder and path names; depending on the version and the extraction destination, the names may differ.
![][10]
Replace **Desktop** with your storage location (such as Downloads); the rest should remain the same if you are installing version **10.3.2**.
6\. Now simply enter the following command to start the installation:
```
sudo ./vmware-install.pl -d
```
![][11]
You will be asked for your password to grant permission for the installation; type it in, and you should be all set.
That's it; you're done. This series of steps should work for almost any Ubuntu-based guest system. If you want to install VMware Tools on Ubuntu Server or another OS, the steps should be similar.
**Wrapping Up**
Installing VMware Tools on Ubuntu Linux should be fairly easy. Besides the easy method, we have also detailed the manual method. If you still need help or have any suggestions regarding the installation, let us know in the comments section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-vmware-tools-linux
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[2]: https://kb.vmware.com/s/article/340
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-downloading.jpg?fit=800%2C531&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/install-vmware-tools-linux.png?resize=800%2C450&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-features.gif?resize=800%2C500&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-shared-folder.jpg?fit=800%2C660&ssl=1
[7]: https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/15.0/com.vmware.player.linux.using.doc/GUID-3F6B9D0E-6CFC-4627-B80B-9A68A5960F60.html
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools.jpg?fit=800%2C481&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-extraction.jpg?fit=800%2C564&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-folder.jpg?fit=800%2C487&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-installation-ubuntu.jpg?fit=800%2C492&ssl=1

View File

@ -0,0 +1,261 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How writers can get work done better with Git)
[#]: via: (https://opensource.com/article/19/4/write-git)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/noreplyhttps://opensource.com/users/seth)
How writers can get work done better with Git
======
> If you're a writer, you could also benefit from using Git. Learn about little-known uses of Git in our series of articles.
![Writing Hand][1]
[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this Git series, we'll share seven little-known ways to use Git.
Today, let's look at how writers can use Git to do their work better.
### Git for writers
Some people write fiction; others write academic papers, poetry, screenplays, technical manuals, or articles about open source. Many people do a little of each. What's the same is that if you're a writer, you can probably benefit from using Git. Although Git is famously a highly technical tool used by computer programmers, it's also ideal for the modern writer, and this article demonstrates how it can change the way you write, and why you'd want it to.
Before talking about Git, though, it's important to talk about what "copy" (or "content", in digital-age terms) really is, and why it's different from your delivery *medium*. It's the 21st century, and the tool of choice for most writers is the computer. While computers seem adept at combining processes like copy editing and layout, writers are (re)discovering that separating content from style is a good idea. That means you should write on a computer as if it were a typewriter, not a word processor. In computer terms, that means writing in *plain text*.
### Writing in plain text
This assumption used to go unquestioned: you knew the market your writing was intended for, and you could write content for different markets such as a book, a website, or a software manual. Lately, though, all kinds of markets have flattened: you might decide to use the content you wrote for a website in a printed book, and the printed book might release an EPUB version later. For the digital editions of your content, the reader is the final arbiter: they may read your words on the website where you published them, click into Firefox's excellent [Reader View][3], print them onto paper, dump a web page to a text file with Lynx, or they may never see your content at all because they use a screen reader.
It makes sense to write your words, just your words, and leave the delivery to the publishers. Even if you self-publish, treating your words as a kind of source code for your writing is a smarter and more efficient way to work, because at publication time, you can use the same source (your plain text) to generate output appropriate to your target (PDF for printing, EPUB for e-books, HTML for websites, and so on).
Writing in plain text not only means you don't have to worry about layout or text styling, it also means you no longer need specialized tools. Anything that can produce text, whether it's a basic notepad app on a phone or tablet, the text editor that ships with your computer, or a free editor downloaded from the internet, serves as a functional "word processor" for you. You can write almost anywhere, whatever you're doing, wherever you are, and the text you produce integrates perfectly with your project, with no modification required.
And, Git is purpose-built to manage plain text.
### The Atom editor
When you write in plain text, a word processor is overkill. A text editor is easier because a text editor doesn't try to "helpfully" restructure your input. It lets you type the words in your head onto the screen, with no interference. Better still, text editors are often designed around a plugin architecture, such that the application itself is basic (it edits text), but you can build an environment around it to meet your every need.
The [Atom][4] editor is a great example of this design philosophy. It's a cross-platform text editor with built-in Git integration. If you're new to working in plain text and new to Git, Atom is the easiest way to get started.
#### Install Git and Atom
First, make sure Git is installed on your system. If you run Linux or BSD, Git is available in your software repository or ports tree. The command you use will vary depending on your distribution; on Fedora, for instance:
```
$ sudo dnf install git
```
You can also download and install Git for [Mac][5] and [Windows][6].
You won't need to use Git directly, because Atom serves as your Git interface. The next step is to install Atom.
If you're on Linux, install Atom from your software repository through your software installer or with the appropriate command, such as:
```
$ sudo dnf install atom
```
Atom does not currently build on BSD. However, there are great alternatives available, such as [GNU Emacs][7]. For Mac and Windows users, installers are available on the [Atom website][4].
Once the installation is done, launch the Atom editor.
#### A quick tour
If you're going to work in plain text and Git, you need to get comfortable with your editor. Atom's user interface may be more dynamic than what you're used to. In fact, think of it more like Firefox or Chrome than a word processor, because it has tabs and panels that can be opened and closed as needed, and you can even install and configure add-ons. It's impractical to try to cover all of Atom's many features, but you can at least get to know what's available.
When Atom opens, it displays a welcome screen. If nothing else, this screen is a good introduction to Atom's tabbed interface. You can close the welcome screen by clicking the close icon on its tab at the top of the Atom window, and create a new file using File > New File.
Working in plain text is a little different from working in a word processor, so here are some tips for writing content in a way that humans can connect with, and that Git and computers can parse, track, and convert.
#### Write in Markdown
These days, when people talk about plain text, mostly they mean Markdown. Markdown is more of a style than a format, meaning that it aims to give your text a predictable structure so computers can detect natural patterns and convert the text intelligently. Markdown has many definitions, but the best technical definition and cheatsheet are on [CommonMark's website][8].
```
# Chapter 1
This is a paragraph with an *italic* word and a **bold** word in it.
And it can even reference an image.
![An image will render here.](drawing.jpg)
```
As you can see from the sample, Markdown doesn't read or feel like code, but it can be treated as code. If you follow the Markdown spec defined by CommonMark, you can reliably convert your Markdown writing, with one click, to .docx, .epub, .html, MediaWiki, .odt, .pdf, .rtf, and a variety of other formats *without* losing formatting.
You can think of Markdown a little like a word processor's styles. If you've ever written for a publisher with a set of styles that govern what chapter titles and section headings look like, this is basically the same thing, except that instead of selecting a style from a drop-down menu, you add little notations to your text. These notations look natural to any modern reader who's used to "txt speak", but they get swapped out for fancy text styling when the text is rendered. It is, in fact, what word processors secretly do behind the scenes. The word processor shows bold text, but if you could see the generated code that makes your text bold, it would look a lot like Markdown (in reality, it's the far more complex XML). With Markdown, that barrier between code and styling is removed, which looks scarier on the one hand, but on the other hand, you can write Markdown on nearly anything that produces text without losing any formatting information.
The popular file extension for Markdown files is .md. If you're on a platform that doesn't know what an .md file is, you can associate the extension with Atom manually, or just use the generic .txt extension. The file extension doesn't change the nature of the file; it only changes how your computer decides what to do with it. Atom and some platforms are smart enough to know that a file is plain text no matter what extension you give it.
#### Live preview
Atom features the Markdown Preview plugin, which shows you both the plain-text Markdown you're writing and the way it will (commonly) render.
![Atom's preview screen][9]
To activate the preview pane, select Packages > Markdown Preview > Toggle Preview, or press `Ctrl + Shift + M`.
This view gives you the best of both worlds. You get to write without the burden of styling your text, while still seeing a common example of how your text will look, at least in a typical digital format. Of course, the point is that you can't control how your text is ultimately rendered, so don't be tempted to adjust your Markdown to force the rendered preview to look a certain way.
#### One sentence per line
Your high school writing teacher never has to look at your Markdown.
It won't come naturally at first, but in the digital world, keeping one sentence per line makes more sense. Markdown ignores single line breaks (made when you press the Return or Enter key) and only creates a new paragraph after a single blank line.
![Writing in Atom][10]
The advantage of writing one sentence per line is that your work is easier to track. That is, if you change one word at the start of a paragraph, it's easier for Atom, Git, or any application to highlight that change in a meaningful way when the change is limited to a single line rather than one word in a long paragraph. In other words, a change to one sentence only affects that sentence, not the whole paragraph.
You may be thinking, "many word processors can track changes, too, and they can highlight a single word that has changed." But those revision trackers are bound to the interface of that word processor, which means you have to launch that word processor before you can browse the revisions. In a plain-text workflow, you can review revisions in plain text, which means you can make or approve edits no matter what device is at hand, as long as it can handle plain text (and most of them can).
Writers admittedly don't usually think in terms of line numbers, but line numbers are useful for computers and generally make a good reference point. Atom numbers the lines of your text document by default. A *line* is a line once you've pressed the Enter or Return key.
![Writing in Atom][11]
If a line has a dot instead of a number, it means it's the wrapped continuation of the previous line, which didn't fit on your screen.
#### Theme
If you're a visually oriented person, you may care a great deal about your writing environment. Even if you write in plain Markdown, it doesn't mean you have to write in a programmer's font or in a dark window that makes you look like a coder. The simplest way to modify what Atom looks like is to use [theme packages][12]. Theme designers usually distinguish dark themes from light themes, so you can search with the keyword "Dark" or "Light", depending on what you want.
To install a theme, select Edit > Preferences. This opens a new tab in the Atom interface. Yes, tabs are used for working documents *and* for configuration and control panels. In the Settings tab, click the Install category.
In the Install panel, search for the name of the theme you want to install. Click the Themes button on the right side of the search field to search only for themes. Once you've found your theme, click its Install button.
![Atom's themes][13]
To use a theme you've installed, or to customize a theme to your preference, navigate to the Themes category in the Settings tab. Pick the theme you want to use from the drop-down menu. The changes happen instantly, so you can see exactly how the theme affects your environment.
You can also change your working font in the Editor category of the Settings tab. Atom defaults to a monospaced font, which programmers usually prefer, but you can use any font on your system, whether it's serif, sans serif, gothic, or cursive. Whatever you want to stare at all day long is up to you.
On a related note, by default Atom draws a vertical line down its screen as a hint for people writing code. Programmers often don't want to write overly long lines of code, so this vertical line reminds them to keep lines short. This vertical line is meaningless for writers, though, and you can remove it by disabling the wrap-guide package.
To disable the wrap-guide package, select the Packages category in the Settings tab and search for "wrap-guide". Once you've found the package, click its Disable button.
#### Dynamic structure
When creating a long document, I find that writing one chapter per file makes more sense than writing an entire book in a single file. Furthermore, I don't name my chapters in the obvious syntax `chapter-1.md` or `1.example.md`, but by chapter title or keyword, such as `example.md`. To give my future self guidance on how the book is meant to be assembled, I maintain a file called `toc.md` (for "Table of Contents") that lists the (current) order of the chapters.
I do this because, no matter how convinced I am that chapter 6 cannot possibly come before chapter 1, it's nearly inevitable that I'll swap the order of a chapter or two before the book is finished. I find that keeping things dynamic from the start helps me avoid renaming confusion and also keeps me away from a rigid structure.
### Using Git in Atom
Two things are common to every writer: they write for keeps, and their writing is a journey. You don't sit down to write and produce a final draft; by definition, you have a first draft. That draft goes through revisions, each of which you carefully save in duplicate or triplicate, just in case one of your files gets corrupted. Eventually, you arrive at what you call a final draft, but odds are you'll come back to it someday, either to resurrect its good parts or to fix its bad ones.
Atom's most exciting feature is its strong Git integration. Without ever leaving Atom, you can interact with all of Git's major features: tracking and updating your project, rolling back changes you don't like, integrating changes from collaborators, and more. The best way to learn is to walk through it step by step, so here's how to use Git within the Atom interface, from the start to the finish of a writing project.
First things first: reveal the Git panel by selecting View > Toggle Git Tab. This opens a new tab on the right side of Atom's interface. There's not much to see yet, so just keep it open for now.
#### Starting a Git project
You can think of Git as being bound to a folder. Any folder outside a Git directory doesn't know about Git, and Git doesn't know about it. Folders and files within a Git directory are ignored until you grant Git permission to keep track of them.
You can create a Git project by creating a new project folder in Atom. Select File > Add Project Folder and create a new folder on your system. The folder you create appears in the Project Panel on the left side of the Atom window.
#### Git add
Right-click on your new project folder and select New File to create a new file in the project folder. If you have files you want to import into the new project, right-click the folder and select Show in File Manager to open the folder in your system's file viewer (Dolphin or Nautilus on Linux, Finder on Mac, Explorer on Windows), and then drag and drop your files into the project folder.
With a project file (the empty one you created, or one you've imported) open in Atom, click the Create Repository button in the Git tab. In the pop-up dialog, click Init to initialize your project directory as a local Git repository. Git adds a `.git` directory (invisible in your system's file manager, but visible in Atom) to your project folder. Don't be fooled by this: the `.git` directory is managed by Git, not by you, so you should generally leave it alone. But seeing it in Atom is a nice reminder that you're working in a project actively managed by Git; in other words, when you see the `.git` directory, you have revision history.
In your empty file, write some stuff. You're a writer, so type some words. It can be any set of words you please, but remember the writing tips above.
Press `Ctrl + S` to save the file, and it will appear in the Unstaged Changes section of the Git tab. That means the file exists in your project folder but has not yet been committed to Git's care. Allow Git to keep track of the file by clicking the Stage All button in the top-right corner of the Git tab. If you've used a word processor with revision history, you can think of this step as permitting Git to record changes.
#### Git commit
Your file is now staged. That means Git knows the file exists and knows it has changed since the last time Git knew about it.
A Git commit sends your file into Git's internal and permanent archive. If you're used to word processors, this is similar to naming a revision. To create a commit, enter some descriptive text in the Commit message box at the bottom of the Git tab. You can be vague or scribble something arbitrary, but it's more useful to enter helpful information for your future self, so that you know why a revision was made.
The first time you commit, you must create a branch. Git branches are a bit like alternate dimensions: they allow you to switch from one timeline to another to make changes that you may or may not want to keep forever. If you end up liking the changes, you can merge one experimental branch into another, thereby unifying different versions of your project. It's an advanced process that isn't worth learning up front, but you still need an active branch, so you have to create one for your first commit.
Click the Branch icon at the very bottom of the Git tab to create a new branch.
![Creating a branch][14]
通常将第一个分支命名为 `master`,但不是必须如此;你可以将其命名为 `firstdraft` 或任何你喜欢的名称,但是遵守当地习俗有时会使谈论 Git和查找问题的答案变得容易一些因为你会知道有人提到 “master” 时,它们的真正意思是“主干”而不是“初稿”或你给分支起的什么名字。
在某些版本的 Atom 上UI 也许不会更新以反映你已经创建的新分支。不用担心,做了提交之后,它会创建分支(并更新 UI。按下 “<ruby>提交<rt>Commit</rt></ruby>” 按钮,无论它显示的是 “<ruby>创建脱离的提交<rt>Create detached commit</rt></ruby>” 还是 “<ruby>提交到主干<rt>Commit to master</rt></ruby>
提交后,文件的状态将永久保留在 Git 的记忆之中。
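顺带一提,如果你以后需要在没有 Atom 的环境中做同样的事,上面这一整套操作大致相当于在终端中执行以下 Git 命令(一个示意,提交消息是随意假设的):

```
$ git init                      # 对应 Git 标签页中的“初始化”
$ git add --all                 # 对应“暂存全部”按钮
$ git commit -m "First draft"   # 对应“提交”按钮
```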
#### 历史记录和 Git 差异
一个自然而然的问题是:你应该多久提交一次?这并没有标准答案。用 `Ctrl + S` 保存文件和向 Git 提交是两个独立的过程,因此这两件事你都会持续去做。每当你觉得自己完成了重要的东西,或者打算尝试一个可能会被推翻的疯狂新想法时,你大概就会想做一次提交。
要了解提交对工作流程的影响,请从测试文档中删除一些文本,然后在顶部和底部添加一些文本,再次提交。这样重复几次,直到你在 Git 标签的底部积累了一小段历史记录,然后单击其中一个提交,在 Atom 中查看它。
![Viewing differences][15]
查看过去的提交时,你会看到三种元素:
1. 绿色文本是该提交中已被添加到文档中的内容。
2. 红色文本是该提交中已从文档中删除的内容。
3. 其他所有文字均未做更改。
#### 远程备份
使用 Git 的优点之一是,按照设计,它是分布式的,这意味着你可以将工作提交到本地存储库,并将所做的更改推送到任意数量的服务器上进行备份。你还可以从这些服务器中拉取更改,以便你碰巧正在使用的任何设备始终具有最新更改。
为此,你必须在某个 Git 服务器上拥有一个帐户。有几种免费的托管服务,其中包括开发了 Atom 的 GitHub(但奇怪的是,GitHub 本身并不开源),以及开源的 GitLab。相比专有软件,我更喜欢开源的,因此本示例将使用 GitLab。
如果你还没有 GitLab 帐户,请注册一个帐户并开始一个新项目。项目名称不必与 Atom 中的项目文件夹匹配,但是如果匹配,则可能更有意义。你可以将项目保留为私有,在这种情况下,只有你和任何一个你给予了明确权限的人可以访问它,或者,如果你希望该项目可供任何互联网上偶然发现它的人使用,则可以将其公开。
不要将 README 文件添加到项目中。
创建项目后,GitLab 会向你展示有关如何设置存储库的说明。如果你打算在终端中或通过单独的 GUI 使用 Git,这些信息非常有用,但 Atom 的工作流程与之不同。
单击 GitLab 界面右上方的 “<ruby>克隆<rt>Clone</rt></ruby>” 按钮。这显示了访问 Git 存储库必须使用的地址。复制 “SSH” 地址(而不是 “https” 地址)。
在 Atom 中,点击项目的 `.git` 目录,然后打开 `config` 文件。将下面这些配置行添加到该文件中,调整 `url` 值的 `seth/example.git` 部分以匹配你自己独有的地址。
```
[remote "origin"]
url = git@gitlab.com:seth/example.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master
```
在 Git 标签的底部,会出现一个新按钮,标记为 “<ruby>提取<rt>Fetch</rt></ruby>”。由于你的服务器是全新的,上面还没有可供提取的数据,因此请右键单击该按钮,然后选择 “<ruby>推送<rt>Push</rt></ruby>”。这会把你的更改推送到你的 GitLab 帐户,现在你的项目已经备份到 Git 服务器上了。
你可以在每次提交后将更改推送到服务器。它提供了立即的异地备份,并且由于数据量通常很少,因此它几乎与本地保存一样快。
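同样,在按上面的方式配置好 `origin` 之后,“推送”和“提取”按钮所做的事情大致相当于下面这两条命令(一个示意):

```
$ git push origin master    # 把本地的提交推送到 GitLab
$ git pull origin master    # 在另一台设备上拉取最新更改
```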
### 用 Git 写作
Git 是一个复杂的系统,不仅可用于修订跟踪和备份,它还支持异步协作、鼓励实验。本文介绍了一些基础知识,但关于 Git 以及如何用它让你的工作更高效、更具弹性和活力,还有更多的文章和整本的书可以阅读。从用 Git 完成一些小任务开始,用得越多,你会发现自己提出的问题越多,最终学到的技巧也就越多。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/write-git
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03 (Writing Hand)
[2]: https://git-scm.com/
[3]: https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages
[4]: http://atom.io
[5]: https://git-scm.com/download/mac
[6]: https://git-scm.com/download/win
[7]: http://gnu.org/software/emacs
[8]: https://commonmark.org/help/
[9]: https://opensource.com/sites/default/files/uploads/atom-preview.jpg (Atom's preview screen)
[10]: https://opensource.com/sites/default/files/uploads/atom-para.jpg (Writing in Atom)
[11]: https://opensource.com/sites/default/files/uploads/atom-linebreak.jpg (Writing in Atom)
[12]: https://atom.io/themes
[13]: https://opensource.com/sites/default/files/uploads/atom-theme.jpg (Atom's themes)
[14]: https://opensource.com/sites/default/files/uploads/atom-branch.jpg (Creating a branch)
[15]: https://opensource.com/sites/default/files/uploads/git-diff.jpg (Viewing differences)
[16]: mailto:git@gitlab.com

View File

@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 Introduction To Hyperledger Fabric [Part 10])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
区块链 2.0Hyperledger Fabric 介绍(十)
======
![Hyperledger Fabric][1]
### Hyperledger Fabric
[Hyperledger 项目][2] 是一个伞形组织,其下有许多正在开发的不同模块和系统。在这些子项目中,最受欢迎的是 “Hyperledger Fabric”。这篇博文将探讨一些特性:一旦区块链系统开始被主流大量采用,这些特性将使 Fabric 在不久的将来几乎不可或缺。最后,我们还将快速了解开发人员和爱好者们需要了解的有关 Hyperledger Fabric 技术的知识。
### 起始时间
按照 Hyperledger 项目的常规方式Fabric 由其核心成员之一 IBM “捐赠”给该组织,而 IBM 以前是该组织的主要开发者。IBM 共享的这个技术平台在 Hyperledger 项目中进行了联合开发,来自 100 多个成员公司和机构为之做出了贡献。
目前Fabric 正处于 LTS 版本的 v1.4该版本已经发展很长一段路并且被视为企业管理业务数据的解决方案。Hyperledger 项目的核心愿景也不可避免地渗透到了 Fabric 中。Hyperledger Fabric 系统继承了所有企业级的可扩展功能,这些功能已深深地刻入到 Hyperledger 组织下的所有项目中。
### Hyperledger Fabric 的亮点
Hyperledger Fabric 提供了多种功能和标准,这些功能和标准围绕支持快速开发和模块化体系结构的使命而构建。此外,与竞争对手(主要是瑞波和[以太坊][3]相比Fabric 对封闭和[许可区块链][4]采取了明确的立场。它们的核心目标是开发一套工具,这些工具将帮助区块链开发人员创建定制的解决方案,而不是创建独立的生态系统或产品。
Hyperledger Fabric 的一些亮点如下:
#### 许可区块链系统
在这一点上,Hyperledger Fabric 与以太坊和瑞波等其他平台差异很大。默认情况下,Fabric 是一种旨在实现私有许可区块链的工具。此类区块链不能被任何人随意访问,致力于达成共识或验证交易的节点由中央机构选择。这对于某些应用(例如银行和保险)可能很重要,在这些应用中,交易必须由中央机构而不是参与者来验证。
#### 机密和受控的信息流
Fabric 内置了权限系统,可以视情况把信息流限制在特定的组或某些个人之中。与公有区块链不同(在公有区块链中,任何运行节点的人都可以复制并有选择地访问存储在区块链中的数据),Fabric 系统的管理员可以选择谁能访问共享信息,以及以何种方式访问。与现有竞争产品相比,它还有一个以更高安全标准对存储数据进行加密的子系统。
#### 即插即用架构
Hyperledger Fabric 具有即插即用式的体系结构:系统的各个组件可以按需选用,开发人员认为没有用处的系统组件可以弃用。Fabric 采取高度模块化、可定制的方式进行开发,而不是像其竞争对手那样用一种方法适应所有需求。对于希望快速构建精益系统的公司和机构而言,这尤其有吸引力。这一点与 Fabric 和其它 Hyperledger 组件的互操作性相结合,意味着开发人员和设计人员现在可以使用各种标准化工具,而不必从其他来源提取代码并随后进行集成。它还提供了一种相当可靠的方式来构建健壮的模块化系统。
#### 智能合约和链码
运行在区块链上的分布式应用程序称为[智能合约][5]。虽然智能合约这个术语或多或少与以太坊平台相关联,但<ruby>链码<rt>chaincode</rt></ruby>是 Hyperledger 阵营为其赋予的名称。链码应用程序除了拥有 DApp 的所有优点之外,还有一点使 Hyperledger 与众不同:应用程序的代码可以用多种高级编程语言编写。它原生支持 [Go][6] 和 JavaScript,并且在与适当的编译器模块集成后还支持许多其他语言。尽管这一点目前看来可能不算什么,但它意味着可以把现有人才直接投入到涉及区块链的项目中,从长远来看,这有可能为公司节省数十亿美元的人员培训和管理费用。开发人员可以使用自己喜欢的语言在 Hyperledger Fabric 上开始构建应用程序,而无需学习或培训平台专用的语言和语法。这提供了 Hyperledger Fabric 的竞争对手当前无法提供的灵活性。
### 总结
* Hyperledger Fabric 是一个后端驱动平台,主要面向需要区块链或其它分布式账本技术的集成项目。因此,除了次要的脚本功能外,它不提供任何面向用户的服务。(你可以认为它更像是一种脚本语言。)
* Hyperledger Fabric 支持针对特定用例构建侧链。如果开发人员希望把一组用户或参与者隔离到应用程序的特定部分或功能,可以通过侧链来实现。侧链是从主链派生出来的区块链,在初始块之后形成一条独立的链:产生新链的那个块不会受到新链后续变化的影响,即使向原始链添加了新信息,新链也保持不变。此功能将有助于扩展正在开发的平台,并引入针对特定用户和特定案例的处理功能。
* 上一条特性还意味着,并非所有用户都会像在公有链中通常期望的那样,拥有区块链中所有数据的“精确”副本;参与节点只拥有与自己相关的数据副本。例如,假设有一个类似于印度 PayTM 的应用程序,它既有钱包功能,又有电子商务功能,但并非所有钱包用户都用 PayTM 在线购物。在这种情况下,只有活跃的购物者会拥有 PayTM 电子商务网站上相应的交易链,而只使用钱包的用户只拥有存储钱包交易的那条链的副本。这种灵活的数据存储和检索体系结构在扩展时非常重要,因为大量的单链区块链已经表明,它们会增加处理交易的前置时间。这样可以保持链的精简和分类。
我们将在以后的文章中详细介绍 Hyperledger Project 下的其他模块。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/
作者:[sk][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
[5]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[6]: https://www.ostechnix.com/install-go-language-linux/

View File

@ -1,305 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing is the evolution of TDD)
[#]: via: (https://opensource.com/article/19/8/mutation-testing-evolution-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
变异测试是 TDD 的演变
======
测试驱动开发技术是根据大自然的运作规律创建的,变异测试自然成为 DevOps 发展的下一步。
![Ants and a leaf making the word "open"][1]
在 "[故障是无懈可击的开发运维中的一个特点][2]," 我讨论了故障以征求反馈的机制在交付优质产品过程中所起到的重要作用。 敏捷DevOps团队就是用故障来指导他们并推动开发进程的。 [测试驱动开发(TDD)][3] 是任何敏捷 DevOps 团队评估产品交付的[必要条件][4]。 以故障为中心的 TDD 方法仅在与可量化的测试配合使用时才有效。
TDD 方法模仿了大自然的运作方式,以及自然界在进化博弈中产生赢家和输家的方式。
### 自然选择
![查尔斯·达尔文][5]
1859 年,[查尔斯·达尔文][6]在他的《[物种起源][7]》一书中提出了进化论学说。达尔文的论点是,自然变异是由生物个体的自发突变和环境压力共同造成的。环境压力淘汰了适应性较差的生物体,而有利于其他适应性强的生物的发展。每个生物体的染色体都会发生变异,而这些自发的变异会传给下一代(后代)。然后,新出现的变异会在自然选择(即由不断变化的环境条件所带来的环境压力)下接受检验。
这张简图说明了调整适应环境条件的过程。
![环境压力对鱼类的影响][8]
图1. 不同的环境压力导致自然选择下的不同结果。图片截图来源于[理查德•道金斯的一个视频][9]。
该图显示了一群生活在自己栖息地的鱼。栖息地各不相同(海底或河床底部的砾石颜色有深有浅),每条鱼长得也各不相同(鱼身图案和颜色也有深有浅)。
这张图还显示了两种情况(即环境压力的两种变化):
1. 捕食者在场
2. 捕食者不在场
在第一种情况下,在砾石颜色衬托下容易凸显出来的鱼,被捕食者捕获的风险更高。当砾石颜色较深时,浅色鱼的数量会更少一些;反之,当砾石颜色较浅时,深色鱼的数量会更少。
在第二种情况下,鱼完全放松下来进行交配。在没有捕食者、而交配仪式正在进行的情况下,可以预料到相反的结果:在砾石背景下显眼的鱼会有更大的机会被选中交配,并将其特性传给后代。
### 选择标准
变异性在进行选择时,绝不是任意、反复无常、异想天开或随机的。选择过程中的决定性因素通常是可以度量的,该决定性因素通常称为测试或目标。
一个简单的数学例子可以说明这一决策过程。(在该示例中,这种选择不是由自然选择决定的,而是由人为选择决定的。)假设有人要求你构建一个小函数,该函数接受一个正数,然后计算该数的平方根。你会怎么做?
敏捷 DevOps 团队的做法是快速失败。谦虚一点,承认自己一开始真的不知道如何开发这个功能;这时,你所知道的只是如何描述自己想做的事情。从技术上讲,你已经准备好编写单元测试了。
“单元测试”描述了你的具体期望结果。它可以简单地表述为“给定数字 16,我希望平方根函数返回数字 4”。你可能知道 16 的平方根是 4,但你并不知道一些较大数字(例如 533)的平方根。
但至少,你已经制定了选择标准,即你的测试或你的期望值。
### 进行故障测试
[.NET Core][10] 平台可以实现该测试。.NET 通常使用 xUnit.net 作为单元测试框架。(要遵循编码示例,请安装 .NET Core 和 xUnit.net。
打开命令行,创建一个文件夹,我们将在该文件夹中实现平方根解决方案。例如,输入:
```
mkdir square_root
```
再输入:
```
cd square_root
```
为单元测试创建一个单独的文件夹:
```
mkdir unit_tests
```
进入 **unit_tests** 文件夹(**cd unit_tests**),初始化 xUnit 框架:
```
dotnet new xunit
```
现在,回到 **square_root** 文件夹,并创建 **app** 文件夹:
```
mkdir app
cd app
```
接着,为你的代码创建所需的脚手架:
```
dotnet new classlib
```
现在打开你最喜欢的编辑器开始编码!
在你的代码编辑器中,导航到 **unit_tests** 文件夹,打开 **UnitTest1.cs**
**UnitTest1.cs** 中自动生成的代码替换为:
```
using System;
using Xunit;
using app;

namespace unit_tests{
    public class UnitTest1{
        Calculator calculator = new Calculator();

        [Fact]
        public void GivenPositiveNumberCalculateSquareRoot(){
            var expected = 4;
            var actual = calculator.CalculateSquareRoot(16);
            Assert.Equal(expected, actual);
        }
    }
}
```
该单元测试描述了:变量 **expected**(期望值)应该为 4。下一行描述了 **actual**(实际值):把输入值发送给一个名为 **calculator** 的组件,由它计算出实际值。对该组件的描述是:它通过接收一个数值来处理 **CalculateSquareRoot** 消息。该组件尚未开发,但这并不重要,我们在此只是描述期望。
最后,描述了触发消息发送时会发生什么:此时,判断**期望值**是否等于**实际值**。如果相等,则测试通过,目标达成;如果**期望值**不等于**实际值**,则测试失败。
接下来,要实现名为 **calculator** 的组件。在 **app** 文件夹中创建一个新文件,并将其命名为 **Calculator.cs**。要实现计算平方根的功能,请在此新文件中添加以下代码:
```
namespace app {
    public class Calculator {
        public double CalculateSquareRoot(double number) {
            double bestGuess = number;
            return bestGuess;
        }
    }
}
```
在测试之前,你需要告诉单元测试如何找到这个新组件(**Calculator**)。导航至 **unit_tests** 文件夹,打开 **unit_tests.csproj** 文件,在 **<ItemGroup>** 代码块中添加以下代码:
```
<ProjectReference Include="../app/app.csproj" />
```
保存 **unit_tests.csproj** 文件。现在,你可以运行第一个测试了。
切换到命令行,进入 **unit_tests** 文件夹。 运行以下命令:
```
dotnet test
```
运行单元测试,会输出以下内容:
![单元测试失败后xUnit的输出结果][12]
图 2. 单元测试失败后 xUnit 的输出结果
正如你所看到的,单元测试失败了:期望把数字 16 发送给 **calculator** 组件后得到数字 4,但实际返回的值是 16。
恭喜你!你创建了第一个故障。单元测试为你提供了强有力的反馈机制,敦促你修复故障。
### 修复故障
要修复这个故障,你必须改进 **bestGuess**。当下,**bestGuess** 仅仅是把函数接收到的数字原样返回,这还不够好。
但是,如何找到一种计算平方根值的方法呢?我有一个主意:看一看大自然母亲是如何解决问题的。
### 效仿大自然的迭代
想在第一次(也是唯一一次)尝试中就得出正确值是非常难的(几乎不可能)。你必须允许自己多次尝试猜测,以增加解决问题的机会。允许多次尝试的一种方法就是迭代。
要迭代,就把 **bestGuess** 的值存储到 **previousGuess** 变量中,再更新 **bestGuess** 的值,然后比较两个值之差。如果差为 0,则说明问题已解决;否则,继续迭代。
这是生成任何正数的平方根的函数体:
```
double bestGuess = number;
double previousGuess;
do {
previousGuess = bestGuess;
bestGuess = (previousGuess + (number/previousGuess))/2;
} while((bestGuess - previousGuess) != 0);
return bestGuess;
```
该循环不断迭代,使 bestGuess 的值逐步收敛到设想的解。现在,你精心设计的单元测试通过了!
![单元测试通过了][13]
图 3. 单元测试通过了。
### 迭代解决了问题
正如大自然母亲解决问题的方式一样,在本练习中,迭代解决了问题。增量方法与逐步求精相结合,是获得满意解决方案的有效途径。该示例中的决定性因素是拥有可衡量的目标和测试。一旦有了这些,就可以持续迭代,直到达到目标。
### 关键点!
好的,这是一个有趣的试验,但更有趣的发现来自于把玩这个新写好的解决方案。到目前为止,**bestGuess** 的初始值一直是函数接收到的数字本身。如果更改 **bestGuess** 的初始值,会怎么样?
为了验证这一点,你可以测试几种情况。首先,观察逐步求精计算 25 的平方根时的迭代过程:
![25平方根的迭代编码][14]
图 4. 通过迭代来计算 25 的平方根。
以 25 作为 **bestGuess** 的初始值,该函数需要八次迭代才能算出 25 的平方根。但如果给 **bestGuess** 一个错得离谱的初始值,会怎么样?试试第二种情况:100 万会是 25 的平方根吗?在这种明显错误的情况下会发生什么?你写的函数能处理这种低级错误吗?
直接来吧。回到测试中来,这次以一百万开始:
![逐步求精法][15]
图 5. 在计算 25 的平方根时运用逐步求精法,以 100 万作为 **bestGuess** 的初始值。
哇!以一个荒谬离谱的数字开始,迭代次数也只增加了两倍,从 8 次变为 23 次,增长幅度远没有直觉中预期的那么大。
### 故事的寓意
啊哈!这一刻你会意识到:迭代不仅能保证解决问题,而且与初始猜测值的好坏也没有多大关系。不论你最初错得多么离谱,迭代过程加上可衡量的测试/目标,都能让你走上正确的道路并得到解决方案。
图 4 和图 5 显示了陡峭而戏剧性的燃尽曲线:即便开局错得离谱,迭代也很快就收敛到了绝对正确的解。
简而言之,这种神奇的方法就是敏捷 DevOps 的本质。
### 回到一些更深层次的观察
敏捷 DevOps 的实践源于人们对所生活的世界的认知:我们生活的世界充满不确定性、不完整性和种种困惑。从科学/哲学的角度来看,这些特征得到了[海森堡的不确定性原理][16](涵盖不确定性部分)、[维特根斯坦的《逻辑哲学论》][17](歧义性部分)、[哥德尔的不完全性定理][18](不完全性方面)以及[热力学第二定律][19](无情的熵引起的混乱)的充分证明和支持。
简而言之,无论你多么努力,在尝试解决任何问题时都无法获得完整的信息。因此,放下傲慢的姿态,采取更谦虚的方法来解决问题,对我们更有帮助。谦卑会为你带来巨大的回报,这个回报不仅是你期望的解决方案,还有它的副产品。
### 总结
大自然在不停地运作,这是一个持续不断的过程,大自然也没有总体规划,一切都是对先前发生的事情的回应。反馈循环非常紧密,明显的进步/倒退都是逐步实现的。这在大自然中随处可见:任何事物都在以一种或多种形式逐步完善。
敏捷 DevOps 是工程模型逐渐成熟的一个非常有趣的结果。DevOps 基于这样的认识:你所拥有的信息总是不完整的,因此你最好谨慎行事。获得可衡量的测试(例如一个假设,即可测量的期望结果),进行简单的尝试(大多数情况下可能会失败),然后收集反馈、修复故障并继续测试。除了同意每个步骤都必须要有可衡量的假设/测试之外,没有其他方法。
在本系列的下一篇文章中,我将仔细研究变异测试是如何提供及时反馈来推动实现结果的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520X292_openanttrail-2.png?itok=xhD3WmUd (Ants and a leaf making the word "open")
[2]: https://opensource.com/article/19/7/failure-feature-blameless-devops
[3]: https://en.wikipedia.org/wiki/Test-driven_development
[4]: https://www.merriam-webster.com/dictionary/conditio%20sine%20qua%20non
[5]: https://opensource.com/sites/default/files/uploads/darwin.png (Charles Darwin)
[6]: https://en.wikipedia.org/wiki/Charles_Darwin
[7]: https://en.wikipedia.org/wiki/On_the_Origin_of_Species
[8]: https://opensource.com/sites/default/files/uploads/environmentalconditions2.png (Environmental pressures on fish)
[9]: https://www.youtube.com/watch?v=MgK5Rf7qFaU
[10]: https://dotnet.microsoft.com/
[11]: https://xunit.net/
[12]: https://opensource.com/sites/default/files/uploads/xunit-output.png (xUnit output after the unit test run fails)
[13]: https://opensource.com/sites/default/files/uploads/unit-test-success.png (Unit test successful)
[14]: https://opensource.com/sites/default/files/uploads/iterating-square-root.png (Code iterating for the square root of 25)
[15]: https://opensource.com/sites/default/files/uploads/bestguess.png (Stepwise refinement)
[16]: https://en.wikipedia.org/wiki/Uncertainty_principle
[17]: https://en.wikipedia.org/wiki/Tractatus_Logico-Philosophicus
[18]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
[19]: https://en.wikipedia.org/wiki/Second_law_of_thermodynamics

View File

@ -1,205 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
变异测试:如何利用故障?
======
使用事先设计好的故障以确保你的代码达到预期的结果,并遵循 .NET xUnit.net 测试框架来进行测试。
![failure sign at a party, celebrating failure][1]
在《[变异测试是 TDD 的演变][2]》一文中,我谈到了迭代的力量:与可度量的测试结合时,迭代能保证找到问题的解决方案。在那篇文章中,我们讨论了迭代法如何帮助实现计算给定数字平方根的代码。
我还演示了:最有效的方法是找到可衡量的目标或测试,然后以最佳猜测值开始迭代。正如所预期的,第一次测试通常会失败,因此必须根据可衡量的目标或测试对失败的代码加以完善,并根据运行结果对猜测值进行验证或进一步求精。
在此模型中,学习获得解决方案的唯一方法是反复失败。这听起来有悖常理,但它确实有效。
沿着这一思路,本文探讨了在构建包含某些依赖项的解决方案时使用 DevOps 的最佳方法。第一步是编写一个预期会失败的测试用例。
### 依赖性问题是你不能依赖它们
正如迈克尔•尼加德(Michael Nygard)在《[Architecture without an end state][3]》中表达的那样,依赖问题是一个很大的话题,最好留到另一篇文章中讨论。在这里,你将会看到依赖项给项目带来的一些潜在问题,以及如何利用测试驱动开发(TDD)来避免这些陷阱。
首先找到现实生活中的一个挑战然后看看如何使用TDD解决它。
### 谁让猫出来?
![一只猫站在屋顶][4]
在敏捷开发环境中,通过定义期望结果开始构建解决方案会很有帮助。 通常,在 [用户故事][5]中描述期望结果:
> 我想使用我家的自动化系统(HAS)来控制猫何时可以出门,因为我想保证它在夜间的安全。
现在你已经有了一个用户故事,你需要通过提供一些功能要求(即指定验收标准)来对其进行详细说明。 从伪代码中描述的最简单的场景开始:
> 场景1在夜间关闭猫门
>
> * 用时钟监测到晚上时间
> * 时钟通知 HAS 系统
> * HAS 关闭支持物联网IoT的猫门
>
### 分解系统
开始构建之前,你需要先将正在构建的系统(HAS)分解为依赖项。你必须做的第一件事是识别所有依赖项(如果幸运的话,你的系统没有任何依赖项,那会容易得多,但这样的系统可以说也没有多大用处)。
从上面的简单场景中,你可以看到:所需的业务成果(自动控制猫门)取决于对夜间情况的监测。这种依赖取决于时钟,但时钟无法区分白天和夜晚,需要你来提供这种逻辑。
正在构建的系统中的另一个依赖项,是能够自动访问猫门并开启或关闭它。该依赖很可能取决于具有 IoT 功能的猫门所提供的 API。
### 依赖管理面临快速失败
为了满足第一个依赖项,我们将构建判断当前时间是白天还是晚上的逻辑。本着 TDD 的精神,我们将从一个小小的失败开始。
有关如何设置此练习所需的开发环境和脚手架的详细说明,请参阅我的[上一篇文章][2]。我们将重用相同的 .NET 环境和 [xUnit.net][6] 框架。
接下来,创建一个名为 HAS(“家庭自动化系统”)的新项目,并创建一个名为 **UnitTest1.cs** 的文件。在该文件中,编写第一个会失败的单元测试。在此单元测试中,描述你的期望结果。例如:当系统运行时,如果时间是晚上 7 点,负责判断白天还是夜晚的组件将返回值 “Nighttime”。
这是描述期望值的单元测试:
```
using System;
using Xunit;
using app;

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight();
            Assert.Equal(expected, actual);
        }
    }
}
```
至此,你可能已经熟悉了单元测试的结构。快速复习:在此示例中,通过给单元测试起一个描述性名称 **Given7pmReturnNighttime** 来描述期望结果。然后,在单元测试的主体中,创建一个名为 **expected** 的变量,并为它指定期望值(在本例中是 “Nighttime”)。之后,为 **actual** 变量指定实际值(在组件或服务处理一天中的时间之后可用)。
最后,通过断言期望值和实际值是否相等来检查是否满足期望结果:**Assert.Equal(expected, actual)**。
你还可以在上面的列表中看到名为 **dayOrNightUtility** 的组件或服务。该模块能够接收消息 **GetDayOrNight**,并返回 **string** 类型的值。
同样,本着 TDD 的精神,这里描述的组件或服务尚未构建,在此仅是为了后面的说明而先行描述。构建它们的工作由所描述的期望结果来驱动。
**app** 文件夹中创建一个新文件,并将其命名为**DayOrNightUtility.cs**。 将以下 C 代码添加到该文件中并保存:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Undetermined";
            return dayOrNight;
        }
    }
}
```
现在转到命令行,将目录更改为**unittests**文件夹,然后运行:
```
dotnet test

[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```
恭喜,你已经完成了第一个会失败的单元测试。单元测试期望 **DayOrNightUtility** 方法返回字符串 “Nighttime”,但它实际返回的是 “Undetermined”。
### 修复失败的单元测试
修复这个失败测试的一种快速而粗略的方法,是将值 “Undetermined” 替换为值 “Nighttime”,然后保存更改:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Nighttime";
            return dayOrNight;
        }
    }
}
```
现在再运行测试,就成功了:
```
Starting test execution, please wait...
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```
但是,对值进行硬编码基本上是在作弊,最好为 **DayOrNightUtility** 方法赋予一些智能。修改 **GetDayOrNight** 方法,加入一些时间计算逻辑:
```
public string GetDayOrNight() {
    string dayOrNight = "Daylight";
    DateTime time = new DateTime();
    if (time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```
该方法现在从系统获取当前时间,并比较 **Hour** 的值,查看其是否小于上午 7 点。如果是,处理逻辑就把 **dayOrNight** 的字符串值从 “Daylight” 改为 “Nighttime”。现在,单元测试通过了。
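如果此时再次运行测试,应该会看到与前面类似的成功输出(一个示意,耗时等具体数字会有所不同):

```
dotnet test

Starting test execution, please wait...
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
```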
### 测试驱动解决方案的开始
现在,我们已经有了起步的单元测试,并为时间依赖项提供了可行的解决方案。后面还有更多的测试用例需要处理。
在下一篇文章中,我将演示如何对白天时间进行测试以及如何在整个过程中利用故障。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/
[7]: http://www.google.com/search?q=new+msdn.microsoft.com

View File

@ -1,34 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: (way-ww)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Run the Top Command in Batch Mode)
[#]: via: (https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
[#]: collector: "lujun9972"
[#]: translator: "way-ww"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "How to Run the Top Command in Batch Mode"
[#]: via: "https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
How to Run the Top Command in Batch Mode
如何在批处理模式下运行 Top 命令
======
The **[Linux Top command][1]** is the best and most well known command that everyone uses to **[monitor Linux system performance][2]**.
**[Top 命令][1]** 是每个人都在使用的用于 **[监控 Linux 系统性能][2]** 的最好的命令。
You probably already know most of the options available, except for a few options, and if Im not wrong, “batch more” is one of the options.
除了少数几个选项之外,你可能已经知道 top 命令的绝大部分选项。如果我没说错的话,批处理模式就是其中之一。
Most script writer and developers know this because this option is mainly used when writing the script.
大部分脚本编写者和开发人员都知道它,因为这个选项主要用于编写脚本。
If youre not sure about this, dont worry were here to explain this.
如果你不了解它,不用担心,我们将在这里介绍它。
### What is “Batch Mode” in the Top Command
### 什么是 Top 命令的批处理模式
The “Batch Mode” option allows you to send top command output to other programs or to a file.
批处理模式允许你将 top 命令的输出发送至其他程序或者文件中。
In this mode, top will not accept input and runs until the iterations limit youve set with the “-n” command-line option.
在这个模式中,top 命令不会接收输入,并会持续运行,直到迭代次数达到你用 “-n” 选项指定的次数为止。
If you want to fix any performance issues on the Linux server, you need to **[understand the top command output][3]** correctly.
如果你想解决 Linux 服务器上的任何性能问题,你需要正确地**[理解 top 命令的输出][3]**。
### 1) How to Run the Top Command in Batch Mode
### 1) 如何在批处理模式下运行 top 命令
By default, the top command sort the results based on CPU usage, so when you run the below top command in batch mode, it does the same and prints the first 35 lines.
默认情况下,top 命令按照 CPU 使用率对输出结果进行排序,所以当你在批处理模式中运行下面的 top 命令时,它也会这样做,并打印前 35 行:
```
# top -bc | head -35
@ -70,9 +70,9 @@ PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
46 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kmpath_rdacd]
```
### 2) How to Run the Top Command in Batch Mode and Sort the Output Based on Memory Usage
### 2) 如何在批处理模式下运行 top 命令并按内存使用率排序结果
Run the below top command to sort the results based on memory usage in batch mode.
在批处理模式中运行以下命令,按内存使用率对结果进行排序:
```
# top -bc -o +%MEM | head -n 20
@ -99,19 +99,19 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2475984 avail Mem
8632 nobody 20 0 256844 25744 2216 S 0.0 0.7 0:00.03 /usr/sbin/httpd -k start
```
**Details of the above command:**
**上面命令的详细信息:**
* **-b :** Batch mode operation
* **-c :** To print the absolute path of the running process
* **-o :** To specify fields for sorting processes
* **head :** Output the first part of files
* **-n :** To print the first “n” lines
* **-b :** 批处理模式选项
* **-c :** 打印运行中的进程的绝对路径
* **-o :** 指定进行排序的字段
* **head :** 输出文件的第一部分
* **-n :** 打印前 n 行
### 3) How to Run the Top Command in Batch Mode and Sort the Output Based on a Specific User Process
### 3) 如何在批处理模式下运行 top 命令并按照指定的用户进程对结果进行排序
If you want to sort results based on a specific user, run the below top command.
如果你想按照某个指定用户的进程对结果进行排序,请运行以下命令:
```
# top -bc -u mysql | head -n 10
@ -126,13 +126,13 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2649412 avail Mem
18105 mysql 20 0 1453900 156888 8816 S 0.0 4.0 2:16.42 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
```
### 4) How to Run the Top Command in Batch Mode and Sort the Output Based on the Process Age
### 4) 如何在批处理模式下运行 top 命令并按照进程使用的时间排序
Use the below top command to sort the results based on the age of the process in batch mode. It shows the total CPU time the task has used since it started.
在批处理模式中使用以下 top 命令,可以按照进程使用的时间对结果进行排序。它展示的是任务自启动以来使用的总 CPU 时间。
But if you want to check how long a process has been running on Linux, go to the following article.
但是,如果你想检查一个进程在 Linux 上已经运行了多长时间,请阅读下面的文章:
* **[Five Ways to Check How Long a Process Has Been Running in Linux][4]**
* **[检查 Linux 中进程运行时间的五种方法][4]**
@ -161,9 +161,9 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2440332 avail Mem
342 root 20 0 39472 2940 2752 S 0.0 0.1 1:18.17 /usr/lib/systemd/systemd-journald
```
### 5) How to Run the Top Command in Batch Mode and Save the Output to a File
### 5) 如何在批处理模式下运行 top 命令并将结果保存到文件中
If you want to share the output of the top command to someone for troubleshooting purposes, redirect the output to a file using the following command.
如果出于排查问题的目的,你想把 top 命令的输出分享给别人,请使用以下命令把输出重定向到文件中:
```
# top -bc | head -35 > top-report.txt
@ -207,11 +207,11 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2659084 avail Mem
36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [crypto]
```
### How to Sort Output Based on Specific Fields
### 如何按照指定字段对结果进行排序
In the latest version of the top command release, press the **“f”** key to sort the fields via the field letter.
在最新版本的 top 命令中,按下 **“f”** 键即可进入字段管理界面,按字段字母进行排序。
To sort with a new field, use the **“up/down”** arrow to select the correct selection, and then press **“s”** to sort it. Finally press **“q”** to exit from this window.
要按新的字段排序,请用**上/下**方向键选中正确的字段,然后按下 **“s”** 键进行排序,最后按 **“q”** 键退出此窗口。
```
Fields Management for window 1:Def, whose current sort field is %CPU
@ -269,9 +269,9 @@ Fields Management for window 1:Def, whose current sort field is %CPU
nsUSER = USER namespace Inode
```
For older version of the top command, press the **“shift+f”** or **“shift+o”** key to sort the fields via the field letter.
对于旧版本的 top 命令,请按 **“shift+f”** 或 **“shift+o”** 键进入字段管理界面,按字段字母进行排序。
To sort with a new field, select the corresponding sort **field letter**, and then press **“Enter”** to sort it.
要按新的字段排序,请选择相应的排序**字段字母**,然后按下 **“Enter”** 键进行排序。
```
Current Sort Field: N for window 1:Def
@ -322,7 +322,7 @@ via: https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[way-ww](https://github.com/way-ww)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,126 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Unzip a Zip File in Linux [Beginners Tutorial])
[#]: via: (https://itsfoss.com/unzip-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
如何在 Linux 下解压 Zip 文件
======
_**摘要:本文将向你展示如何在 Ubuntu 和其他 Linux 发行版上解压文件,终端和图形界面两种方法都会讨论到。**_
[Zip][1] 是创建压缩存档文件最普通、最流行的方式之一。它也是一种古老的归档文件格式,创建于 1989 年。由于使用广泛,你会经常遇到 zip 文件。
在更早的一篇教程中,我展示了[如何在 Linux 中压缩文件夹][2]。在这篇面向初学者的快速教程中,我会展示如何在 Linux 上解压文件。
**先决条件:检查你是否安装了 unzip**
为了解压 zip 归档文件,你的系统中必须安装有 unzip 软件包。大多数现代的 Linux 发行版都原生支持解压 zip 文件,但先确认一下,可以避免以后遇到不愉快的意外。
在基于 [Ubuntu][3] 和 [Debian][4] 的发行版上,你可以使用下面的命令来安装 unzip。如果已经安装过了,你会被告知它已经安装。
```
sudo apt install unzip
```
一旦确认你的系统中安装了 unzip,你就可以用它来解压 zip 归档文件。
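顺便一提,要确认 unzip 是否可用,可以让它打印版本信息(一个示意,具体输出因系统而异):

```
unzip -v | head -n 1
```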
你可以使用命令行或者图形工具来达到目的,这两种方法我都会展示:
* [在 Linux 终端中解压文件][5]
* [通过 GUI 在 Ubuntu 中解压文件][6]
### 使用命令行解压文件
在 Linux 下使用 unzip 命令非常简单。要解压 zip 文件,使用下面的命令:
```
unzip zipped_file.zip
```
你也可以直接提供 zip 文件的路径,而不必先切换到它所在的目录。你会在终端输出中看到被提取的文件:
```
unzip metallic-container.zip -d my_zip
Archive: metallic-container.zip
inflating: my_zip/625993-PNZP34-678.jpg
inflating: my_zip/License free.txt
inflating: my_zip/License premium.txt
```
不过,直接运行 `unzip zipped_file.zip` 有一个小问题:它会把 zip 文件中的所有内容提取到当前文件夹,你会在当前文件夹下留下一堆杂乱无章的文件,这可不太好。
#### 解压到文件夹下
在 Linux 命令行下,把文件解压到一个目录中是比较好的做法。这样,所有提取出的文件都会存放到你指定的目录下;如果该目录不存在,则会自动创建。
```
unzip zipped_file.zip -d unzipped_directory
```
现在,zipped_file.zip 中的所有内容都会被提取到 unzipped_directory 中。
既然在讨论好的做法,这里还有一点要注意:我们可以查看压缩文件中的内容,而不用真正解压它。
#### 不解压查看压缩文件的内容
```
unzip -l zipped_file.zip
```
下面是命令的输出:
```
unzip -l metallic-container.zip
Archive: metallic-container.zip
Length Date Time Name
--------- ---------- ----- ----
6576010 2019-03-07 10:30 625993-PNZP34-678.jpg
1462 2019-03-07 13:39 License free.txt
1116 2019-03-07 13:39 License premium.txt
--------- -------
6578588 3 files
```
unzip 在 Linux 下还有一些其他用法,不过到这里,你对在 Linux 下解压文件已经有了足够的了解。下面给出两个常用的变体作为示例。
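它们都是 unzip 的标准选项(一个示意):

```
unzip -o zipped_file.zip -d unzipped_directory   # 直接覆盖已存在的文件,不再逐个询问
unzip -j zipped_file.zip -d unzipped_directory   # 忽略压缩包内的目录结构,只提取文件本身
```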
### 使用图形界面来解压文件
如果你使用的是 Linux 桌面版,那就不必总是使用终端。在图形界面下,我们要如何解压文件呢?我使用的是 [GNOME 桌面][7],其他桌面版 Linux 发行版的操作也大致相同。
打开文件管理器,进入压缩文件所在的文件夹。点击鼠标右键,你会在弹出的菜单中看到 “Extract Here(在此提取)” 选项,选择它即可。
![Unzip File in Ubuntu][8]
与 unzip 命令的默认行为(把内容提取到当前所在文件夹)不同,这个提取选项会创建一个和压缩文件同名的文件夹,并把压缩文件中的所有内容存放到该文件夹下。在我看来,这比 unzip 命令的默认行为要好。
这里还有一个 “Extract to(提取到)” 选项,你可以选择特定的文件夹来存放提取出的文件。
现在你知道了如何在 Linux 下解压文件。你也许还有兴趣学习[在 Linux 中使用 7zip][9]。
--------------------------------------------------------------------------------
via: https://itsfoss.com/unzip-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[octopus](https://github.com/singledo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
[2]: https://itsfoss.com/linux-zip-folder/
[3]: https://ubuntu.com/
[4]: https://www.debian.org/
[5]: #terminal
[6]: #gui
[7]: https://gnome.org/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/unzip-files-ubuntu.jpg?ssl=1
[9]: https://itsfoss.com/use-7zip-ubuntu-linux/

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
面向对象编程和根本状态
======
早在 2015 年,Brian Will 就写了一篇颇具挑衅性的博客:《[面向对象编程:一个灾难故事][1]》。随后他又发布了一个名为《[面向对象编程很糟糕][2]》的视频,讲得更加详细。我建议你花些时间观看视频,下面是我的简要总结:
OOP 的柏拉图式理想是一堆相互解耦的对象它们彼此之间发送无状态消息。没有人真的像这样制作软件Brian 指出这甚至没有意义:对象需要知道向哪个对象发送消息,这意味着它们需要相互引用。视频大部分讲述的是人们试图将对象耦合以实现控制流,同时假装它们是通过设计解耦的。
总的来说,他的想法与我自己的 OOP 经验产生了共鸣:对象本身没有问题,但是我从来没有对*围绕*对象来组织程序控制流感到满意,而试图让代码“正确地”面向对象似乎总是在制造不必要的复杂性。
我认为有一件事他没有完全解释清楚。他直截了当地说“封装没有作用”,但在脚注中加上了“在细粒度的代码级别”,并继续承认对象有时可以奏效,封装在库和文件级别也可以奏效。但是他没有确切解释为什么有时奏效、有时不奏效,以及如何/在何处划清界限。有人可能会说这让他的“OOP 不好”的说法有缺陷,但是我认为他的观点是成立的,而且这条界限可以划在根本状态和偶发状态之间。
如果你以前从未听说过“根本”和“偶发”这两个术语的使用,那么你应该阅读 Fred Brooks 的经典文章[没有银弹][3]。 (顺便说一句,他写了许多有关构建软件系统的很棒的文章。)我以前曾写过[关于根本和偶发的复杂性的文章][4],但是这里有一个简短的摘要:软件很复杂。部分原因是因为我们希望软件能够解决混乱的现实世界问题,因此我们将其称为“根本复杂性”。“偶发复杂性”是所有其他复杂性,因为我们正尝试使用硅和金属来解决与硅和金属无关的问题。例如,对于大多数程序而言,用于内存管理或在内存与磁盘之间传输数据或解析文本格式的代码都是“偶发的复杂性”。
假设你正在构建一个支持多个频道的聊天应用。消息可以随时到达任何频道。有些频道特别有趣,当有新消息传入时,用户希望得到通知。其他频道静音:消息被存储,但用户不会受到打扰。你需要跟踪每个频道的用户首选设置。
一种实现方法是在频道和频道设置之间使用映射(也称为哈希表,字典或关联数组)。注意,映射是 Brian Will 所说的可以用作对象的抽象数据类型ADT
如果我们拿调试器查看内存中的 map 对象,会看到什么?我们当然会找到频道 ID 和频道设置数据(或至少是指向它们的指针)。但我们还会找到其他数据:如果这个映射是用红黑树实现的,我们会看到带有红/黑标签和指向其他节点指针的树节点对象。与频道相关的数据是根本状态,而树节点是偶发状态。不过,请注意以下几点:该映射有效地封装了它的偶发状态,你可以换成用 AVL 树实现的另一个映射,你的聊天程序仍然可以使用。另一方面,映射没有封装根本状态(仅通过 `get()` 和 `set()` 方法访问数据不算封装)。事实上,映射对根本状态尽可能保持不可知:你可以用基本相同的映射数据结构来存储与频道或通知无关的其他映射。
这就是映射 ADT 如此成功的原因它封装了偶发状态并与根本状态解耦。如果你考虑一下Brian 描述的封装问题就是尝试封装根本状态。其他描述的好处是封装偶发状态的好处。
要使整个软件系统都达到这一理想相当困难,但扩展开来,我认为它看起来像这样:
* 没有全局的可变状态
* 封装了偶发状态(在对象或模块或以其他任何形式)
* 无状态偶发复杂性封装在单独函数中,与数据解耦
* 使用诸如依赖注入之类的技巧使输入和输出变得明确
* 完全拥有组件,并从易于识别的位置进行控制
其中有些违反了我很久以前的本能。例如,如果你有一个数据库查询函数,如果数据库连接处理隐藏在该函数内部,并且唯一的参数是查询参数,那么接口会看起来会更简单。但是,当你使用这样的函数构建软件系统时,协调数据库的使用实际上变得更加复杂。组件不仅以自己的方式做事,而且还试图将自己所做的事情隐藏为“实现细节”。数据库查询需要数据库连接这一事实从来都不是实现细节。如果无法隐藏某些内容,那么显露它是更合理的。
我对把面向对象编程和函数式编程放在光谱两极保持警惕,但我认为看看 OOP 的对立极端是很有趣的:OOP 试图封装事物,包括无法封装的根本复杂性,而纯函数式编程往往会使事情变得明确,包括一些偶发复杂性。大多数时候这没什么问题,但有时候(比如[在纯函数式语言中构建自指的数据结构][5]),这样的设计更多是为了迁就函数式编程,而不是为了简单(这就是为什么 [Haskell 包含了一些“逃生出口”(escape hatches)][6])。我之前写过一篇关于介于两者之间的[所谓“弱纯性(weak purity)”的文章][7]。
Brian 发现封装对更大规模有效,原因有几个。一个是,由于大小的原因,较大的组件更可能包含偶发状态。另一个是“偶发”与你要解决的问题有关。从聊天程序用户的角度来看,“偶发的复杂性”是与消息,频道和用户等无关的任何事物。但是,当你将问题分解为子问题时,更多的事情就变得重要。例如,在解决“构建聊天应用”问题时,可以说频道名称和频道 ID 之间的映射是偶发的复杂性,而在解决“实现 `getChannelIdByName()` 函数”子问题时,这是根本复杂性。因此,封装对于子组件的作用比对父组件的作用要小。
顺便说一句在影片的结尾Brian Will 想知道是否有任何语言支持_无法_访问它们所作用的范围的匿名函数。[D][8] 语言可以。 D 中的匿名 Lambda 通常是闭包,但是如果你想要的话,也可以声明匿名无状态函数:
```
import std.stdio;
void main()
{
    int x = 41;

    // Value from immediately executed lambda
    auto v1 = () {
        return x + 1;
    }();
    writeln(v1);

    // Same thing
    auto v2 = delegate() {
        return x + 1;
    }();
    writeln(v2);

    // Plain functions aren't closures
    auto v3 = function() {
        // Can't access x
        // Can't access any mutable global state either if also marked pure
        return 42;
    }();
    writeln(v3);
}
```
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org

View File

@ -0,0 +1,215 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bash Script to Delete Files/Folders Older Than “X” Days in Linux)
[#]: via: (https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
在 Linux 中使用 Bash 脚本删除早于 “X” 天的文件/文件夹
======
**[磁盘使用率][1]**监控工具能够在使用率达到给定阈值时提醒我们,但它们无法自行解决**[磁盘使用率][2]**问题,需要手动干预。
而如果你想让这类操作完全自动化,该怎么做呢?是的,可以用 bash 脚本来完成。
这类脚本可以避免触发来自**[监控工具][3]**的警报,因为我们会在磁盘空间被填满之前就删除旧的日志文件。
我们过去写过很多 shell 脚本。如果想查看,请点击下面的链接。
* **[如何使用 shell 脚本自动化日常活动?][4]**
我在本文中添加了两个 bash 脚本,它们有助于清除旧日志。
### 1在 Linux 中删除早于 “X” 天的文件夹的 Bash 脚本
我们有一个名为 **“/var/log/app/”** 的文件夹,其中包含 15 天的日志,我们将删除早于 10 天的文件夹。
```
$ ls -lh /var/log/app/
drwxrw-rw- 3 root root 24K Oct 1 23:52 app_log.01
drwxrw-rw- 3 root root 24K Oct 2 23:52 app_log.02
drwxrw-rw- 3 root root 24K Oct 3 23:52 app_log.03
drwxrw-rw- 3 root root 24K Oct 4 23:52 app_log.04
drwxrw-rw- 3 root root 24K Oct 5 23:52 app_log.05
drwxrw-rw- 3 root root 24K Oct 6 23:54 app_log.06
drwxrw-rw- 3 root root 24K Oct 7 23:53 app_log.07
drwxrw-rw- 3 root root 24K Oct 8 23:51 app_log.08
drwxrw-rw- 3 root root 24K Oct 9 23:52 app_log.09
drwxrw-rw- 3 root root 24K Oct 10 23:52 app_log.10
drwxrw-rw- 3 root root 24K Oct 11 23:52 app_log.11
drwxrw-rw- 3 root root 24K Oct 12 23:52 app_log.12
drwxrw-rw- 3 root root 24K Oct 13 23:52 app_log.13
drwxrw-rw- 3 root root 24K Oct 14 23:52 app_log.14
drwxrw-rw- 3 root root 24K Oct 15 23:52 app_log.15
```
该脚本会删除早于 10 天的文件夹,并通过邮件发送被删除文件夹的列表。
你可以根据需要修改 **“-mtime X”** 的值。另外,请把其中的电子邮箱替换成你自己的。脚本之后还给出了删除前先行预览的方法。
```
# /opt/script/delete-old-folders.sh
#!/bin/bash
prev_count=0
fpath=/var/log/app/app_log.*
find $fpath -type d -mtime +10 -exec ls -ltrh {} \; > /tmp/folder.out
find $fpath -type d -mtime +10 -exec rm -rf {} \;
count=$(cat /tmp/folder.out | wc -l)
if [ "$prev_count" -lt "$count" ] ; then
    MESSAGE="/tmp/file1.out"
    TO="[email protected]"
    echo "Application log folders older than 10 days are deleted" >> $MESSAGE
    echo "+----------------------------------------------------+" >> $MESSAGE
    echo "" >> $MESSAGE
    cat /tmp/folder.out | awk '{print $6,$7,$9}' >> $MESSAGE
    echo "" >> $MESSAGE
    SUBJECT="WARNING: Application log folders older than 10 days are deleted $(date)"
    mail -s "$SUBJECT" "$TO" < $MESSAGE
    rm $MESSAGE /tmp/folder.out
fi
```
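在真正启用删除之前,你可以先单独运行脚本中的 `find` 命令,把 `-exec` 部分换成 `-print`,预览哪些文件夹将被删除(一个示意):

```
# find /var/log/app/app_log.* -type d -mtime +10 -print
```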
**“delete-old-folders.sh”** 设置可执行权限。
```
# chmod +x /opt/script/delete-old-folders.sh
```
最后添加一个 [cronjob][5] 自动化此任务。它于每天早上 7 点运行。
```
# crontab -e
0 7 * * * /bin/bash /opt/script/delete-old-folders.sh
```
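添加后,可以用 `crontab -l` 确认该任务已经生效(一个示意):

```
# crontab -l
0 7 * * * /bin/bash /opt/script/delete-old-folders.sh
```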
你将看到类似下面的输出。
```
Application log folders older than 10 days are deleted
+--------------------------------------------------------+
Oct 11 /var/log/app/app_log.11
Oct 12 /var/log/app/app_log.12
Oct 13 /var/log/app/app_log.13
Oct 14 /var/log/app/app_log.14
Oct 15 /var/log/app/app_log.15
```
### 2在 Linux 中删除早于 “X” 天的文件的 Bash 脚本
我们有一个名为 **“/var/log/apache/”** 的文件夹,其中保存了 15 天的日志,我们将删除早于 10 天的文件。
以下文章与该主题相关,因此你可能有兴趣阅读。
* **[如何在 Linux 中查找和删除早于 “X” 天和 “X” 小时的文件?][6]**
* **[如何在 Linux 中查找最近修改的文件/文件夹][7]**
* **[如何在 Linux 中自动删除或清理 /tmp 文件夹内容?][8]**
```
# ls -lh /var/log/apache/
-rw-rw-rw- 3 root root 24K Oct 1 23:52 2daygeek_access.01
-rw-rw-rw- 3 root root 24K Oct 2 23:52 2daygeek_access.02
-rw-rw-rw- 3 root root 24K Oct 3 23:52 2daygeek_access.03
-rw-rw-rw- 3 root root 24K Oct 4 23:52 2daygeek_access.04
-rw-rw-rw- 3 root root 24K Oct 5 23:52 2daygeek_access.05
-rw-rw-rw- 3 root root 24K Oct 6 23:54 2daygeek_access.06
-rw-rw-rw- 3 root root 24K Oct 7 23:53 2daygeek_access.07
-rw-rw-rw- 3 root root 24K Oct 8 23:51 2daygeek_access.08
-rw-rw-rw- 3 root root 24K Oct 9 23:52 2daygeek_access.09
-rw-rw-rw- 3 root root 24K Oct 10 23:52 2daygeek_access.10
-rw-rw-rw- 3 root root 24K Oct 11 23:52 2daygeek_access.11
-rw-rw-rw- 3 root root 24K Oct 12 23:52 2daygeek_access.12
-rw-rw-rw- 3 root root 24K Oct 13 23:52 2daygeek_access.13
-rw-rw-rw- 3 root root 24K Oct 14 23:52 2daygeek_access.14
-rw-rw-rw- 3 root root 24K Oct 15 23:52 2daygeek_access.15
```
该脚本会删除早于 10 天的文件,并通过邮件发送被删除文件的列表。
你可以根据需要修改 **“-mtime X”** 的值。另外,请把其中的电子邮箱替换成你自己的。
```
# /opt/script/delete-old-files.sh
#!/bin/bash
prev_count=0
fpath=/var/log/apache/2daygeek_access.*
find $fpath -type f -mtime +10 -exec ls -ltrh {} \; > /tmp/file.out
find $fpath -type f -mtime +10 -exec rm -rf {} \;
count=$(cat /tmp/file.out | wc -l)
if [ "$prev_count" -lt "$count" ] ; then
    MESSAGE="/tmp/file1.out"
    TO="[email protected]"
    echo "Apache access log files older than 10 days are deleted" >> $MESSAGE
    echo "+------------------------------------------------------+" >> $MESSAGE
    echo "" >> $MESSAGE
    cat /tmp/file.out | awk '{print $6,$7,$9}' >> $MESSAGE
    echo "" >> $MESSAGE
    SUBJECT="WARNING: Apache access log files older than 10 days are deleted $(date)"
    mail -s "$SUBJECT" "$TO" < $MESSAGE
    rm $MESSAGE /tmp/file.out
fi
```
**“delete-old-files.sh”** 设置可执行权限。
```
# chmod +x /opt/script/delete-old-files.sh
```
最后添加一个 [cronjob][5] 自动化此任务。它于每天早上 7 点运行。
```
# crontab -e
0 7 * * * /bin/bash /opt/script/delete-old-files.sh
```
你将看到类似下面的输出。
```
Apache access log files older than 10 days are deleted
+--------------------------------------------------------+
Oct 11 /var/log/apache/2daygeek_access.11
Oct 12 /var/log/apache/2daygeek_access.12
Oct 13 /var/log/apache/2daygeek_access.13
Oct 14 /var/log/apache/2daygeek_access.14
Oct 15 /var/log/apache/2daygeek_access.15
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-check-disk-usage-files-and-directories-folders-size-du-command/
[2]: https://www.2daygeek.com/linux-check-disk-space-usage-df-command/
[3]: https://www.2daygeek.com/category/monitoring-tools/
[4]: https://www.2daygeek.com/category/shell-script/
[5]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
[6]: https://www.2daygeek.com/how-to-find-and-delete-files-older-than-x-days-and-x-hours-in-linux/
[7]: https://www.2daygeek.com/check-find-recently-modified-files-folders-linux/
[8]: https://www.2daygeek.com/automatically-delete-clean-up-tmp-directory-folder-contents-in-linux/