Merge pull request #9 from LCTT/master

update
This commit is contained in:
wenwensnow 2019-10-22 11:08:08 +02:00 committed by GitHub
commit c7b896e522
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
40 changed files with 3526 additions and 1789 deletions

View File

@ -0,0 +1,151 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11478-1.html)
[#]: subject: (What is a Java constructor?)
[#]: via: (https://opensource.com/article/19/6/what-java-constructor)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
什么是 Java 构造器?
======
> 构造器是编程的强大组件。使用它们来释放 Java 的全部潜力。
![](https://img.linux.net.cn/data/attachment/album/201910/18/230523hdx7sy804xdtxybb.jpg)
在开源、跨平台编程领域,Java 是无可争议的重量级语言。尽管有许多[伟大的跨平台][2][框架][3],但很少有像 [Java][4] 那样统一和直接的。
当然Java 也是一种非常复杂的语言具有自己的微妙之处和惯例。Java 中与<ruby>构造器<rt> constructor</rt></ruby>有关的最常见问题之一是:它们是什么,它们的作用是什么?
简而言之:构造器是在 Java 中创建新<ruby>对象<rt>object</rt></ruby>时执行的操作。当 Java 应用程序创建一个你编写的类的实例时,它将检查构造器。如果(该类)存在构造器,则 Java 在创建实例时将运行构造器中的代码。这几句话中包含了大量的技术术语,但是当你看到它的实际应用时就会更加清楚,所以请确保你已经[安装了 Java][5] 并准备好进行演示。
### 没有使用构造器的开发日常
如果你正在编写 Java 代码那么你已经在使用构造器了即使你可能不知道它。Java 中的所有类都有一个构造器因为即使你没有创建构造器Java 也会在编译代码时为你生成一个。但是,为了进行演示,请忽略 Java 提供的隐藏构造器(因为默认构造器不添加任何额外的功能),并观察没有显式构造器的情况。
假设你正在编写一个简单的 Java 掷骰子应用程序,因为你想为游戏生成一个伪随机数。
首先,你可以创建骰子类来表示一个骰子。你玩了很久[《龙与地下城》][6],所以你决定创建一个 20 面的骰子。在这个示例代码中,变量 `dice` 是整数 20,表示可能的最大掷骰数(一个 20 面骰子的掷骰数不能超过 20)。变量 `roll` 是最终的随机数的占位符,`rand` 用作随机数生成器。
```
import java.util.Random;
public class DiceRoller {
private int dice = 20;
private int roll;
private Random rand = new Random();
```
接下来,在 `DiceRoller` 类中创建一个函数,执行计算机模拟掷骰所需的步骤:从 `rand` 中获取一个整数并将其赋给 `roll` 变量;考虑到 Java 从 0 开始计数、而 20 面骰子没有 0 点,给 `roll` 再加 1;然后打印结果。
```
public void Roller() {
roll = rand.nextInt(dice);
roll += 1;
System.out.println (roll);
}
```
最后,产生 `DiceRoller` 类的实例并调用其关键函数 `Roller`:
```
// main loop
public static void main (String[] args) {
System.out.printf("You rolled a ");
DiceRoller App = new DiceRoller();
App.Roller();
}
}
```
只要你安装了 Java 开发环境(如 [OpenJDK][10]),你就可以在终端上运行你的应用程序:
```
$ java dice.java
You rolled a 12
```
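顺带一提,像这样直接运行 `.java` 源文件而无需先编译,用到的是 Java 11 引入的单文件源码启动特性。在更早的 Java 版本上,可以把文件命名为 `DiceRoller.java`(与公共类同名),先编译再运行(示意命令):

```
$ javac DiceRoller.java
$ java DiceRoller
```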
在本例中,没有显式构造器。这是一个完全有效、合法的 Java 应用程序,但是它有一点局限性。例如,如果你把游戏《龙与地下城》放在一边,晚上去玩一些《快艇骰子》,你将需要六面骰子。在这个简单的例子中,更改代码不会有太多的麻烦,但是在复杂的代码中这不是一个现实的选择。解决这个问题的一种方法是使用构造器。
### 构造器的作用
这个示例项目中的 `DiceRoller` 类表示一个虚拟骰子工厂:当它被调用时,它创建一个虚拟骰子,然后进行“滚动”。然而,通过编写一个自定义构造器,你可以让掷骰子的应用程序询问你希望模拟哪种类型的骰子。
大部分代码都是一样的,除了构造器接受一个表示面数的数字参数。这个数字还不存在,但稍后将创建它。
```
import java.util.Random;
public class DiceRoller {
private int dice;
private int roll;
private Random rand = new Random();
// constructor
public DiceRoller(int sides) {
dice = sides;
}
```
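构造器还可以重载。例如,可以补充一个无参构造器,在调用者未指定面数时默认创建 20 面骰子(示意代码,是对原文的假设性扩展):

```
// 示意:重载的无参构造器,默认 20 面(假设的扩展,非原文代码)
public DiceRoller() {
    this(20);
}
```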
模拟滚动的函数保持不变:
```
public void Roller() {
roll = rand.nextInt(dice);
roll += 1;
System.out.println (roll);
}
```
代码的主体部分接收运行应用程序时提供的参数。如果这是一个复杂的应用程序,你就需要仔细解析参数并检查意外输入,但对于这个示例,唯一的预防措施是将参数字符串转换成整数类型。
```
public static void main (String[] args) {
System.out.printf("You rolled a ");
DiceRoller App = new DiceRoller( Integer.parseInt(args[0]) );
App.Roller();
}
```
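如果想对命令行参数做基本的防御性检查,可以参考下面的示意写法(非原文代码,默认面数 20 为假设值):

```
public static void main (String[] args) {
    int sides = 20; // 假设的默认面数
    if (args.length > 0) {
        try {
            sides = Integer.parseInt(args[0]);
        } catch (NumberFormatException e) {
            // 参数不是整数时给出提示并退出
            System.err.println("参数必须是一个整数,例如:java dice.java 6");
            System.exit(1);
        }
    }
    System.out.printf("You rolled a ");
    new DiceRoller(sides).Roller();
}
```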
启动这个应用程序,并提供你希望骰子具有的面数:
```
$ java dice.java 20
You rolled a 10
$ java dice.java 6
You rolled a 2
$ java dice.java 100
You rolled a 44
```
构造器已接受你的输入,因此在创建类实例时,会将 `sides` 变量设置为用户指定的任何数字。
构造器是编程中功能强大的组件。多加练习,用它们来释放 Java 的全部潜力。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/what-java-constructor
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/17/4/pyqt-versus-wxpython
[4]: https://opensource.com/resources/java
[5]: https://openjdk.java.net/install/index.html
[6]: https://opensource.com/article/19/5/free-rpg-day
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: https://openjdk.java.net/
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer

View File

@ -0,0 +1,257 @@
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11480-1.html)
[#]: subject: (How to Install and Configure PostgreSQL on Ubuntu)
[#]: via: (https://itsfoss.com/install-postgresql-ubuntu/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
如何在 Ubuntu 上安装和配置 PostgreSQL
======
> 本教程中,你将学习如何在 Ubuntu Linux 上安装和使用开源数据库 PostgreSQL。
[PostgreSQL][1] (又名 Postgres) 是一个功能强大的自由开源的关系型数据库管理系统 ([RDBMS][2]) ,其在可靠性、稳定性、性能方面获得了业内极高的声誉。它旨在处理各种规模的任务。它是跨平台的,而且是 [macOS Server][3] 的默认数据库。
如果你喜欢简单易用的 SQL 数据库管理系统,那么 PostgreSQL 将是一个正确的选择。它在兼容标准 SQL 的同时提供了许多附加特性,还可以由用户大量扩展,用户可以添加数据类型、函数并执行更多的操作。
之前我曾论述过 [在 Ubuntu 上安装 MySQL][4]。在本文中,我将向你展示如何安装和配置 PostgreSQL以便你随时可以使用它来满足你的任何需求。
![][5]
### 在 Ubuntu 上安装 PostgreSQL
PostgreSQL 可以从 Ubuntu 主存储库中获取。然而,和许多其它开发工具一样,它可能不是最新版本。
首先在终端中使用 [apt 命令][7] 检查 [Ubuntu 存储库][6] 中可用的 PostgreSQL 版本:
```
apt show postgresql
```
在我的 Ubuntu 18.04 中,它显示 PostgreSQL 的可用版本是 10(`10+190` 表示版本 10),而 PostgreSQL 版本 11 已经发布。
```
Package: postgresql
Version: 10+190
Priority: optional
Section: database
Source: postgresql-common (190)
Origin: Ubuntu
```
根据这些信息,你可以自主决定是安装 Ubuntu 提供的版本,还是获取 PostgreSQL 的最新发行版。
我将向你介绍这两种方法:
#### 方法一:通过 Ubuntu 存储库安装 PostgreSQL
在终端中,使用以下命令安装 PostgreSQL
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
根据提示输入你的密码。依据你的网速情况,程序将在几秒到几分钟内安装完成。顺便说一句,你可以随时查看 [Ubuntu 中的各种网速监测工具][8]。
> 什么是 postgresql-contrib?
> postgresql-contrib 或者说 contrib 包,包含一些不属于 PostgreSQL 核心包的实用工具和功能。在大多数情况下,最好将 contrib 包与 PostgreSQL 核心一起安装。
#### 方法二:在 Ubuntu 中安装最新版本的 PostgreSQL 11
要安装 PostgreSQL 11,你需要在 `sources.list` 中添加官方 PostgreSQL 存储库和证书,然后从那里安装它。
不用担心,这并不复杂。只需按照以下步骤操作。
首先添加 GPG 密钥:
```
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
```
现在,使用以下命令添加存储库。如果你使用的是 Linux Mint,则必须用你的 Mint 所基于的 Ubuntu 版本代号手动替换命令中的 `lsb_release -cs`:
```
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
```
现在一切就绪。使用以下命令安装 PostgreSQL
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
> PostgreSQL GUI 应用程序
> 你也可以安装用于管理 PostgreSQL 数据库的 GUI 应用程序pgAdmin
> `sudo apt install pgadmin4`
### PostgreSQL 配置
你可以通过执行以下命令来检查 PostgreSQL 是否正在运行:
```
service postgresql status
```
通过 `service` 命令,你可以启动、关闭或重启 `postgresql`。输入 `service postgresql` 并按回车将列出所有选项。
默认情况下,PostgreSQL 会创建一个拥有所有权限的特殊用户 `postgres`。要实际使用 PostgreSQL,你必须先登录该账户:
```
sudo su postgres
```
你的提示符会更改为类似于以下的内容:
```
postgres@ubuntu-VirtualBox:/home/ubuntu$
```
现在,使用 `psql` 来启动 PostgreSQL Shell
```
psql
```
你应该会看到如下提示符:
```
postgres=#
```
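顺带一提,上面的切换用户和启动 psql 两步也可以合并为一条命令(示意写法):

```
sudo -u postgres psql
```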
你可以输入 `\q` 以退出,输入 `\?` 获取帮助。
要查看现有的所有数据库,输入如下命令:
```
\l
```
输出内容类似于下图所示(按 `q` 键退出该视图):
![PostgreSQL Tables][10]
使用 `\du` 命令,你可以查看 PostgreSQL 用户:
![PostgreSQLUsers][11]
你可以使用以下命令更改任何用户(包括 `postgres`)的密码:
```
ALTER USER postgres WITH PASSWORD 'my_password';
```
**注意:**将 `postgres` 替换为你要更改的用户名,`my_password` 替换为所需要的密码。另外,不要忘记每条命令后面的 `;`(分号)。
建议你另外创建一个用户(不建议使用默认的 `postgres` 用户)。为此,请使用以下命令:
```
CREATE USER my_user WITH PASSWORD 'my_password';
```
运行 `\du`,你将看到该用户。但是,`my_user` 用户还没有任何属性。让我们给它添加超级用户权限:
```
ALTER USER my_user WITH SUPERUSER;
```
你可以使用以下命令删除用户:
```
DROP USER my_user;
```
要使用其他用户登录,使用 `\q` 命令退出,然后使用以下命令登录:
```
psql -U my_user
```
你可以使用 `-d` 参数直接连接数据库:
```
psql -U my_user -d my_db
```
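注意,使用 `-d` 连接时,数据库必须已经存在。下面是一条创建数据库并将其交给新用户的示意语句(`my_db`、`my_user` 沿用上文中的假设名称):

```
CREATE DATABASE my_db OWNER my_user;
```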
你可以使用其他已存在的用户调用 PostgreSQL。例如,我使用 `ubuntu`。要登录,从终端执行以下命令:
```
psql -U ubuntu -d postgres
```
**注意:**你必须指定一个数据库(默认情况下,它将尝试将你连接到与登录的用户名相同的数据库)。
如果遇到如下错误:
```
psql: FATAL: Peer authentication failed for user "my_user"
```
确保以正确的用户身份登录,并使用管理员权限编辑 `/etc/postgresql/11/main/pg_hba.conf`
```
sudo vim /etc/postgresql/11/main/pg_hba.conf
```
**注意:**用你的版本替换 `11`(例如 `10`)。
找到如下所示的一行:
```
local all postgres peer
```
替换为:
```
local all postgres md5
```
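作为参考,`pg_hba.conf` 中每一行的大致格式为“连接类型、数据库、用户、认证方式”(`local` 类型没有地址字段)。下面的注释只是示意各字段的含义,并非完整配置:

```
# 类型   数据库   用户       认证方式
local    all      postgres   md5
```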
然后重启 PostgreSQL
```
sudo service postgresql restart
```
使用 PostgreSQL 与使用其他 SQL 类型的数据库相同。由于本文旨在帮助你完成初步的设置,因此不涉及具体的命令。不过,这里有个[非常有用的命令备忘录][12]可供参考!另外,手册(`man psql`)和[文档][13]也非常有用。
### 总结
希望本文能指导你完成在 Ubuntu 系统上安装和准备 PostgreSQL 的过程。如果你不熟悉 SQL,你应该阅读[基本的 SQL 命令][15]。
如果你有任何问题或疑惑,请随时在评论部分提出。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-postgresql-ubuntu/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[lnrCoder](https://github.com/lnrCoder)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://www.postgresql.org/
[2]: https://www.codecademy.com/articles/what-is-rdbms-sql
[3]: https://www.apple.com/in/macos/server/
[4]: https://itsfoss.com/install-mysql-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-postgresql-ubuntu.png?resize=800%2C450&ssl=1
[6]: https://itsfoss.com/ubuntu-repositories/
[7]: https://itsfoss.com/apt-command-guide/
[8]: https://itsfoss.com/network-speed-monitor-linux/
[9]: https://itsfoss.com/fix-gvfsd-smb-high-cpu-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_tables.png?fit=800%2C303&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_users.png?fit=800%2C244&ssl=1
[12]: https://gist.github.com/Kartones/dd3ff5ec5ea238d4c546
[13]: https://www.postgresql.org/docs/manuals/
[14]: https://itsfoss.com/sync-any-folder-with-dropbox/
[15]: https://itsfoss.com/basic-sql-commands/

View File

@ -0,0 +1,184 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11477-1.html)
[#]: subject: "How to Install Linux on Intel NUC"
[#]: via: "https://itsfoss.com/install-linux-on-intel-nuc/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
在 Intel NUC 上安装 Linux
======
![](https://img.linux.net.cn/data/attachment/album/201910/18/221221pw3hbbi3bbbbprr4.jpg)
上周,我买了一台 [Intel NUC][1]。虽然它体积很小,但与成熟的台式机相比毫不逊色。实际上,大部分的[基于 Linux 的微型 PC][2] 都是基于 Intel NUC 构建的。
我买了第 8 代 Core i3 处理器的“<ruby>准系统<rt>barebone</rt></ruby>” NUC。准系统意味着该设备没有 RAM、没有硬盘显然也没有操作系统。我添加了一个 [Crucial 的 8 GB 内存条][3](大约 33 美元)和一个 [240 GB 的西数的固态硬盘][4](大约 45 美元)。
现在,我已经有了一台不到 400 美元的电脑。因为我已经有了一个电脑屏幕和键鼠套装,所以我没有把它们计算在内。
![在我的办公桌上放着一个崭新的英特尔 NUC NUC8i3BEH后面有树莓派 4][5]
我买这个 Intel NUC 的主要原因就是我想在实体机上测试各种各样的 Linux 发行版。我已经有一个 [树莓派 4][6] 设备作为一个入门级的桌面系统,但它是一个 [ARM][7] 设备,因此,只有少数 Linux 发行版可用于树莓派上。LCTT 译注:新发布的 Ubuntu 19.10 支持树莓派 4B
*这篇文章里的亚马逊链接是(原文的)受益链接。请参阅我们的[受益政策][8]。*
### 在 NUC 上安装 Linux
现在我准备安装 Ubuntu 18.04 LTS,因为我手头就有这个系统的安装文件。你也可以按照这个教程安装其他的发行版。在最重要的分区步骤之前,前面的步骤都大致相同。
#### 第一步:创建一个 USB 启动盘
你可以在 Ubuntu 官网下载它的安装文件,然后使用另一台电脑[创建一个 USB 启动盘][9]。你可以使用 [Rufus][10] 和 [Etcher][11] 这样的软件;在 Ubuntu 上,你可以使用默认的启动盘创建工具。
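如果你习惯命令行,也可以用 `dd` 写入启动盘。下面是一条示意命令(其中的 ISO 文件名和 `/dev/sdX` 均为假设,执行前务必用 `lsblk` 确认 U 盘的设备名,写错设备会毁掉上面的数据):

```
$ sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```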
#### 第二步:确认启动顺序是正确的
将你的 USB 启动盘插入到你的电脑并开机。一旦你看到 “Intel NUC” 字样出现在你的屏幕上,快速的按下 `F2` 键进入到 BIOS 设置中。
![Intel NUC 的 BIOS 设置][12]
在这里,只是确认一下你的第一启动项是你的 USB 设备。如果不是,切换启动顺序。
如果你修改了一些选项,按 `F10` 键保存退出,否则直接按下 `ESC` 键退出 BIOS 设置。
#### 第三步:正确分区,安装 Linux
现在,当机器重启的时候,你就可以看到熟悉的 GRUB 界面,可以让你试用或者安装 Ubuntu。我们现在选择安装它。
开始的几个安装步骤非常简单,选择键盘的布局,是否连接网络还有一些其他简单的设置。
![在安装 Ubuntu Linux 时选择键盘布局][14]
你可以选择常规安装,默认情况下它会安装一些有用的应用程序。
![][15]
接下来是需要注意的部分。你有两种选择:
* “<ruby>擦除磁盘并安装 Ubuntu<rt>Erase disk and install Ubuntu</rt></ruby>”:最简单的选项,它将在整个磁盘上安装 Ubuntu。如果你只想在 Intel NUC 上使用一个操作系统请选择此选项Ubuntu 将负责剩余的工作。
* “<ruby>其他选项<rt>Something else</rt></ruby>”:这是一个控制所有选择的高级选项。就我而言,我想在同一 SSD 上安装多个 Linux 发行版。因此,我选择了此高级选项。
![][16]
**如果你选择了“<ruby>擦除磁盘并安装 Ubuntu<rt>Erase disk and install Ubuntu</rt></ruby>”,点击“<ruby>继续<rt>Continue</rt></ruby>”,直接跳到第四步,**
如果你选择了高级选项,请按照下面剩下的部分进行操作。
选择固态硬盘,然后点击“<ruby>新建分区表<rt>New Partition Table</rt></ruby>”。
![][17]
它会给你显示一个警告。直接点击“<ruby>继续<rt>Continue</rt></ruby>”。
![][18]
现在你就可以看到 SSD 磁盘里的空闲空间。我的想法是创建一个用于 EFI 引导加载程序的 EFI 系统分区、一个根(`/`)分区和一个主目录(`/home`)分区。这里我并没有创建[交换分区][19],Ubuntu 会根据自己的需要创建交换文件,我以后也可以通过[创建新的交换文件][32]来扩展交换空间。
我将在磁盘上保留近 200 GB 的可用空间,以便可以在此处安装其他 Linux 发行版。你可以将其全部用于主目录分区。保留单独的根分区和主目录分区可以在你需要重新安装系统时帮你保存里面的数据。
选择可用空间,然后单击加号以添加分区。
![][20]
一般来说100MB 足够 EFI 的使用,但是某些发行版可能需要更多空间,因此我要使用 500MB 的 EFI 分区。
![][21]
接下来,我将使用 20GB 的根分区。如果你只使用一个发行版,则可以随意地将其增加到 40GB。
根目录(`/`)是系统文件存放的地方。你的程序缓存和你安装的程序将会有一些文件放在这个目录下边。我建议你可以阅读一下 [Linux 文件系统层次结构][22]来了解更多相关内容。
填入分区的大小,选择 Ext4 文件系统,选择 `/` 作为挂载点。
![][24]
接下来是创建主目录分区。我再说一下:如果你仅仅想使用一个 Linux 发行版,那就把剩余的空间都用上吧。为主目录分区选择一个合适的大小。
主目录是你个人的文件,比如文档、图片、音乐、下载和一些其他的文件存储的地方。
![][25]
既然你创建好了 EFI 分区、根分区、主目录分区,那你就可以点击“<ruby>现在安装<rt>Install Now</rt></ruby>”按钮安装系统了。
![][26]
它将会提示你新的改变将会被写入到磁盘,点击“<ruby>继续<rt>Continue</rt></ruby>”。
![][27]
#### 第四步:安装 Ubuntu
事情到这里就非常简单了。现在选择你的时区(或者以后再选也可以)。
![][28]
接下来,输入你的用户名、主机名以及密码。
![][29]
安装过程大约需要 7~8 分钟,期间会播放介绍幻灯片,之后就安装完成了。
![][30]
一旦安装完成,你就可以重新启动了。
![][31]
当你重启的时候,你必须要移除你的 USB 设备,否则你将会再次进入安装系统的界面。
这就是在 Intel NUC 设备上安装 Linux 所需要做的一切。坦白说,你可以在其他任何系统上使用相同的过程。
### Intel NUC 和 Linux 在一起:如何使用它?
我非常喜欢 Intel NUC。它不占用太多的桌面空间,而且性能足以取代传统的台式机。你可以将它的内存升级到 32GB,也可以安装两块 SSD 硬盘。总之,它提供了充分的配置和升级空间。
如果你想购买一台台式机,我非常推荐你购买 [Intel NUC][1] 迷你主机。如果你不想自己安装系统,那么可以购买一台[预装了 Linux 系统的迷你主机][2]。
你是否已经有了一台 Intel NUC?使用体验如何?有什么建议想与我们分享吗?欢迎在下面评论。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-linux-on-intel-nuc/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW "Intel NUC"
[2]: https://itsfoss.com/linux-based-mini-pc/
[3]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 "8GB RAM from Crucial"
[4]: https://www.amazon.com/Western-Digital-240GB-Internal-WDS240G1G0B/dp/B01M9B2VB7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M9B2VB7 "240 GB Western Digital SSD"
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/intel-nuc.jpg?resize=800%2C600&ssl=1
[6]: https://itsfoss.com/raspberry-pi-4/
[7]: https://en.wikipedia.org/wiki/ARM_architecture
[8]: https://itsfoss.com/affiliate-policy/
[9]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/
[10]: https://rufus.ie/
[11]: https://www.balena.io/etcher/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/boot-screen-nuc.jpg?ssl=1
[13]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-1_tutorial.jpg?ssl=1
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-2_tutorial.jpg?ssl=1
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-3_tutorial.jpg?ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-4_tutorial.jpg?ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-5_tutorial.jpg?ssl=1
[19]: https://itsfoss.com/swap-size/
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-6_tutorial.jpg?ssl=1
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-7_tutorial.jpg?ssl=1
[22]: https://linuxhandbook.com/linux-directory-structure/
[23]: https://itsfoss.com/share-folders-local-network-ubuntu-windows/
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-8_tutorial.jpg?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-9_tutorial.jpg?ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-10_tutorial.jpg?ssl=1
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-11_tutorial.jpg?ssl=1
[28]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-12_tutorial.jpg?ssl=1
[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-13_tutorial.jpg?ssl=1
[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-14_tutorial.jpg?ssl=1
[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-15_tutorial.jpg?ssl=1
[32]: https://itsfoss.com/create-swap-file-linux/

View File

@ -0,0 +1,214 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11487-1.html)
[#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
[#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Manjaro 18.1KDE安装图解
======
在 Manjaro 18.0Illyria发布一年之际该团队发布了他们的下一个重要版本即 Manjaro 18.1,代号为 “Juhraya”。该团队还发布了一份官方声明称 Juhraya 包含了许多改进和错误修复。
### Manjaro 18.1 中的新功能
以下列出了 Manjaro 18.1 中的一些新功能和增强功能:
* 可以在 LibreOffice 或 Free Office 之间选择
* Xfce 版的新 Matcha 主题
* 在 KDE 版本中重新设计了消息传递系统
* 通过 bauh 工具支持 Snap 和 Flatpak 软件包
### 最小系统需求
* 1 GB RAM
* 1 GHz 处理器
* 大约 30 GB 硬盘空间
* 互联网连接
* 启动介质USB/DVD
### 安装 Manjaro 18.1KDE 版)的分步指南
要在系统中开始安装 Manjaro 18.1KDE 版),请遵循以下步骤:
#### 步骤 1) 下载 Manjaro 18.1 ISO
在安装之前,你需要从位于 [这里][1] 的官方下载页面下载 Manjaro 18.1 的最新副本。由于我们这里介绍的是 KDE 版本,因此我们选择 KDE 版本。但是对于所有桌面环境(包括 Xfce、KDE 和 Gnome 版本),安装过程都是相同的。
#### 步骤 2) 创建 USB 启动盘
从 Manjaro 下载页面成功下载 ISO 文件后,就可以创建 USB 磁盘了。将下载的 ISO 文件复制到 USB 磁盘中,然后创建可引导磁盘。确保将你的引导设置更改为使用 USB 引导,并重新启动系统。
#### 步骤 3) Manjaro Live 版安装环境
系统重新启动时,它将自动检测到 USB 驱动器,并开始启动进入 Manjaro Live 版安装屏幕。
![Boot-Manjaro-18-1-kde-installation][3]
接下来,使用箭头键选择 “<ruby>启动Manjaro x86\_64 kde<rt>Boot: Manjaro x86\_64 kde</rt></ruby>”,然后按回车键以启动 Manjaro 安装程序。
#### 步骤 4) 选择启动安装程序
接下来,将启动 Manjaro 安装程序如果你已连接到互联网Manjaro 将自动检测你的位置和时区。单击 “<ruby>启动安装程序<rt>Launch Installer</rt></ruby>”,开始在系统中安装 Manjaro 18.1 KDE 版本。
![Choose-Launch-Installaer-Manjaro18-1-kde][4]
#### 步骤 5) 选择语言
接下来,安装程序将带你选择你的首选语言。
![Choose-Language-Manjaro18-1-Kde-Installation][5]
选择你想要的语言,然后单击“<ruby>下一步<rt>Next</rt></ruby>”。
#### 步骤 6) 选择时区和区域
在下一个屏幕中,选择所需的时区和区域,然后单击“<ruby>下一步<rt>Next</rt></ruby>”继续。
![Select-Location-During-Manjaro18-1-KDE-Installation][6]
#### 步骤 7) 选择键盘布局
在下一个屏幕中,选择你喜欢的键盘布局,然后单击“<ruby>下一步<rt>Next</rt></ruby>”继续。
![Select-Keyboard-Layout-Manjaro18-1-kde-installation][7]
#### 步骤 8) 选择分区类型
这是安装过程中非常关键的一步。它将允许你选择分区方式:
* 擦除磁盘
* 手动分区
* 并存安装
* 替换分区
如果在 VM虚拟机中安装 Manjaro 18.1,则将看不到最后两个选项。
如果你不熟悉 Manjaro Linux那么我建议你使用第一个选项<ruby>擦除磁盘<rt>Erase Disk</rt></ruby>),它将为你自动创建所需的分区。如果要创建自定义分区,则选择第二个选项“<ruby>手动分区<rt>Manual Partitioning</rt></ruby>”,顾名思义,它将允许我们创建自己的自定义分区。
在本教程中,我将通过选择“<ruby>手动分区<rt>Manual Partitioning</rt></ruby>”选项来创建自定义分区:
![Manual-Partition-Manjaro18-1-KDE][8]
选择第二个选项,然后单击“<ruby>下一步<rt>Next</rt></ruby>”继续。
如我们所见,我有一块 40 GB 的硬盘,因此我将在其上创建以下分区(2 + 10 + 22 + 4 + 2 正好等于 40 GB):
* `/boot`         2GBext4
* `/`             10 GBext4
* `/home`        22 GBext4
* `/opt`         4 GBext4
* <ruby>交换分区<rt>Swap</rt></ruby>       2 GB
当我们在上方窗口中单击“<ruby>下一步<rt>Next</rt></ruby>”时,将显示以下屏幕,选择“<ruby>新建分区表<rt>new partition table</rt></ruby>”:
![Create-Partition-Table-Manjaro18-1-Installation][9]
点击“<ruby>确定<rt>OK</rt></ruby>”。
现在选择可用空间,然后单击“<ruby>创建<rt>create</rt></ruby>”以将第一个分区设置为大小为 2 GB 的 `/boot`
![boot-partition-manjaro-18-1-installation][10]
单击“<ruby>确定<rt>OK</rt></ruby>”以继续操作,在下一个窗口中再次选择可用空间,然后单击“<ruby>创建<rt>create</rt></ruby>”以将第二个分区设置为 `/`,大小为 10 GB
![slash-root-partition-manjaro18-1-installation][11]
同样,将下一个分区创建为大小为 22 GB 的 `/home`
![home-partition-manjaro18-1-installation][12]
到目前为止,我们已经创建了三个分区作为主分区,现在创建下一个分区作为扩展分区:
![Extended-Partition-Manjaro18-1-installation][13]
单击“<ruby>确定<rt>OK</rt></ruby>”以继续。
创建大小分别为 4 GB 和 2 GB 的 `/opt` 和交换分区作为逻辑分区。
![opt-partition-manjaro-18-1-installation][14]
![swap-partition-manjaro18-1-installation][15]
完成所有分区的创建后,单击“<ruby>下一步<rt>Next</rt></ruby>”:
![choose-next-after-partition-creation][16]
#### 步骤 9) 提供用户信息
在下一个屏幕中,你需要提供用户信息,包括你的姓名、用户名、密码、计算机名等:
![User-creation-details-manjaro18-1-installation][17]
提供所有信息后,单击“<ruby>下一步<rt>Next</rt></ruby>”继续安装。
在下一个屏幕中,系统将提示你选择办公套件,因此请做出适合你的选择:
![Office-Suite-Selection-Manjaro18-1][18]
单击“<ruby>下一步<rt>Next</rt></ruby>”以继续。
#### 步骤 10) 摘要信息
在完成实际安装之前,安装程序将向你显示你选择的所有详细信息,包括语言、时区、键盘布局和分区信息等。单击“<ruby>安装<rt>Install</rt></ruby>”以继续进行安装过程。
![Summary-manjaro18-1-installation][19]
#### 步骤 11) 进行安装
现在,实际的安装过程开始,一旦完成,请重新启动系统以登录到 Manjaro 18.1 KDE 版:
![Manjaro18-1-Installation-Progress][20]
![Restart-Manjaro-18-1-after-installation][21]
#### 步骤 12) 安装成功后登录
重新启动后,我们将看到以下登录屏幕,使用我们在安装过程中创建的用户凭据登录:
![Login-screen-after-manjaro-18-1-installation][22]
点击“<ruby>登录<rt>Login</rt></ruby>”。
![KDE-Desktop-Screen-Manjaro-18-1][23]
就是这样!你已经在系统中成功安装了 Manjaro 18.1 KDE 版,并探索了所有令人兴奋的功能。请在下面的评论部分中发表你的反馈和建议。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://manjaro.org/download/official/kde/
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Boot-Manjaro-18-1-kde-installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Launch-Installaer-Manjaro18-1-kde.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Language-Manjaro18-1-Kde-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Location-During-Manjaro18-1-KDE-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Keyboard-Layout-Manjaro18-1-kde-installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manual-Partition-Manjaro18-1-KDE.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Create-Partition-Table-Manjaro18-1-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-manjaro-18-1-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-manjaro18-1-installation.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-manjaro18-1-installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Extended-Partition-Manjaro18-1-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/opt-partition-manjaro-18-1-installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/swap-partition-manjaro18-1-installation.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/choose-next-after-partition-creation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/User-creation-details-manjaro18-1-installation.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Office-Suite-Selection-Manjaro18-1.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Summary-manjaro18-1-installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manjaro18-1-Installation-Progress.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Restart-Manjaro-18-1-after-installation.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-after-manjaro-18-1-installation.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/KDE-Desktop-Screen-Manjaro-18-1.jpg

View File

@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11483-1.html)
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
变异测试实例:如何利用失败?
======
> 使用事先设计好的失败来确保你的代码达到预期的结果,并按照 .NET xUnit.net 测试框架来进行测试。
![](https://img.linux.net.cn/data/attachment/album/201910/20/200030ipm13zmi08mv8z34.jpg)
在《[变异测试是 TDD 的演变][2]》一文中,我谈到了迭代的力量:在可度量的测试中,迭代能够保证找到问题的解决方案。在那篇文章中,我们讨论了迭代法如何帮助确定计算给定数字平方根的代码实现。
我还演示了最有效的方法是找到可衡量的目标或测试,然后以最佳猜测值开始迭代。正如所预期的,第一次测试通常会失败。因此,必须根据可衡量的目标或测试对失败的代码进行完善。根据运行结果,对测试值进行验证或进一步加以完善。
在此模型中,学习获得解决方案的唯一方法是反复失败。这听起来有悖常理,但它确实有效。
按照这种分析,本文探讨了在构建包含某些依赖项的解决方案时使用 DevOps 的最佳方法。第一步是编写一个预期结果失败的用例。
### 依赖项的问题在于你不能依赖它们
正如<ruby>迈克尔·尼加德<rt>Michael Nygard</rt></ruby>在《[没有终结状态的架构][3]》中机智地表示的那样,依赖问题是一个很大的话题,最好留到另一篇文章中讨论。在这里,你将会看到依赖项给项目带来的一些潜在问题,以及如何利用测试驱动开发(TDD)来避免这些陷阱。
首先,找到现实生活中的一个挑战,然后看看如何使用 TDD 解决它。
### 谁把猫放出来?
![一只猫站在屋顶][4]
在敏捷开发环境中,通过定义期望结果开始构建解决方案会很有帮助。通常,在 <ruby>[用户故事][5]<rt>user story</rt></ruby> 中描述期望结果:
> 我想使用我的家庭自动化系统HAS来控制猫何时可以出门因为我想保证它在夜间的安全。
现在你已经有了一个用户故事,你需要通过提供一些功能要求(即指定验收标准)来对其进行详细说明。从用伪代码描述的最简单的场景开始:
> 场景 1在夜间关闭猫门
>
> * 用时钟监测到了晚上的时间
> * 时钟通知 HAS 系统
> * HAS 关闭支持物联网IoT的猫门
### 分解系统
开始构建之前,你需要将正在构建的系统(HAS)分解为依赖项。你必须要做的第一件事是识别所有依赖项(如果幸运的话,你的系统没有依赖项,那会更容易,但这样的系统可以说不是非常有用)。
从上面的简单场景中,你可以看到所需的业务成果(自动控制猫门)取决于对夜间情况监测。这种依赖性取决于时钟。但是时钟是无法区分白天和夜晚的。需要你来提供这种逻辑。
正在构建的系统中的另一个依赖项是能够自动访问猫门并启用或关闭它。该依赖项很可能取决于具有 IoT 功能的猫门提供的 API。
### 面对依赖管理,快速失败
为了满足依赖项,我们将构建确定当前时间是白天还是晚上的逻辑。本着 TDD 的精神,我们将从一个小小的失败开始。
有关如何设置此练习所需的开发环境和脚手架的详细说明,请参阅我的[上一篇文章][2]。我们将重用相同的 .NET 环境和 [xUnit.net][6] 框架。
接下来,创建一个名为 HAS(“家庭自动化系统”)的新项目,并在其中创建一个名为 `UnitTest1.cs` 的文件。在该文件中,编写第一个失败的单元测试。在此单元测试中,描述你的期望结果。例如,当系统运行时,如果时间是晚上 7 点,负责确定是白天还是夜晚的组件将返回值 `Nighttime`。
这是描述期望值的单元测试:
```
using System;
using Xunit;
using app; // 引用 DayOrNightUtility 所在的 app 命名空间
namespace unittest
{
public class UnitTest1
{
DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
[Fact]
public void Given7pmReturnNighttime()
{
var expected = "Nighttime";
var actual = dayOrNightUtility.GetDayOrNight();
Assert.Equal(expected, actual);
}
}
}
```
至此,你可能已经熟悉了单元测试的结构。快速复习一下:在此示例中,通过给单元测试一个描述性名称 `Given7pmReturnNighttime` 来描述期望结果。然后,在单元测试的主体中,创建一个名为 `expected` 的变量,并为该变量指定期望值(在该示例中,值为 `Nighttime`)。之后,将实际值(在组件或服务处理一天中的时间之后可用)赋给名为 `actual` 的变量。
最后,通过断言期望值和实际值是否相等来检查是否满足期望结果:`Assert.Equal(expected, actual)`。
你还可以在上面的列表中看到名为 `dayOrNightUtility` 的组件或服务。该模块能够接收消息 `GetDayOrNight`,并且返回 `string` 类型的值。
同样,本着 TDD 的精神,这里描述的组件或服务还尚未构建(在此提前描述只是为了说明)。构建它们是由所描述的期望结果来驱动的。
`app` 文件夹中创建一个新文件,并将其命名为 `DayOrNightUtility.cs`。将以下 C 代码添加到该文件中并保存:
```
using System;
namespace app {
public class DayOrNightUtility {
public string GetDayOrNight() {
string dayOrNight = "Undetermined";
return dayOrNight;
}
}
}
```
现在转到命令行,将目录更改为 `unittests` 文件夹,然后运行:
```
$ dotnet test
[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```
恭喜,你已经完成了第一个失败的单元测试。单元测试的期望结果是 `DayOrNightUtility` 方法返回字符串 `Nighttime`,但它返回的是 `Undetermined`。
### 修复失败的单元测试
修复失败的测试的一种快速而粗略的方法是将值 `Undetermined` 替换为值 `Nighttime` 并保存更改:
```
using System;
namespace app {
public class DayOrNightUtility {
public string GetDayOrNight() {
string dayOrNight = "Nighttime";
return dayOrNight;
}
}
}
```
现在再次运行测试,成功了:
```
Starting test execution, please wait...
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```
但是,对值进行硬编码基本上是在作弊,最好为 `DayOrNightUtility` 类赋予一些智能。修改 `GetDayOrNight` 方法以包括一些时间计算逻辑:
```
public string GetDayOrNight() {
string dayOrNight = "Daylight";
DateTime time = new DateTime(); // 注意:new DateTime() 得到默认时间 0001-01-01 00:00,Hour 为 0
if(time.Hour < 7) {
dayOrNight = "Nighttime";
}
return dayOrNight;
}
```
该方法现在构造一个 `DateTime` 对象,并检查其 `Hour` 是否小于上午 7 点(如代码注释所示,`new DateTime()` 得到的是默认时间而非系统当前时间,其 `Hour` 恒为 0)。如果小于,则处理逻辑将 `dayOrNight` 字符串值从 `Daylight` 改为 `Nighttime`。现在,单元测试通过了。
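顺带一提,如果想让这段逻辑可以被确定性地测试,一种常见做法是把时间作为参数注入,而不是在方法内部构造。下面是一个示意(非原文代码,“晚 7 点以后算夜间”这一阈值也是假设):

```
// 示意:注入时间参数,便于在测试中传入固定时刻(非原文代码)
public string GetDayOrNight(DateTime time) {
    // 假设:早 7 点之前或晚 7 点(19 时)之后视为夜间
    if (time.Hour < 7 || time.Hour >= 19) {
        return "Nighttime";
    }
    return "Daylight";
}
```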
### 测试驱动解决方案的开始
现在,我们已经开始了基本的单元测试,并为我们的时间依赖项提供了可行的解决方案。后面还有更多的测试案例需要执行。
在下一篇文章中,我将演示如何对白天时间进行测试以及如何在整个过程中利用故障。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://linux.cn/article-11468-1.html
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/
[7]: http://www.google.com/search?q=new+msdn.microsoft.com

View File

@ -1,32 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11485-1.html)
[#]: subject: (Essential Accessories for Intel NUC Mini PC)
[#]: via: (https://itsfoss.com/intel-nuc-essential-accessories/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Intel NUC 迷你 PC 的基本配件
英特尔 NUC 迷你 PC 的基本配件
======
几周前,我买了一台 [Intel NUC 迷你 PC][1]。我[在上面安装了 Linux][2],我非常享受。这个小巧的无风扇机器取代了台式机那庞大的 CPU。
![](https://img.linux.net.cn/data/attachment/album/201910/20/224650me0qoiqjeiysqqph.jpg)
Intel NUC 通常采用准系统形式,这意味着它没有任何内存、硬盘,也显然没有操作系统。许多[基于 Linux 的微型 PC][3] 定制化 Intel NUC 并添加磁盘、RAM 和操作系统将它出售给终端用户
几周前,我买了一台 [英特尔 NUC 迷你 PC][1]。我[在上面安装了 Linux][2],我非常喜欢它。这个小巧的无风扇机器取代了台式机那庞大的 CPU
不用说,它不像大多数其他台式机那样带有键盘,鼠标或屏幕
英特尔 NUC 通常采用准系统形式,这意味着它没有任何内存、硬盘,也显然没有操作系统。许多[基于 Linux 的微型 PC][3] 定制化英特尔 NUC 并添加磁盘、RAM 和操作系统将它出售给终端用户
[Intel NUC][4] 是一款出色的设备,如果你要购买台式机,我强烈建议你购买它。如果你正在考虑购买 Intel NUC你需要买一些配件以便开始使用它
不用说,它不像大多数其他台式机那样带有键盘、鼠标或屏幕
### 基本的 Intel NUC 配件
[英特尔 NUC][4] 是一款出色的设备,如果你要购买台式机,我强烈建议你购买它。如果你正在考虑购买英特尔 NUC你需要买一些配件以便开始使用它。
### 基本的英特尔 NUC 配件
![][5]
_文章中的 Amazon 链接是联盟链接。请阅读我们的[联盟政策][6]。_
*文章中的 Amazon 链接是(原文的)受益链接。请阅读我们的[受益政策][6]。*
#### 外围设备:显示器、键盘和鼠标
这很容易想到。你需要有屏幕、键盘和鼠标才能使用计算机。你需要一台有 HDMI 连接的显示器和一个 USB 或无线键盘鼠标。如果你已经有了这些东西,那你可以继续。
这很容易想到。你需要有屏幕、键盘和鼠标才能使用计算机。你需要一台有 HDMI 连接的显示器和一个 USB 或无线键盘鼠标。如果你已经有了这些东西,那你可以继续。
如果你正在寻求建议,我建议购买 LG IPS LED 显示器。我有两台 22 英寸的型号,我对它提供的清晰视觉效果感到满意。
@ -34,35 +36,27 @@ _文章中的 Amazon 链接是联盟链接。请阅读我们的[联盟政策][6]
![HP EliteDisplay Monitor][8]
我在多屏设置中同时连接了三台显示器。一台显示器连接到指定的 HDMI 端口。两台显示器通过[Club 3D 的 Thunderbolt 转 HDMI 分配器][9]连接到 Thunderbolt 端口。
我在多屏设置中同时连接了三台显示器。一台显示器连接到指定的 HDMI 端口。两台显示器通过 [Club 3D 的 Thunderbolt 转 HDMI 分配器][9]连接到 Thunderbolt 端口。
你也可以选择超宽显示器。我对此没有亲身经历。
#### 交流电源线
当你拿到 NUC 时,你会惊讶地发现,尽管它有电源适配器,但它并没有插头。
![][10]
由于不同国家/地区的插头不同,因此英特尔决定将其从 NUC 套件中删除。我使用的是旧笔记本的电源线,但是如果你没有笔记本的电源线,那么很可能你需要自己准备一个。
#### 内存
Intel NUC 有两个内存插槽,最多可支持 32GB 内存。由于我的是 i3 核心处理器,因此我选择了 [Crucial 的 8GB DDR4 内存][11],价格约为 $33。
英特尔 NUC 有两个内存插槽,最多可支持 32GB 内存。由于我的是 i3 核心处理器,因此我选择了 [Crucial 的 8GB DDR4 内存][11],价格约为 $33。
![][12]
8 GB 内存在大多数情况下都没问题,但是如果你的是 i7 核心处理器,那么可以选择 [16GB 内存][13],价格约为 $67。你可以加,以获得最大 32GB。选择全在于你。
8 GB 内存在大多数情况下都没问题,但是如果你的是 i7 核心处理器,那么可以选择 [16GB 内存][13],价格约为 $67。你可以加两条以获得最大 32GB。选择全在于你。
#### 硬盘(重要)
Intel NUC 同时支持 2.5 英寸驱动器和 M.2 SSD因此你可以同时使用两者以获得更多存储空间。
英特尔 NUC 同时支持 2.5 英寸驱动器和 M.2 SSD因此你可以同时使用两者以获得更多存储空间。
2.5 英寸插槽可同时容纳 SSD 和 HDD。我强烈建议选择 SSD因为它比 HDD 快得多。[480GB 2.5寸][14]的价格是 $60。我认为这是一个合理的价格。
2.5 英寸插槽可同时容纳 SSD 和 HDD。我强烈建议选择 SSD因为它比 HDD 快得多。[480GB 2.5寸][14]的价格是 $60。我认为这是一个合理的价格。
![][15]
2.5 英寸驱动器的标准 SATA 口速度为 6Gb/秒。根据你是否选择 NVMe SSDM.2 插槽可能会更快。 NVMe非易失性内存主机控制器接口规范SSD 的速度比普通 SSD也称为 SATA SSD快 4 倍。但是它们可能也比 SATA M2 SSD 贵一些。
2.5 英寸驱动器的标准 SATA 口速度为 6 Gb/秒。根据你是否选择 NVMe SSD,M.2 插槽可能会更快。NVMe(非易失性内存主机控制器接口规范)SSD 的速度比普通 SSD(也称为 SATA SSD)快 4 倍,但它们可能也比 SATA M.2 SSD 贵一些。
当购买 M.2 SSD 时,请检查产品图片。无论是 NVMe 还是 SATA SSD都应在磁盘本身的图片中提到。你可以考虑使用[经济的三星 EVO NVMe M.2 SSD][16]。
@ -70,19 +64,27 @@ Intel NUC 同时支持 2.5 英寸驱动器和 M.2 SSD因此你可以同时使
M.2 插槽和 2.5 英寸插槽中的 SATA SSD 具有相同的速度。这就是为什么如果你不想选择昂贵的 NVMe SSD建议你选择 2.5 英寸 SATA SSD并保留 M.2 插​​槽供以后升级。
#### 交流电源线
当我拿到 NUC 时,我惊讶地发现,尽管它有电源适配器,但它并没有插头。
正如一些读者指出的那样,你可能有完整的电源线。这取决于你的地理区域和供应商。因此,请检查产品说明和用户评论,以验证其是否具有完整的电源线。
![][10]
#### 其他配套配件
你需要使用 HDMI 线缆连接显示器。如果你要购买新显示器,通常应会有一根线缆。
如果要使用 M.2 插槽那么可能需要螺丝刀。Intel NUC 是一款出色的设备,你只需用手旋转四个脚即可拧开底部面板。你必须打开设备才能放置内存和磁盘。
如果要使用 M.2 插槽,那么可能需要螺丝刀。英特尔 NUC 是一款出色的设备,你只需用手旋转四个脚即可拧开底部面板。你必须打开设备才能放置内存和磁盘。
![Intel NUC with Security Cable | Image Credit Intel][18]
NUC 还有防盗孔,可与防盗绳一起使用。在业务环境中,建议使用防盗绳保护计算机安全。购买[防盗绳几美元][19]便可节省数百美元。
**你使用什么配件?**
### 你使用什么配件?
这些即使我在使用和建议使用的 Intel NUC 配件。你呢?如果你有一台 NUC你会使用哪些配件并推荐给其他 NUC 用户?
这些就是我在使用和建议使用的英特尔 NUC 配件。你呢?如果你有一台 NUC你会使用哪些配件并推荐给其他 NUC 用户?
--------------------------------------------------------------------------------
@ -91,14 +93,14 @@ via: https://itsfoss.com/intel-nuc-essential-accessories/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (barebone Intel NUC mini PC)
[2]: https://itsfoss.com/install-linux-on-intel-nuc/
[2]: https://linux.cn/article-11477-1.html
[3]: https://itsfoss.com/linux-based-mini-pc/
[4]: https://www.intel.in/content/www/in/en/products/boards-kits/nuc.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-accessories.png?ssl=1
@ -106,7 +108,7 @@ via: https://itsfoss.com/intel-nuc-essential-accessories/
[7]: https://www.amazon.com/HP-EliteDisplay-21-5-Inch-1FH45AA-ABA/dp/B075L4VKQF?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B075L4VKQF (HP EliteDisplay monitors)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/hp-elitedisplay-monitor.png?ssl=1
[9]: https://www.amazon.com/Club3D-CSV-1546-USB-C-Multi-Monitor-Splitter/dp/B06Y2FX13G?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B06Y2FX13G (thunderbolt to HDMI splitter from Club 3D)
[10]: https://itsfoss.com/wp-content/uploads/2019/09/ac-power-cord-3-pongs.webp
[10]: https://img.linux.net.cn/data/attachment/album/201910/20/224718eebvzvvvm0b6f3ow.jpg
[11]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB DDR4 RAM from Crucial)
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/crucial-ram.jpg?ssl=1
[13]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B019FRBHZ0?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B019FRBHZ0 (16 GB RAM)

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11481-1.html)
[#]: subject: (Top 10 open source video players for Linux)
[#]: via: (https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-linux/)
[#]: author: (Stella Aldridge https://opensourceforu.com/author/stella-aldridge/)
@ -10,15 +10,15 @@
Linux 中的十大开源视频播放器
======
[![][1]][2]
![][1]
_选择合适的视频播放器有助于确保你获得最佳的观看体验,并为你提供[创建视频网站][3]的工具。你甚至可以根据个人喜好自定义正在观看的视频。_
> 选择合适的视频播放器有助于确保你获得最佳的观看体验,并为你提供[创建视频网站][3]的工具。你甚至可以根据个人喜好自定义正在观看的视频。
因此,为了帮助你挑选适合你需求的最佳播放器,我们列出了 Linux 中十大开源播放器。
因此,为了帮助你挑选适合你需求的最佳播放器,我们列出了 Linux 中十大开源播放器。
让我们来看看:
**1\. XBMC Kodi 媒体中心**
### 1、XBMC Kodi 媒体中心
这是一个灵活的跨平台播放器,核心使用 C++ 编写,并提供 Python 脚本作为附加组件。使用 Kodi 的好处包括:
@ -28,11 +28,9 @@ _选择合适的视频播放器有助于确保你获得最佳的观看体验
* 有很多不错的附加组件,如视频和音频流插件、主题、屏幕保护程序等
* 它支持多种格式,如 MPEG-1、2、4、RealVideo、HVC、HEVC 等
### 2、VLC 媒体播放器
**2\. VLC 媒体播放器**
由于该播放器在一系列操作系统上具有令人印象深刻的功能和可用性,他在列表上是理所当然的。它使用 C、C++ 和 Objective C 编写用户无需使用插件这要归功于它对解码库的广泛支持。VLC 媒体播放器的优势包括:
由于该播放器在一系列操作系统上具有令人印象深刻的功能和可用性,它出现在列表上是理所当然的。它使用 C、C++ 和 Objective C 编写用户无需使用插件这要归功于它对解码库的广泛支持。VLC 媒体播放器的优势包括:
* 在 Linux 上支持 DVD 播放器
* 能够播放 .iso 文件
@ -40,54 +38,45 @@ _选择合适的视频播放器有助于确保你获得最佳的观看体验
* 可以直接从 U 盘或外部驱动器运行
* API 支持和浏览器支持(通过插件)
**3\. BomiCMPlayer**
### 3、BomiCMPlayer
这个灵活和强大的播放器被许多普通用户选择,它的优势有:
* 易于使用的图形用户界面 GUI
* 易于使用的图形用户界面GUI
* 令人印象深刻的播放能力
* 恢复播放的选项
* 可以恢复播放
* 支持字幕,可以渲染多个字幕文件
![][4]
### 4、Miro 音乐与视频播放器
**[![][4]][5]
4\. Miro Music and Video Player**
以前被称为 Democracy Player DTV Miro 由分享文化基金会Participatory Culture Foundation重新开发是一个不错的跨平台音频视频播放器。令人印象深刻因为
以前被称为 Democracy PlayerDTVMiro 由<ruby>参与文化基金会<rt>Participatory Culture Foundation</rt></ruby>重新开发,是一个不错的跨平台音频视频播放器。令人印象深刻,因为:
* 支持一些高清音频和视频
* 提供超过 40 种语言版本
* 可以播放多种文件格式例如QuickTime、WMV、MPEG 文件、音频视频接口 AVI、XVID
* 可以播放多种文件格式例如QuickTime、WMV、MPEG 文件、AVI、XVID
* 一旦可用,可以自动通知用户并下载视频
### 5、SMPlayer
**5\. SMPlayer**
这个跨平台的媒体播放器,只使用 C++ 的 Qt 库编写,它是一个强大的,多功能播放器。我们喜欢它,因为:
这个跨平台的媒体播放器,只使用 C++ 的 Qt 库编写,它是一个强大的多功能播放器。我们喜欢它,因为:
* 有多语言选择
* 支持所有默认格式
* 支持 EDL 文件,你可以配置从 Internet 获取的字幕
* 支持 EDL 文件,你可以配置从互联网获取的字幕
* 可从互联网下载的各种皮肤
* 倍速播放
**6\. MPV Player**
### 6、MPV 播放器
它用 C、Objective-C、Lua 和 Python 编写,免费、易于使用,并且有许多新功能。主要加分项有:
* 可以编译为一个库,公开客户端 API从而增强控制
* 允许媒体编码
* 平滑
* 平滑运动
**7\. Deepin Movie**
### 7、Deepin Movie
此播放器是开源媒体播放器的一个极好的例子,它有很多优势,包括:
@ -95,45 +84,37 @@ _选择合适的视频播放器有助于确保你获得最佳的观看体验
* 各种格式的视频文件可以通过这个播放器轻松播放
* 流媒体功能能让用户享受许多在线视频资源
### 8、Gnome 视频
以前称为 Totem这是 Gnome 桌面环境的播放器。
**8\. Gnome Videos**
以前称为 Totem这是 Gnome 桌面环境选择的播放器。
完全用 C 编写,使用 GStreamer 多媒体框架构建,另外的版本(&gt;2.7.1)使用 xine 作为后端。它是很棒的,因为:
完全用 C 编写,使用 GStreamer 多媒体框架构建,高于 2.7.1 的版本使用 xine 作为后端。它是很棒的,因为:
它支持大量的格式,包括:
* Supports for direct video playback from Internet channels such as Apple
* SHOUTcast、SMIL、M3U、Windows 媒体播放器格式等
* 你可以在播放过程中调整灯光设置,如亮度和对比度
* 加载 SubRip 字幕
* 支持从互联网频道(如 Apple直接播放视频
**9\. Xine Multimedia Player**
### 9、Xine 多媒体播放器
这是我们列表中另一个用 C 编写的跨平台多媒体播放器。它是一个全能播放器,因为:
* 它支持物理媒体以及视频设备。3gp MatroskaMKV、 MOV Mp4、音频格式
* 它支持物理媒体以及视频设备,支持 3gp、MKV、MOV、MP4 等视频格式和多种音频格式
* 网络协议V4L、DVB 和 PVR 等
* 它可以手动校正音频和视频流的同步
### 10、ExMPlayer
**10\. ExMPlayer**
最后单同样重要的一个ExMPlayer 是一个惊人的、强大的 MPlayer 的 GUI 前端。它的优点包括:
最后但同样重要的一个ExMPlayer 是一个惊人的、强大的 MPlayer 的 GUI 前端。它的优点包括:
* 可以播放任何媒体格式
* 支持网络流和字幕
* 易于使用的音频转换器
* 高品质的音频提取,而不会影响音质
上面的视频播放器在 Linux 上工作得很好。我们建议你尝试一下,选择一个最适合你的播放器。
上面这些视频播放器在 Linux 上工作得很好。我们建议你尝试一下,选择一个最适合你的播放器。
--------------------------------------------------------------------------------
@ -142,7 +123,7 @@ via: https://opensourceforu.com/2019/10/top-10-open-source-video-players-for-lin
作者:[Stella Aldridge][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11476-1.html)
[#]: subject: (Use sshuttle to build a poor man's VPN)
[#]: via: (https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
使用 sshuttle 构建一个穷人的虚拟专网
======
![][1]
如今,企业网络经常使用“虚拟专用网络”[来保证员工通信安全][2]。但是,使用的协议有时会降低性能。如果你可以使用 SSH 连接远程主机,那么你可以设置端口转发。但这可能会很痛苦,尤其是在你需要与该网络上的许多主机一起使用的情况下。试试 `sshuttle`,它可以通过 SSH 访问来设置快速简易的虚拟专网。请继续阅读以获取有关如何使用它的更多信息。
`sshuttle` 正是针对上述情况而设计的。远程端的唯一要求是主机必须有可用的 Python。这是因为 `sshuttle` 会构造并运行一些 Python 代码来帮助传输数据。
### 安装 sshuttle
`sshuttle` 被打包在官方仓库中,因此很容易安装。打开一个终端,并[使用 sudo][3] 来运行以下命令:
```
$ sudo dnf install sshuttle
```
安装后,你可以在手册页中找到相关信息:
```
$ man sshuttle
```
### 设置虚拟专网
最简单的情况就是将所有流量转发到远程网络。这不一定是一个疯狂的想法,尤其是如果你不在自己家里这样的受信任的本地网络中。将 `-r` 选项与 SSH 用户名和远程主机名一起使用:
```
$ sshuttle -r username@remotehost 0.0.0.0/0
```
但是,你可能希望将该虚拟专网限制为特定子网,而不是所有网络流量。(有关子网的完整讨论超出了本文的范围,但是你可以在[维基百科][4]上阅读更多内容。)假设你的办公室内部使用了预留的 A 类子网 10.0.0.0 和预留的 B 类子网 172.16.0.0。上面的命令变为:
```
$ sshuttle -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```
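其中 `/8` 和 `/16` 是 CIDR 前缀长度,表示地址中网络部分所占的位数。下面的示意展示了这两个前缀各自匹配的地址范围:

```
10.0.0.0/8     匹配 10.0.0.0 ~ 10.255.255.255
172.16.0.0/16  匹配 172.16.0.0 ~ 172.16.255.255
```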
这非常适合通过 IP 地址访问远程网络的主机。但是,如果你的办公室是一个拥有大量主机的大型网络,该怎么办?名称可能更方便,甚至是必须的。不用担心,`sshuttle` 还可以使用 `dns` 选项转发 DNS 查询:
```
$ sshuttle --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```
要使 `sshuttle` 以守护进程方式运行,请加上 `-D` 选项。它会将日志以 syslog 兼容的格式发送到 systemd 日志中。
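例如,把前面的子网与 DNS 设置和守护进程模式结合起来(示意命令):

```
$ sshuttle -D --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```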
根据本地和远程系统的功能,可以将 `sshuttle` 用于基于 IPv6 的虚拟专网。如果需要,你还可以设置配置文件并将其与系统启动集成。如果你想阅读更多有关 `sshuttle` 及其工作方式的信息,请[查看官方文档][5]。要查看代码,请[进入 GitHub 页面][6]。
*题图由 [Kurt Cotoaga][7] 拍摄并发表在 [Unsplash][8] 上。*
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/
作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/sshuttle-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Virtual_private_network
[3]: https://fedoramagazine.org/howto-use-sudo/
[4]: https://en.wikipedia.org/wiki/Subnetwork
[5]: https://sshuttle.readthedocs.io/en/stable/index.html
[6]: https://github.com/sshuttle/sshuttle
[7]: https://unsplash.com/@kydroon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/s/photos/shuttle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -1,58 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Samsung introduces SSDs it claims will 'never die')
[#]: via: (https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Samsung introduces SSDs it claims will 'never die'
======
New fail-in-place technology in Samsung's SSDs will allow the chips to gracefully recover from chip failure.
Samsung
[Solid-state drives][1] (SSDs) operate by writing to cells within the chip, and after so many writes, the cell eventually dies off and can no longer be written to. For that reason, SSDs have more actual capacity than listed. A 1TB drive, for example, has about 1.2TB of capacity, and as chips die off from repeated writes, new ones are brought online to keep the 1TB capacity.
But that's for gradual wear. Sometimes SSDs just up and die completely, and without warning after a whole chip fails, not just a few cells. So Samsung is trying to address that with a new generation of SSD memory chips with a technology it calls fail-in-place (FIP).
**Also read: [Inside Hyperconvergence: Combining compute, storage and networking][2]**
FIP technology allows a drive to cope with a failure by working around the dead chip and allowing the SSD to keep operating and just not using the bad chip. You will have less storage, but in all likelihood that drive will be replaced anyway, so this helps prevent data loss.
FIP also scans the data for any damage before copying it to the remaining NAND, which would be the first time I've ever seen a SSD with built-in data recovery.
### Built-in virtualization and machine learning technology
The new Samsung SSDs come with two other software innovations. The first is built-in virtualization technology, which allows a single SSD to be divided up into up to 64 smaller drives for a virtual environment.
The second is V-NAND machine learning technology, which helps to "accurately predict and verify cell characteristics, as well as detect any variation among circuit patterns through big data analytics," as Samsung put it. Doing so means much higher levels of performance from the drive.
As you can imagine, this technology is aimed at enterprises and large-scale data centers, not consumers. All told, Samsung is launching 19 models of these new SSDs under the names PM1733 and PM1735.
**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**
The PM1733 line features six models in a 2.5-inch U.2 form factor, offering storage capacity of between 960GB and 15.63TB, as well as four HHHL card-type drives with capacity ranging from 1.92TB to 30.72TB of storage. Each drive is guaranteed for one drive write per day (DWPD) for five years. In other words, the warranty is good for writing the equivalent of the drive's total capacity once per day every day for five years.
The PM1735 drives have lower capacity, maxing out at 12.8TB, but they are far more durable, guaranteeing three DWPD for five years. Both drives support PCI Express 4, which has double the throughput of the widely used PCI Express 3. The PM1735 offers nearly 14 times the sequential performance of a SATA-based SSD, with 8GB/s for read operations and 3.8GB/s for writes.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[2]: https://www.idginsiderpro.com/article/3409019/inside-hyperconvergence-combining-compute-storage-and-networking.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale)
[#]: via: (https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale
======
* _**The Foundation aims to make the database search engine “the fastest and most reliable SQL engine for massively distributed data processing.”**_
* _**Prestos architecture allows users to query a variety of data sources and move at scale and speed.**_
![Facebook][1]
Facebook, Uber, Twitter and Alibaba have joined hands to form a foundation to help Presto, a database search engine and processing tool, scale and diversify its community.
Presto will now be hosted under the Linux Foundation, the U.S.-based non-profit organization announced on Monday.
The newly established Presto Foundation will operate under a community governance model with representation from each of the founding members. It aims to make the engine “the fastest and most reliable SQL engine for massively distributed data processing.”
“The Linux Foundation is excited to work with the Presto community, collaborating to solve the increasing problem of massive distributed data processing at internet scale,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation.
**Presto can run on large clusters of machines**
Presto was developed at Facebook in 2012 as a high-performance distributed SQL query engine for large scale data analytics. Prestos architecture allows users to query a variety of data sources such as Hadoop, S3, Alluxio, MySQL, PostgreSQL, Kafka, MongoDB and move at scale and speed.
It can query data where it is stored without needing to move the data to a separate system. Its in-memory and distributed query processing results in query latencies of seconds to minutes.
“Presto has been designed for high performance exabyte-scale data processing on a large number of machines. Its flexible design allows processing data from a wide variety of data sources. From day one Presto has been designed with efficiency, scalability and reliability in mind, and it has been improved over the years to take on additional use cases at Facebook, such as batch and other application specific interactive use cases,” said Nezih Yigitbasi, Engineering Manager of Presto at Facebook.
Presto is being used by over a thousand Facebook employees for running several million queries and processing petabytes of data per day, according to Kathy Kam, Head of Open Source at Facebook.
**Expanding community for the benefit of all**
Facebook released the source code of Presto to developers in 2013 in the hope that other companies would help to drive the future direction of the project.
“It turns out many other companies were interested and so under The Linux Foundation, we believe the project can engage others and grow the community for the benefit of all,” said Kathy Kam.
Ubers data platform architecture uses Presto to extract critical insights from aggregated data. “Uber is honoured to partner with the Linux Foundation and major contributors from the tech community to bring the Presto Foundation to life. Our goal is to help create an open and collaborative community in which Presto developers can thrive,” asserted Brian Hsieh, Head of Open Source at Uber.
Liang Lin, Senior Director of Alibaba OLAP products, believes that the collaboration would eventually benefit the community as well as Alibaba and its customers.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/06/Facebook-Like.jpg?resize=350%2C213&ssl=1

View File

@ -1,65 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco: 13 IOS, IOS XE security flaws you should patch now)
[#]: via: (https://www.networkworld.com/article/3441221/cisco-13-ios-ios-xe-security-flaws-you-should-patch-now.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco: 13 IOS, IOS XE security flaws you should patch now
======
Cisco says vulnerabilities in IOS/IOS XE could cause DOS situation; warns on Traceroute setting
Woolzian / Getty Images
Cisco this week warned its IOS and IOS XE customers of 13 vulnerabilities in the operating system software they should patch as soon as possible.
All of the vulnerabilities revealed in the companys semiannual [IOS and IOS XE Software Security Advisory Bundle][1] have a security impact rating (SIR) of "high". Successful exploitation of the vulnerabilities could allow an attacker to gain unauthorized access to, conduct a command injection attack on, or cause a denial of service (DoS) condition on an affected device, Cisco stated. 
["How to determine if Wi-Fi 6 is right for you"][2]
Two of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software. Two others affect Cisco IOS Software, and eight of the vulnerabilities affect Cisco IOS XE Software. The final one affects the Cisco IOx application environment. Cisco has confirmed that none of the vulnerabilities affect Cisco IOS XR Software or Cisco NX-OS Software.  Cisco [has released software updates][3] that address these problems.
Some of the worst exposures include:
* A [vulnerability in the IOx application environment][4] for Cisco IOS Software could let an authenticated, remote attacker gain unauthorized access to the Guest Operating System (Guest OS) running on an affected device. The vulnerability is due to incorrect role-based access control (RBAC) evaluation when a low-privileged user requests access to a Guest OS that should be restricted to administrative accounts. An attacker could exploit this vulnerability by authenticating to the Guest OS by using the low-privileged-user credentials. An exploit could allow the attacker to gain unauthorized access to the Guest OS as a root.This vulnerability affects Cisco 800 Series Industrial Integrated Services Routers and Cisco 1000 Series Connected Grid Routers (CGR 1000) that are running a vulnerable release of Cisco IOS Software with Guest OS installed.  While Cisco did not rate this vulnerability as critical, it did have a Common Vulnerability Scoring System (CVSS) of 9.9 out of 10.  Cisco recommends disabling the guest feature until a proper fix is installed.
* An exposure in the [Ident protocol handler of Cisco IOS and IOS XE][5] software could allow a remote attacker to cause an affected device to reload. The problem exists because the affected software incorrectly handles memory structures, leading to a NULL pointer dereference, Cisco stated. An attacker could exploit this vulnerability by opening a TCP connection to specific ports and sending traffic over that connection. A successful exploit could let the attacker cause the affected device to reload, resulting in a denial of service (DoS) condition. This vulnerability affects Cisco devices that are running a vulnerable release of Cisco IOS or IOS XE Software and that are configured to respond to Ident protocol requests.
* A vulnerability in the [common Session Initiation Protocol (SIP) library][6] of Cisco IOS and IOS XE Software could let an unauthenticated, remote attacker trigger a reload of an affected device, resulting in a denial of service (DoS). The vulnerability is due to insufficient sanity checks on an internal data structure. An attacker could exploit this vulnerability by sending a sequence of malicious SIP messages to an affected device. An exploit could allow the attacker to cause a NULL pointer dereference, resulting in a crash of the _iosd_ process. This triggers a reload of the device, Cisco stated.
* A [vulnerability in the ingress packet-processing][7] function of Cisco IOS Software for Cisco Catalyst 4000 Series Switches could let an aggressor cause a denial of service (DoS). The vulnerability is due to improper resource allocation when processing TCP packets directed to the device on specific Cisco Catalyst 4000 switches. An attacker could exploit this vulnerability by sending crafted TCP streams to an affected device. A successful exploit could cause the affected device to run out of buffer resources, impairing operations of control-plane and management-plane protocols, resulting in a DoS condition. This vulnerability can be triggered only by traffic that is destined to an affected device and cannot be exploited using traffic that transits an affected device Cisco stated.
In addition to the warnings, Cisco also [issued an advisory][8] for users to deal with problems in its IOS and IOS XE  Layer 2 (L2) traceroute utility program.  The traceroute identifies the L2 path that a packet takes from a source device to a destination device.
Cisco said that by design, the L2 traceroute server does not require authentication, but it allows certain information about an affected device to be read, including Hostname, hardware model, configured interfaces, IP addresses and other details.  Reading this information from multiple switches in the network could allow an attacker to build a complete L2 topology map of that network.
Depending on whether the L2 traceroute feature is used in the environment and whether the Cisco IOS or IOS XE Software release supports the CLI commands to implement the respective option, Cisco said there are several ways to secure the L2 traceroute server: disable it, restrict access to it through infrastructure access control lists (iACLs), restrict access through control plane policing (CoPP), and upgrade to a software release that disables the server by default.
**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]**
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3441221/cisco-13-ios-ios-xe-security-flaws-you-should-patch-now.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-72547
[2]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
[3]: https://tools.cisco.com/security/center/softwarechecker.x
[4]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-ios-gos-auth
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-identd-dos
[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-sip-dos
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-cat4000-tcp-dos
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190925-l2-traceroute
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -1,51 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (MG Motor Announces Developer Program and Grant in India)
[#]: via: (https://opensourceforu.com/2019/09/mg-motor-announces-developer-program-and-grant-in-india/)
[#]: author: (Mukul Yudhveer Singh https://opensourceforu.com/author/mukul-kumar/)
MG Motor Announces Developer Program and Grant in India
======
[![][1]][2]
* _**Launched in partnership with Adobe, Cognizant, SAP, Airtel, TomTom and Unlimit**_
* _**The initiative gives developers the opportunity to build innovative mobility applications and experiences**_
![][3]MG Motor India has today announced the introduction of its MG Developer Program and Grant. Launched in collaboration with leading technology companies such as SAP, Cognizant, Adobe, Airtel, TomTom and Unlimit, the initiative is aimed at incentivizing Indian innovators and developers to build futuristic mobility applications and experiences. The program also brings in TiE Delhi NCR as the ecosystem partner.
Rajeev Chaba, president &amp; MD, MG Motor India said, “The automobile industry is currently witnessing sweeping transformations in the space of connected, electric and shared mobility. MG aims to take this revolution forward with its focus on attaining technological leadership in the automotive industry. We have partnered with leading tech giants to enable start-ups to build innovative applications that would enable unique experiences for customers across the entire automotive ecosystem. More partners are likely to join the program in due course.”
The company is encouraging developers to send in their ideas to the MG India Team. During the program, selected ideas will get access to resources from the likes of Airtel, SAP, Adobe, Unlimit and Cognizant.
**Grants ranging up to Rs 25 lakhs (2.5 million) for start-ups and innovators**
As part of the MG Developer Program &amp; Grant, MG Motor India will provide innovators with an unparalleled opportunity to secure mentorship and funding from industry leaders. Shortlisted ideas will receive specialized, high-level mentoring and networking opportunities to assist with the practical development of the solution, business plan and modelling, testing facilities, go-to-market strategy, etc. Winning ideas will also have access to a grant, the amount of which will be decided by the jury on a case-by-case basis.
The MG Developer Program &amp; Grant will initially focus on driving innovation in the following verticals: electric vehicles and components, batteries and management, charging infrastructure, connected mobility, voice recognition, AI &amp; ML, navigation technologies, customer experiences, car buying experiences, and autonomous vehicles.
“The MG Developer &amp; Grant Program is the latest in a series of initiatives as part of our commitment to innovation as a core organizational pillar. The program will ensure proper mentoring from over 20 industry leaders for start-ups, laying a foundation for them to excel in the future and trigger a stream of newer Internet Car use-cases that will, in turn, drive adoption of new technologies within the Indian automotive ecosystem. It has been our commitment in the market and Innovation is our key pillar,” added Chaba.
The program will award grants ranging from INR 5 lakhs to INR 25 lakhs. It will be open to external developers (including students, innovators, inventors, startups and other tech companies) as well as internal employee teams at MG Motor and its program partners.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/mg-motor-announces-developer-program-and-grant-in-india/
作者:[Mukul Yudhveer Singh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/mukul-kumar/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?resize=660%2C440&ssl=1 (MG Developer program)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?fit=660%2C440&ssl=1
[3]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/MG-Developer-program.png?resize=350%2C233&ssl=1

View File

@ -1,77 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora projects for Hacktoberfest)
[#]: via: (https://fedoramagazine.org/fedora-projects-for-hacktoberfest/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Fedora projects for Hacktoberfest
======
![][1]
It's October! That means it's time for the annual [Hacktoberfest][2] presented by DigitalOcean and DEV. Hacktoberfest is a month-long event that encourages contributions to open source software projects. Participants who [register][3] and submit at least four pull requests to GitHub-hosted repositories during the month of October will receive a free t-shirt.
In a recent Fedora Magazine article, I listed some areas where would-be contributors could [get started contributing to Fedora][4]. In this article, I highlight some specific projects that provide an opportunity to help Fedora while you participate in Hacktoberfest.
### Fedora infrastructure
* [Bodhi][5] — When a package maintainer builds a new version of a software package to fix bugs or add new features, it doesn't go out to users right away. First it spends time in the updates-testing repository where it can receive some real-world usage. Bodhi manages the flow of updates from the testing repository into the updates repository and provides a web interface for testers to provide feedback.
* [the-new-hotness][6] — This project listens to [release-monitoring.org][7] (which is also on [GitHub][8]) and opens a Bugzilla issue when a new upstream release is published. This allows package maintainers to be quickly informed of new upstream releases.
* [koschei][9] — Koschei enables continuous integration for Fedora packages. It runs a service that scratch-rebuilds RPM packages in a Koji instance when their build dependencies change or after some time elapses.
* [MirrorManager2][10] — Distributing Fedora packages to a global user base requires a lot of bandwidth. Just like developing Fedora, distributing Fedora is a collaborative effort. MirrorManager2 tracks the hundreds of public and private mirrors and routes each user to the “best” one.
* [fedora-messaging][11] — Actions within the Fedora community—from source code commits to participating in IRC meetings to…lots of things—generate messages that can be used to perform automated tasks or send notifications. fedora-messaging is the tool set that makes sending and receiving these messages possible.
* [fedocal][12] — When is that meeting? Which IRC channel was it in again? Fedocal is the calendar system used by teams in the Fedora community to coordinate meetings. Not only is it a good Hacktoberfest project, it's also [looking for a new maintainer][13] to adopt it.
In addition to the projects above, the Fedora Infrastructure team has highlighted [good Hacktoberfest issues][14] across all of their GitHub projects.
### Community projects
* [bodhi-rs][15] — This project provides Rust bindings for Bodhi.
* [koji-rs][16] — Koji is the system used to build Fedora packages. Koji-rs provides bindings for Rust applications.
* [fedora-rs][17] — This project provides a Rust library for interacting with Fedora services, much like the libraries that other languages such as Python already have.
* [feedback-pipeline][18] — One of the current Fedora Council objectives is [minimization][19]: work to reduce the installation and patching footprint of Fedora releases. feedback-pipeline is a tool developed by this team to generate reports of RPM sizes and dependencies.
### And many more
The projects above are only a small sample focused on software used to build Fedora. Many Fedora packages have upstreams hosted on GitHub—too many to list here. The best place to start is with a project that's important to you. Any contributions you make help improve the entire open source ecosystem. If you're looking for something in particular, the [Join Special Interest Group][20] can help. Happy hacking!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-projects-for-hacktoberfest/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/hacktoberfest-816x345.jpg
[2]: https://hacktoberfest.digitalocean.com/
[3]: https://hacktoberfest.digitalocean.com/register
[4]: https://fedoramagazine.org/how-to-contribute-to-fedora/
[5]: https://github.com/fedora-infra/bodhi
[6]: https://github.com/fedora-infra/the-new-hotness
[7]: https://release-monitoring.org/
[8]: https://github.com/release-monitoring/anitya
[9]: https://github.com/fedora-infra/koschei
[10]: https://github.com/fedora-infra/mirrormanager2
[11]: https://github.com/fedora-infra/fedora-messaging
[12]: https://github.com/fedora-infra/fedocal
[13]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/GH4N3HYJ4ARFRP666O6EQCHDIQMXVUJB/
[14]: https://github.com/orgs/fedora-infra/projects/4
[15]: https://github.com/ironthree/bodhi-rs
[16]: https://github.com/ironthree/koji-rs
[17]: https://github.com/ironthree/fedora-rs
[18]: https://github.com/minimization/feedback-pipeline
[19]: https://docs.fedoraproject.org/en-US/minimization/
[20]: https://fedoraproject.org/wiki/SIGs/Join

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (runningwater)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -126,7 +126,7 @@ via: https://opensourceforu.com/2019/09/the-protocols-that-help-things-to-commun
作者:[Sapna Panchal][a]
选题:[lujun9972][b]
译者:[runningwater](https://github.com/runningwater)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,112 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)
How DevOps professionals can become security champions
======
Breaking down silos and becoming a champion for security will help you,
your career, and your organization.
![A lock on the side of a building][1]
Security is a misunderstood element in DevOps. Some see it as outside of DevOps' purview, while others find it important (and overlooked) enough to recommend moving to [DevSecOps][2]. No matter your perspective on where it belongs, it's clear that security affects everyone.
Each year, the [statistics on hacking][3] become more alarming. For example, there's a hacker attack every 39 seconds, which can lead to stolen records, identities, and proprietary projects you're writing for your company. It can take months (and possibly forever) for your security team to discover the who, what, where, or when behind a hack.
What are operations professionals to do about these dire problems? I say it is time for us to become part of the solution by becoming security champions.
### Silos and turf wars
Over my years of working side-by-side with my local IT security (ITSEC) teams, I've noticed a great many things. A big one is that tension is very common between DevOps and security. This tension almost always stems from the security team's efforts to protect against vulnerabilities (e.g., by setting rules or disabling things) that interrupt DevOps' work and hinder their ability to deploy apps quickly.
You've seen it, I've seen it, everyone you meet in the field has at least one story about it. A small set of grudges turns into a burned bridge that takes time to repair—or the groups begin a small turf war, and the resulting silos make achieving DevOps unlikely.
### Get a new perspective
To try to break down these silos and end the turf wars, I talk to at least one person on each security team to learn about the ins and outs of daily security operations in our organization. I started doing this out of general curiosity, but I've continued because it always gives me a valuable new perspective. For example, I've learned that for every deployment that's stopped due to failed security, the ITSEC team is feverishly trying to patch 10 other problems it sees. Their brashness and quickness to react are due to the limited time they have to fix something before it becomes a large problem.
Consider the immense amount of knowledge it takes to find, analyze, and undo what has been done. Or to figure out what the DevOps team is doing—without background information—then replicate and test it. And to do all of this with a security team that is usually greatly understaffed.
This is the daily life of your security team, and your DevOps team is not seeing it. ITSEC's daily work can mean overtime hours and overwork to make sure that the company, its teams, and the proprietary work its teams are producing are secure.
### Ways to be a security champion
This is where being your own security champion can help. This means—for everything you work on—you must take a good, hard look at all the ways someone could log into it and what could be taken from it.
Help your security team help you. Introduce tools into your pipelines to integrate what you know will work with what they know will work. Start with small things, such as reading up on Common Vulnerabilities and Exposures (CVEs) and adding scanning functions to your [CI/CD][4] pipelines. For everything you build, there is an open source scanning tool, and adding small open source tools (such as the ones below) can go the extra mile in the long run; a minimal pipeline sketch follows the tool lists.
**Container scanning tools:**
* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]
**Code scanning tools:**
* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]
**Kubernetes security tools:**
* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
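To make "adding scanning functions to your pipelines" concrete, here is a minimal shell sketch using Anchore Engine's `anchore-cli` from the container list above. It assumes an Anchore Engine service is already running and reachable; the endpoint, credentials, and image name are placeholders to adapt to your environment:

```
# Point the CLI at a running Anchore Engine service (placeholder endpoint/credentials)
export ANCHORE_CLI_URL=http://anchore.example.com:8228/v1
export ANCHORE_CLI_USER=admin
export ANCHORE_CLI_PASS=changeme

# Queue the image for analysis and wait for the analysis to complete
anchore-cli image add docker.io/library/alpine:latest
anchore-cli image wait docker.io/library/alpine:latest

# List known OS-package vulnerabilities; a CI job could fail on any findings
anchore-cli image vuln docker.io/library/alpine:latest os
```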
### Keep your DevOps hat on
Learning about new technology and how to create new things with it is part of the job if you're in a DevOps-related role. Security is no different. Here's my list of ways to keep up to date on the security front while keeping your DevOps hat on.
* Read one article each week about something related to security in whatever you're working on.
* Look at the [CVE][15] website weekly to see what's new.
* Try doing a hackathon. Some companies do this once a month; check out the [Beginner Hack 1.0][16] site if yours doesn't and you'd like to learn more.
* Try to attend at least one security conference a year with a member of your security team to see things from their side.
### Be a champion for good
There are several reasons you should become your own security champion. The first and foremost is to further your knowledge and advance your career. The second reason is to help other teams, foster new relationships, and break down the silos that harm your organization. Creating friendships across your organization has multiple benefits, including setting a good example of bridging teams and encouraging people to work together. You will also foster sharing knowledge throughout the organization and provide everyone with a new lease on security and greater internal cooperation.
Overall, being a security champion will lead you to be a champion for good across your organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/devops-security-champions
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data center liquid-cooling to gain momentum)
[#]: via: (https://www.networkworld.com/article/3446027/data-center-liquid-cooling-to-gain-momentum.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Data center liquid-cooling to gain momentum
======
The serious number-crunching demands of AI, IoT and big data - and the heat they generate - may mean air cooling is on its way out.
artisteer / Getty Images
Concern over escalating energy costs is among the reasons liquid-cooling solutions could gain traction in the [data center][1].
Schneider Electric, a major energy-management specialist, this month announced refreshed impetus to a collaboration conceived in 2014 with [liquid-cooling specialist Iceotope][2]. Now, [technology solutions company Avnet has been brought into that collaboration][3].
The three companies will develop chassis-level immersive liquid cooling for data centers, Schneider Electric says in a [press release][5]. Liquid-cooling systems submerge server components in a dielectric fluid, as opposed to air-cooled systems, which circulate cooled ambient air.
One reason for the shift: “Compute-intensive applications like AI and [IoT][7] are driving the need for better chip performance,” Kevin Brown, CTO and SVP of Innovation, Secure Power, Schneider Electric, is quoted as saying.
“Liquid Cooling [is] more efficient and less costly for power-dense applications,” the company explains. That's in part because the use of Graphical Processing Units (GPUs) is replacing some traditional processing, and is gaining ground. GPUs are better suited to data-mining-type applications than traditional processors. They parallel-process and are now used extensively in artificial intelligence compute environments and processor-hungry analytics churning big data.
“This makes traditional data-center air-cooled architectures impractical, or costly and less efficient than liquid-cooled approaches.” The reasons liquid cooling may become the new go-to solution also relate to “space constraints, water usage restrictions and harsh IT environments,” [Schneider said in a white paper earlier this year][8]:
As chip density increases, and the resulting rack-space that is required to hold the gear decreases, the need for traditional air-based cooling-equipment space keeps going up. So even as greater computing density decreases the space the equipment occupies, the space required for air-cooling it increases. The heat created is so great with GPUs that it stops being practical to air-cool.
Additionally, as edge data centers become more important there's an advantage to using IT that can be placed anywhere. “As the demand for IT deployments in urban areas, high rise buildings, and at the Edge increase, the need for placement in constrained locations will increase,” the paper says. In such scenarios, not requiring space for hot and cold aisles would be an advantage.
Liquid cooling would allow for silent operation, too; there aren't any fans and pumps making disruptive noise.
Liquid cooling would also address restrictions on water usage that can affect the ability to use evaporative cooling and cooling towers to carry off heat generated by data centers. Direct-to-chip liquid-cooling systems of the kind the three companies want to concentrate their efforts on narrowly target the cooling at the server, not at the building level.
In harsh environments such as factories and [industrial IoT][9] deployments, heat and air quality can hinder air-cooling systems. Liquid-cooling systems can be self-contained in sealed units, thus being protected from dust, for example.
Interestingly, as serious computer gamers will know, liquid cooling isn't a new technology, [Wendy Torell points out in a Schneider blog post][10] pitching the technology. “It's been around for decades and has historically focused on mainframes, high-performance computing (HPC), and gaming applications,” she explains. “Demand for IoT, artificial intelligence, machine learning, big data analytics, and edge applications is once again bringing it into the limelight.”
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446027/data-center-liquid-cooling-to-gain-momentum.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: http://www.iceotope.com/about
[3]: https://www.avnet.com/wps/portal/us/about-avnet/overview/
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.prnewswire.com/news-releases/schneider-electric-announces-partnership-with-avnet-and-iceotope-to-develop-liquid-cooled-data-center-solutions-300929586.html
[6]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[7]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[8]: https://www.schneider-electric.us/en/download/search/liquid%20cooling/?langFilterDisabled=true
[9]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[10]: https://blog.se.com/datacenter/2019/07/11/not-just-about-chip-density-five-reasons-consider-liquid-cooling-data-center/
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Measuring the business value of open source communities)
[#]: via: (https://opensource.com/article/19/10/measuring-business-value-open-source)
[#]: author: (Jon Lawrence https://opensource.com/users/the3rdlaw)
Measuring the business value of open source communities
======
Corporate constituencies are interested in finding out the business
value of open source communities. Find out how to answer key questions
with the right metrics.
![Lots of people in a crowd.][1]
In _[Measuring the health of open source communities][2]_, I covered some of the key questions and metrics that we've explored as part of the [CHAOSS project][3] as they relate to project founders, maintainers, and contributors. In this article, we focus on open source corporate constituents (such as open source program offices, business risk and legal teams, human resources, and others) and end users.
Where the bulk of the metrics for core project teams are quantitative, for the remaining constituents our metrics must reflect a much broader range of interests and address many more qualitative measures. From the metrics collection standpoint, data collection for qualitative measures is far more manual and subjective, but it is nonetheless within the scope that CHAOSS hopes to address as the project matures.
While people on the business side of things do sometimes care about the metrics in use by the project itself, there are only two fundamental questions that corporate constituencies have. The first is about _value_: "Will this choice help our business make more money sooner?" The second is about _risk_: "Will this choice hurt our business's chances of making money?"
Those questions can come in many different iterations across disciplines, from human resources to legal counsel and executive offices. But, at the end of the day, having answers that are based on data can make open source engagement more efficient, effective, and less risky.
Once again, the information below is structured in a Goal-Question-Metric format:
* Open source program offices (OSPOs)
* As an OSPO leader, I care about prioritizing our resources toward healthy communities:
* How [active][4] is the community?
**Metric:** [Code development][5] - The number of commits and pull requests, review time for new code commits and pull requests, code reviews and merges, the number of accepted vs. rejected pull requests, and the frequency of new version releases.
**Metric:** [Issue resolution][6] - The number of new issues, closed issues, the ratio of new vs. closed issues, and the average open time per issue (see the sketch after these lists for one way to pull these counts).
**Metric:** Social - Social media mention counts, social media sentiment analysis, the activity of the community blog, and news releases (_future release_).
* What is the [value][7] of our contributions to the project? (This is an area in active development.)
**Metric:** Time value - Time saved for training developers on new technologies, and time saved maintaining custom development once the improvements are upstreamed.
**Metric:** Dollar value - How much would it have cost to maintain changes and custom solutions internally, versus contributing upstream and ensuring compatibility with future community releases?
* What is the value of contributions to the project by other contributors and organizations?
**Metric:** Time value - Time to market, new community-developed features released, and support for the project by the community versus the company.
**Metric:** Dollar value - How much would it cost to internally rebuild the features provided by the community, and what is the opportunity cost of lagging behind innovations in open source projects?
* Downstream value: How many other projects list our project as a dependency?
**Metric:** The value of the ecosystem that is around a project.
* How many forks of our project have there been?
**Metric:** Are core developers more active in the mainline or a fork?
**Metric:** Are the forks contributing back to the mainline, or developing in new directions?
* Engineering leadership
* As an approving architect, I care most about good design patterns that introduce a minimum of technical debt.
**Metric:** [Test Coverage][8] - What percentage of the code is tested?
**Metric:** What is the percentage of code undergoing code reviews?
**Metric:** Does the project follow [Core Infrastructure Initiative (CII) Best Practices][9]?
* As an engineering executive, I care most about minimizing time-to-market and bugs, and maximizing platform stability and reliability.
**Metric:** The defect resolution velocity.
**Metric:** The defect density.
**Metric:** The feature development velocity.
* I also want social proofs that give me a level of comfort.
**Metric:** Sentiment analysis of social media related to the project.
**Metric:** Count of white papers.
**Metric:** Code Stability - Project version numbers and the frequency of new releases.
There is also the issue of legal counsel. This goal statement is: "As legal counsel, I care most about minimizing our company's chances of getting sued." The question is: "What kind of license does the software have, and what obligations do we have under the license?"
The metrics involved here are:
* **Metric:** [License Count][10] - How many different licenses are declared in a given project?
* **Metric:** [License Declaration][11] - What kinds of licenses are declared in a given project?
* **Metric:** [License Coverage][12] - How much of a given codebase is covered by the declared license?
Lastly, there are further goals our project is considering in order to measure the impact of corporate open source policy as it relates to talent acquisition and retention. The goal for human resource managers is: "As an HR manager, I want to attract and retain the best talent I can." The questions and metrics are as follows:
* What impact do our open source policies have on talent acquisition?
**Metric:** Talent acquisition - Measure over time how many candidates report that it's important to them that they get to work with open source technologies.
* What impact do our open source policies have on talent retention?
**Metric:** Talent retention - Measure how much employee churn can be reduced because of people being able to work with or use open source technologies.
* What is the impact on training employees who can learn from engaging in open source projects?
**Metric:** Talent development - Measure over time the importance to employees of being able to use open source tech effectively.
* How does allowing employees to work in a community outside of the company impact job satisfaction?
**Metric:** Talent satisfaction - Measure over time the importance to employees of being able to contribute to open source tech.
**Source:** Internal surveys.
**Source:** Exit interviews. Did our policies around open source technologies at all influence your decision to leave?
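Many of the quantitative measures above can be approximated straight from a forge's API before any dedicated tooling is in place. As one hedged example of the issue-resolution counts, the GitHub search API reports totals for open and closed issues (the repository is an arbitrary example, and unauthenticated requests are rate-limited):

```
$ curl -s "https://api.github.com/search/issues?q=repo:chaoss/augur+type:issue+state:open" | jq .total_count
$ curl -s "https://api.github.com/search/issues?q=repo:chaoss/augur+type:issue+state:closed" | jq .total_count
```

Comparing the two numbers over time gives a rough version of the new-vs.-closed issue ratio called out under issue resolution.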
### Wrapping up
It is still the early days of building a platform for bringing together these disparate data sources. The CHAOSS core of [Augur][13] and [GrimoireLab][14] currently supports over two dozen sources, and I'm excited to see what lies ahead for this project.
As the CHAOSS frameworks mature, I'm optimistic that teams and projects that implement these types of measurement will be able to make better real-world decisions that result in healthier and more productive software development lifecycles.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/measuring-business-value-open-source
作者:[Jon Lawrence][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/the3rdlaw
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_community_1.png?itok=rT7EdN2m (Lots of people in a crowd.)
[2]: https://opensource.com/article/19/8/measure-project
[3]: https://github.com/chaoss/
[4]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/community_growth.md
[5]: https://github.com/chaoss/wg-evolution#metrics
[6]: https://github.com/chaoss/wg-evolution/blob/master/focus_areas/issue_resolution.md
[7]: https://github.com/chaoss/wg-value
[8]: https://chaoss.community/metric-test-coverage/
[9]: https://github.com/coreinfrastructure/best-practices-badge
[10]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Count.md
[11]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Declared.md
[12]: https://github.com/chaoss/wg-risk/blob/master/metrics/License_Coverage.md
[13]: https://github.com/chaoss/augur
[14]: https://github.com/chaoss/grimoirelab

View File

@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Pennsylvania school district tackles network modernization)
[#]: via: (https://www.networkworld.com/article/3445976/pennsylvania-school-district-tackles-network-modernization.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Pennsylvania school district tackles network modernization
======
NASD upgrades its campus core to be the foundation for digital learning.
Wenjie Dong / Getty Images
Success in business and education today starts with infrastructure modernization. In fact, my research has found that digitally forward organizations spend more than twice what their non-digital counterparts spend on evolving their IT infrastructure. However, most of the focus from IT has been on upgrading the application and compute infrastructure, with little thought given to a critical ingredient: the network. Organizations can only be as agile as the least agile component of their infrastructure, and for most companies, that's the network.
### Manual processes plague network reliability
Legacy networks have outlived their useful life. The existing three-plus-tier architecture was designed for an era when network traffic was considered "best effort" (there was no way to guarantee performance or reserve bandwidth) and networks carried only non-mission-critical applications. Employees and educators ran applications locally, and the majority of critical data resided on workstations.
Today, everything has changed. Applications have moved to the cloud, workers are constantly on the go, and companies are connecting things to business networks at an unprecedented rate. One could argue that, for most organizations, the network is the business. Consider what's happened in our personal lives. People stream content, communicate using video, shop online, and rely on the network for almost every aspect of their lives.
The same thing is happening to digital organizations. Companies today must support the needs of applications that are becoming increasingly dynamic and distributed. An unavailable or poorly performing network means the organization comes to a screeching halt.
Yet network engineering teams working with legacy networks can't keep up with demands; the rigid and manual processes required to hard-code configuration are slow and error-prone. In fact, ZK Research found that the largest cause of downtime with legacy networks is human error.
Given the importance of the network, this kind of madness must stop. Businesses will never be able to harness the potential of digital transformation without modernizing the network.
What's required is a network that is more dynamic and intelligent, one that simplifies operations via automation. This can lead to better control and faster error detection, diagnosis and resolution. These buzzwords have been tossed around by many vendors and customers as the vision of where we are headed, yet it's been difficult to find actual customer deployments.
### NASD modernizes wired and wireless network to support digital curriculum
The Nazareth Area School District (NASD) recently went through a network modernization project.
The Eastern Pennsylvania school district, which has roughly 4,800 students, has a bold vision: to inspire students to be innovative, collaborative and constructive members of the community who embrace the tenets of diversity, value, education and honesty. NASD aims to accomplish its vision by helping students build a strong work ethic and sense of responsibility and by challenging them to be leaders and good global citizens.
To support its goals, NASD set out to completely revamp the way it teaches. The district embraced a number of modern technologies that would foster immersive learning and collaboration.
There's a heavy emphasis on science, technology, engineering, arts and mathematics (STEAM), which drives more focus on coding, robotics, and virtual and augmented reality. For example, the teachers are using Google Expeditions VR Classroom kits to integrate VR into the classroom. In addition, NASD has converted many of its classrooms into “affinity rooms” where students can work together on different projects in the areas of VR, AR, robotics, stop motion photography, and other advanced technologies.
NASD understood that modernizing education requires a modernized network. If new tools and applications don't perform as expected, it can hurt the learning process as students sit around waiting while network problems are solved. The district knew it needed to upgrade its network to one that was more intelligent, reliable and easier to diagnose.
NASD chose Aruba, a Hewlett Packard Enterprise company, to be its wired and wireless networking supplier.
In my opinion, the decision to upgrade the wired and wireless networks at the same time is a smart one. Many organizations put in a new Wi-Fi network only to find the wired backbone cant support the traffic or doesnt have the necessary reliability.
The high-availability switches are running the new ArubaOS-CX operating system designed for the digital transformation era. The network devices are configured through a centralized graphical interface and not a command line interface (CLI), and they have an onboard Network Analytics Engine to reduce the complexity of running the network.
NASD selected two Aruba 8320 switches to be the core of its network, to provide “utility-grade networking” that is always on and always available, much like power.
“By running two switches in tandem, we would gain a fully redundant network that made failovers, whether planned or unplanned, completely undetectable by our users,” said Mike Fahey, senior application and network administrator at NASD.
### Wanted: utility-grade Wi-Fi
Utility-grade Wi-Fi was a must for NASD, as almost all of the new learning tools connect via Wi-Fi only. The school system had been using two Wi-Fi vendors, neither of which performed well, and both required long troubleshooting periods.
The Nazareth IT staff initially replaced the most problematic APs with Aruba APs. As this happened, Michael Uelses, director of IT, said that the teachers noticed a marked difference in Wi-Fi performance. Now, the entire school has standardized on Arubas gigabit Wi-Fi and has expanded it to outdoor locations. This has enabled the school to expand its security strategy and new emergency preparedness application to include playgrounds, parking lots and other outdoor areas where Wi-Fi previously did not reach.
Supporting gigabit Wi-Fi required upgrading the backbone network to 10 Gigabit, which the Aruba 8320 switches support. The switches can also be upgraded to high speeds, up to 100 Gigabit, if the need arises. NASD is planning to expand the use of bandwidth-hungry apps such as VR to immerse students in subjects including biology and engineering. The option to upgrade the switches gives NASD the confidence it has made the right network choices for the future.
What NASD is doing should be a message to all schools. Digital tools are here to stay and can change the way students learn. Success with digital education requires a rock-solid wired and wireless network to deliver utility-like services that are always on so students can always be learning.
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3445976/pennsylvania-school-district-tackles-network-modernization.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (To space and beyond with open source)
[#]: via: (https://opensource.com/article/19/10/open-source-space-exploration)
[#]: author: (Jaouhari Youssef https://opensource.com/users/jaouhari)
To space and beyond with open source
======
Open source projects are helping to satisfy our curiosity about what
lies far beyond Earth's atmosphere.
![Person looking up at the stars][1]
Carl Sagan once said, "The universe is a pretty big place. If it's just us, seems like an awful waste of space." In that vast desert of seeming nothingness hides some of the most mysterious and beautiful creations humankind ever has—or ever will—witness.
Our ancient ancestors looked up into the night sky and dreamed about space, just as we do today. Starting with simple, naked-eye observations of the sky and progressing to create [space telescopes][2] that uncover far reaches of the universe, we've come a long way toward understanding and redefining the concepts of time, space, and matter. Our exploration has provided some answers to humanity's enduring questions about the existence of extraterrestrial life, about the finite or infinite nature and origin of the universe, and so much more. And we still have so much to discover.
### Curiosity, a crucial component for space exploration
The Cambridge Dictionary defines [curiosity][3] as "an eager wish to know or learn about something." It's curiosity that fuels our drive to acquire knowledge about outer space, but what drives our curiosity, our "eager wish," in the first place?
I believe that our curiosity is driven by the desire to escape the unpleasant feeling of uncertainty that is triggered by acknowledging our lack of knowledge. The intrinsic reward that comes from escaping uncertainty pushes us to find a correct (or at least a less wrong) answer to whatever question is at hand.
If we want space discovery to advance at a faster pace, we need more people to become aware of the rewards that are waiting for them when they make the effort and discover answers for their questions about the universe. Space discovery is admittedly not an easy task, because finding correct answers requires following rigorous methods on a long-term scale.
Luckily, open source initiatives are emerging that make it easier for people to get started exploring and enjoying the beauty of outer space.
### Two open source initiatives for space discovery
#### OpenSpace Project
One of the most beautiful tools for exploring space is [OpenSpace][4], an open source visualization tool of the entire known universe. It is an incredible way to visualize the environment of other planets, such as Mars and Jupiter, galaxies, and more.
![The Moon visualized by the OpenSpace project][5]
To enjoy a smooth experience from the OpenSpace simulation (e.g., a minimum 30fps), you need a powerful GPU; check the [GitHub repository][6] for more information.
#### Libre Space Foundation
The [Libre Space Foundation][7]'s mission is "to promote, advance, and develop libre (free and open source) technologies and knowledge for space." Among other things, the project is working to create an open source network of satellite ground stations that can communicate with satellites, spaceships, and space stations. It also supports the [UPSat project][8], which aspires to be the first completely open source satellite launched.
### Advancing the human species
I believe that the efforts made by these open source initiatives are contributing to the advancement of the human species in space. By increasing our interest in space, we are creating opportunities to upgrade our civilization's technological level, moving further up on the [Kardashev scale][9] and possibly becoming a multi-planetary species. Maybe one day, we will build a [Dyson sphere][10] around the sun to capture energy emissions, thereby harnessing an energy resource that exceeds any found on Earth and opening up a whole new world of possibilities.
### Satisfy your curiosity
Our solar system is only a tiny dot swimming in a universe of gems, and the outer space environment has never stopped amazing and intriguing us.
If your curiosity is piqued and you want to learn more about outer space, check out [Kurzgesagt's][11] YouTube videos, which cover topics ranging from the origin of the universe to the strangest stars in a beautiful and concise manner.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/open-source-space-exploration
作者:[Jaouhari Youssef][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jaouhari
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/space_stars_cosmos_person.jpg?itok=XUtz_LyY (Person looking up at the stars)
[2]: https://en.wikipedia.org/wiki/List_of_space_telescopes
[3]: https://dictionary.cambridge.org/us/dictionary/english/curiosity
[4]: https://www.openspaceproject.com/
[5]: https://opensource.com/sites/default/files/uploads/moon.png (The Moon visualized by the OpenSpace project)
[6]: https://github.com/OpenSpace/OpenSpace
[7]: https://libre.space/
[8]: https://upsat.gr/
[9]: https://en.wikipedia.org/wiki/Kardashev_scale
[10]: https://en.wikipedia.org/wiki/Dyson_sphere
[11]: https://kurzgesagt.org/

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Project Trident Ditches BSD for Linux)
[#]: via: (https://itsfoss.com/bsd-project-trident-linux/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Project Trident Ditches BSD for Linux
======
Recently a BSD distribution announced that it was going to rebase on Linux. Yep, you heard me correctly. Project Trident is moving to Void Linux.
### What is Going on with Project Trident?
Recently, Project Trident [announced][1] that they had been working behind the scenes to move away from FreeBSD. This is quite a surprising move (and an unprecedented one).
According to a [later post][2], the move was motivated by long-standing issues with FreeBSD. These issues include “hardware compatibility, communications standards, or package availability continue to limit Project Trident users”. According to a conversation on [Telegram][3], FreeBSD had only just updated its build of the Telegram client, and it was nine releases behind everyone else.
The lead dev of Project Trident, [Ken Moore][4], is also the main developer of the Lumina Desktop. The [Lumina Desktop][5] has been on hold for a while because the [Project Trident][6] team had to do so much work just to keep their packages updated. (Once they complete the transition to Void Linux, Ken will start working on Lumina again.)
After much searching and testing, the Project Trident team decided to use [Void Linux][7] as their new base.
According to the Project Trident team, the move to Void Linux will have the [following benefits][2]:
* Better GPU support
* Better sound card and streaming support
* Better wireless support
* Bluetooth support for the first time
* Up to date versions of applications
* Faster boot times
* Hybrid EFI/Legacy installation and boot support
### Moving Plans
![][8]
Project Trident currently has two different versions available: Trident-stable and Trident-release. Trident-stable is based on FreeBSD 12 and will continue to get updates until January of 2020 with the ports repo being deleted in April of 2020. On the other hand, Trident-release (which is based on FreeBSD 13) will receive no further updates. That ports repo will be deleted in January of 2020.
The first Void Linux-based releases should be available in January of 2020. Ken said that they might issue an alpha iso or two to show off their progress, but they would be for testing purposes only.
Currently, Ken said that they are working to port all of their “in-house utilities over to work natively on Void Linux”. Void Linux does not support ZFS-on-root, which is a big part of the BSDs. However, Project Trident is planning to use their knowledge of ZFS to add support for it to Void.
There will not be a migration path from the FreeBSD-based version to the Void-based version. If you are currently using Project Trident, you will need to backup your `/home/*` directory before performing a clean install of the new version.
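Since there is no migration path, the backup step deserves care. A minimal example, assuming an external drive mounted at `/mnt/backup` (adjust the paths for your setup):

```
# Copy home directories, preserving permissions, ownership and timestamps
$ rsync -aHAX /home/ /mnt/backup/home/

# Or create a compressed archive instead
$ tar -czpf /mnt/backup/home-backup.tar.gz /home
```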
### Final Thoughts
I'm looking forward to trying out the new Void Linux-based Project Trident. I have installed and used Void Linux in the past. I have also tried out [TrueOS][9] (the precursor of Project Trident). However, I could never get Project Trident to work on my laptop.
When I was using Void Linux, I ran into two main issues: installing a desktop environment was a pain, and the GUI package manager wasn't that great. Project Trident plans to address these issues. Their original goal was to find an operating system that didn't come with a desktop environment by default, so that their distro could add desktop support out-of-the-box. They won't be able to port the AppCafe package manager to Void because it is a part of the TrueOS SysAdm utility. They do plan to “develop a new graphical front-end to the XBPS package manager for Void Linux”.
Interestingly, Void Linux was created by a former NetBSD developer. I asked Ken if that fact influenced their decision. He said, “Actually none! I liked the way that Void Linux was set up and that most/all of the utilities were either MIT or BSD licensed, but I never guessed that it was created by a former NetBSD developer. That definitely helps to explain why Void Linux “feels” more comfortable to me since I have been using FreeBSD exclusively for the last 7 or more years.”
I've seen some people on the web speaking disparagingly of the move to Void Linux. They mentioned that the name changes (from PC-BSD to TrueOS to Project Trident) and the changes in architecture (from FreeBSD to TrueOS/FreeBSD to Void Linux) show that the developers don't know what they are doing. On the other hand, I believe that Project Trident has finally found its niche where it will be able to grow and blossom. I will be watching the future of Project Trident with much anticipation. You will probably be reading a review of the new version when it is released.
Have you ever used Project Trident? What is your favorite BSD? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][10].
--------------------------------------------------------------------------------
via: https://itsfoss.com/bsd-project-trident-linux/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://project-trident.org/post/train_changes/
[2]: https://project-trident.org/post/os_migration/
[3]: https://t.me/ProjectTrident
[4]: https://github.com/beanpole135
[5]: https://lumina-desktop.org/
[6]: https://itsfoss.com/project-trident-interview/
[7]: https://voidlinux.org/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/10/bsd-linux.jpg?resize=800%2C450&ssl=1
[9]: https://itsfoss.com/trueos-bsd-review/
[10]: https://reddit.com/r/linuxusersgroup

View File

@ -1,3 +1,4 @@
luming translating
23 open source audio-visual production tools
======

View File

@ -1,158 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is a Java constructor?)
[#]: via: (https://opensource.com/article/19/6/what-java-constructor)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
What is a Java constructor?
======
Constructors are powerful components of programming. Use them to unlock
the full potential of Java.
![][1]
Java is (disputably) the undisputed heavyweight in open source, cross-platform programming. While there are many [great][2] [cross-platform][2] [frameworks][3], few are as unified and direct as [Java][4].
Of course, Java is also a pretty complex language with subtleties and conventions all its own. One of the most common questions about Java relates to **constructors**: What are they and what are they used for?
Put succinctly: a constructor is an action performed upon the creation of a new **object** in Java. When your Java application creates an instance of a class you have written, it checks for a constructor. If a constructor exists, Java runs the code in the constructor while creating the instance. That's a lot of technical terms crammed into a few sentences, but it becomes clearer when you see it in action, so make sure you have [Java installed][5] and get ready for a demo.
### Life without constructors
If you're writing Java code, you're already using constructors, even though you may not know it. All classes in Java have a constructor because even if you haven't created one, Java does it for you when the code is compiled. For the sake of demonstration, though, ignore the hidden constructor that Java provides (because a default constructor adds no extra features), and take a look at life without an explicit constructor.
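To make the hidden constructor concrete: when you declare no constructor at all, the compiler generates a public, no-argument constructor that does nothing except invoke the superclass constructor. Declaring a class like the hypothetical one below with no constructor is therefore equivalent to writing:

```
public class Example {
    // What the compiler generates when no constructor is declared:
    public Example() {
        super(); // implicit call to the superclass (Object) constructor
    }
}
```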
Suppose you're writing a simple Java dice-roller application because you want to produce a pseudo-random number for a game.
First, you might create your dice class to represent a physical die. Knowing that you play a lot of [Dungeons and Dragons][6], you decide to create a 20-sided die. In this sample code, the variable **dice** is the integer 20, representing the maximum possible die roll (a 20-sided die cannot roll more than 20). The variable **roll** is a placeholder for what will eventually be a random number, and **rand** serves as the random-number generator.
```
import java.util.Random;

public class DiceRoller {
    private int dice = 20;
    private int roll;
    private Random rand = new Random();
```
Next, create a function in the **DiceRoller** class to execute the steps the computer must take to emulate a die roll: Take an integer from **rand** and assign it to the **roll** variable, add 1 to account for the fact that Java starts counting at 0 but a 20-sided die has no 0 value, then print the results.
```
public void Roller() {
    roll = rand.nextInt(dice);
    roll += 1;
    System.out.println(roll);
}
```
Finally, spawn an instance of the **DiceRoller** class and invoke its primary function, **Roller**:
```
// main loop
public static void main (String[] args) {
    System.out.printf("You rolled a ");
    DiceRoller App = new DiceRoller();
    App.Roller();
}
}
```
As long as you have a Java development environment installed (such as [OpenJDK][10]), you can run your application from a terminal:
```
$ java dice.java
You rolled a 12
```
In this example, there is no explicit constructor. It's a perfectly valid and legal Java application, but it's a little limited. For instance, if you set your game of Dungeons and Dragons aside for the evening to play some Yahtzee, you would need 6-sided dice. In this simple example, it wouldn't be that much trouble to change the code, but that's not a realistic option in complex code. One way you could solve this problem is with a constructor.
### Constructors in action
The **DiceRoller** class in this example project represents a virtual dice factory: When it's called, it creates a virtual die that is then "rolled." However, by writing a custom constructor, you can make your Dice Roller application ask what kind of die you'd like to emulate.
Most of the code is the same, with the exception of a constructor accepting some number of sides. This number doesn't exist yet, but it will be created later.
```
import java.util.Random;

public class DiceRoller {
    private int dice;
    private int roll;
    private Random rand = new Random();

    // constructor
    public DiceRoller(int sides) {
        dice = sides;
    }
```
The function emulating a roll remains unchanged:
```
public void Roller() {
    roll = rand.nextInt(dice);
    roll += 1;
    System.out.println(roll);
}
```
The main block of code feeds whatever arguments you provide when running the application. Were this a complex application, you would parse the arguments carefully and check for unexpected results, but for this sample, the only precaution taken is converting the argument string to an integer type:
```
public static void main (String[] args) {
    System.out.printf("You rolled a ");
    DiceRoller App = new DiceRoller( Integer.parseInt(args[0]) );
    App.Roller();
}
}
```
Launch the application and provide the number of sides you want your die to have:
```
$ java dice.java 20
You rolled a 10
$ java dice.java 6
You rolled a 2
$ java dice.java 100
You rolled a 44
```
The constructor has accepted your input, so when the class instance is created, it is created with the **sides** variable set to whatever number the user dictates.
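As a side note (an addition for illustration, not part of the original listing), Java also lets you overload constructors, so the class could keep a sensible default while still accepting a custom size. A minimal sketch of that idea:

```
import java.util.Random;

public class DiceRoller {
    private int dice;
    private int roll;
    private Random rand = new Random();

    // no-argument constructor: assume a 20-sided die by default
    public DiceRoller() {
        this(20); // delegate to the one-argument constructor
    }

    // one-argument constructor: set the number of sides
    public DiceRoller(int sides) {
        dice = sides;
    }
}
```

With this in place, `new DiceRoller()` and `new DiceRoller(6)` would both work.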
Constructors are powerful components of programming. Practice using them to unlock the full potential of Java.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/what-java-constructor
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth/users/ashleykoree
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/17/4/pyqt-versus-wxpython
[4]: https://opensource.com/resources/java
[5]: https://openjdk.java.net/install/index.html
[6]: https://opensource.com/article/19/5/free-rpg-day
[10]: https://openjdk.java.net/

View File

@ -1,267 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure PostgreSQL on Ubuntu)
[#]: via: (https://itsfoss.com/install-postgresql-ubuntu/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Install and Configure PostgreSQL on Ubuntu
======
_**In this tutorial, you'll learn how to install and use the open source database PostgreSQL on Ubuntu Linux.**_
[PostgreSQL][1] (or Postgres) is a powerful, free and open-source relational database management system ([RDBMS][2]) that has a strong reputation for reliability, feature robustness, and performance. It is designed to handle various tasks, of any size. It is cross-platform, and the default database for [macOS Server][3].
PostgreSQL might just be the right tool for you if you're a fan of a simple-to-use SQL database manager. It supports SQL standards and offers additional features, while also being heavily extendable: the user can add data types, functions, and much more.
Earlier I discussed [installing MySQL on Ubuntu][4]. In this article, I'll show you how to install and configure PostgreSQL, so that you are ready to use it to suit whatever your needs may be.
![][5]
### Installing PostgreSQL on Ubuntu
PostgreSQL is available in Ubuntu main repository. However, like many other development tools, it may not be the latest version.
First check the PostgreSQL version available in [Ubuntu repositories][6] using this [apt command][7] in the terminal:
```
apt show postgresql
```
In my Ubuntu 18.04, it showed that the available version of PostgreSQL is version 10 (10+190 means version 10) whereas PostgreSQL version 11 is already released.
```
Package: postgresql
Version: 10+190
Priority: optional
Section: database
Source: postgresql-common (190)
Origin: Ubuntu
```
Based on this information, you can make up your mind whether you want to install the version available from Ubuntu or get the latest released version of PostgreSQL.
I'll show you both methods.
#### Method 1: Install PostgreSQL from Ubuntu repositories
In the terminal, use the following command to install PostgreSQL:
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
Enter your password when asked, and it should be installed in a few seconds or minutes depending on your internet speed. Speaking of which, feel free to check the various [network bandwidth monitoring tools in Ubuntu][8].
What is postgresql-contrib?
The postgresql-contrib (or contrib) package consists of some additional utilities and functionality that are not part of the core PostgreSQL package. In most cases, it's good to have the contrib package installed along with the PostgreSQL core.
#### Method 2: Installing the latest version 11 of PostgreSQL in Ubuntu
To install PostgreSQL 11, you need to add the official PostgreSQL repository to your sources.list, add its certificate and then install it from there.
Don't worry, it's not complicated. Just follow these steps.
Add the GPG key first:
```
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
```
Now add the repository with the below command. If you are using Linux Mint, you'll have to manually replace `lsb_release -cs` with the Ubuntu version your Mint release is based on.
```
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
```
Everything is ready now. Install PostgreSQL with the following commands:
```
sudo apt update
sudo apt install postgresql postgresql-contrib
```
PostgreSQL GUI application
You may also install a GUI application (pgAdmin) for managing PostgreSQL databases:
```
sudo apt install pgadmin4
```
### Configuring PostgreSQL
You can check if **PostgreSQL** is running by executing:
```
service postgresql status
```
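On Ubuntu releases that use systemd, you can also manage the service with `systemctl` (an equivalent alternative, not from the original article; `postgresql` is the standard unit name installed by the package):

```
sudo systemctl status postgresql
sudo systemctl restart postgresql
```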
Via the **service** command you can also **start**, **stop** or **restart** **postgresql**. Typing in **service postgresql** and pressing **Enter** should output all options. Now, onto the users.
By default, PostgreSQL creates a special user postgres that has all rights. To actually use PostgreSQL, you must first log in to that account:
```
sudo su postgres
```
Your prompt should change to something similar to:
```
postgres@ubuntu:/home/ubuntu$
```
Now, run the **PostgreSQL Shell** with the utility **psql**:
```
psql
```
You should be prompted with:
```
postgres=#
```
You can type in **\q** to **quit** and **\?** for **help**.
To see all existing databases, enter:
```
\l
```
The output will look similar to this (Hit the key **q** to exit this view):
![PostgreSQL Tables][10]
With **\du** you can display the **PostgreSQL users**:
![PostgreSQLUsers][11]
You can change the password of any user (including **postgres**) with:
```
ALTER USER postgres WITH PASSWORD 'my_password';
```
**Note:** _Replace **postgres** with the name of the user and **my_password** with the desired password._ Also, don't forget the **;** (**semicolon**) after every statement.
It is recommended that you create another user (it is bad practice to use the default **postgres** user). To do so, use the command:
```
CREATE USER my_user WITH PASSWORD 'my_password';
```
If you run **\du**, you will see, however, that **my_user** has no attributes yet. Let's add **Superuser** to it:
```
ALTER USER my_user WITH SUPERUSER;
```
You can **remove users** with:
```
DROP USER my_user;
```
To **log in** as another user, quit the prompt (**\q**) and then use the command:
```
psql -U my_user
```
You can connect directly to a database with the **-d** flag:
```
psql -U my_user -d my_db
```
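If the database you want to connect to does not exist yet, you can create one for the new user from the psql prompt. A minimal sketch, using the hypothetical names **my_user** and **my_db** from above:

```
CREATE DATABASE my_db OWNER my_user;
GRANT ALL PRIVILEGES ON DATABASE my_db TO my_user;
```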
You should name the PostgreSQL user the same as an existing system user. For example, my user is **ubuntu**. To log in, from the terminal I use:
```
psql -U ubuntu -d postgres
```
**Note:** _You must specify a database (by default it will try connecting you to the database named the same as the user you are logged in as)._
If you get the error:
```
psql: FATAL: Peer authentication failed for user "my_user"
```
Make sure you are logging in as the correct user, and edit **/etc/postgresql/11/main/pg_hba.conf** with administrator rights:
```
sudo vim /etc/postgresql/11/main/pg_hba.conf
```
**Note:** _Replace **11** with your version (e.g. **10**)._
Here, replace the line:
```
local all postgres peer
```
With:
```
local all postgres md5
```
Then restart **PostgreSQL**:
```
sudo service postgresql restart
```
Using **PostgreSQL** is the same as using any other **SQL**-type database. I won't go into the specific commands, since this article is about getting you started with a working setup. However, here is a [very useful gist][12] to reference! Also, the man page (**man psql**) and the [documentation][13] are very helpful.
### Wrapping Up
Reading this article has hopefully guided you through the process of installing and preparing PostgreSQL on an Ubuntu system. If you are new to SQL, you should read this article to know the [basic SQL commands][15]:
[Basic SQL Commands][15]
If you have any issues or questions, please feel free to ask in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-postgresql-ubuntu/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://www.postgresql.org/
[2]: https://www.codecademy.com/articles/what-is-rdbms-sql
[3]: https://www.apple.com/in/macos/server/
[4]: https://itsfoss.com/install-mysql-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-postgresql-ubuntu.png?resize=800%2C450&ssl=1
[6]: https://itsfoss.com/ubuntu-repositories/
[7]: https://itsfoss.com/apt-command-guide/
[8]: https://itsfoss.com/network-speed-monitor-linux/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_tables.png?fit=800%2C303&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/postgresql_users.png?fit=800%2C244&ssl=1
[12]: https://gist.github.com/Kartones/dd3ff5ec5ea238d4c546
[13]: https://www.postgresql.org/docs/manuals/
[15]: https://itsfoss.com/basic-sql-commands/

View File

@ -1,191 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install Linux on Intel NUC)
[#]: via: (https://itsfoss.com/install-linux-on-intel-nuc/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Install Linux on Intel NUC
======
Last week, I got myself an [Intel NUC][1]. Though it is a tiny device, it is equivalent to a full-fledged desktop CPU. Most of the [Linux-based mini PCs][2] are actually built on top of Intel NUC devices.
I got the barebone NUC with 8th generation Core i3 processor. Barebone means that the device has no RAM, no hard disk and obviously, no operating system. I added an [8GB RAM from Crucial][3] (around $33) and a [240 GB Western Digital SSD][4] (around $45).
Altogether, I had a desktop PC ready for under $400. I already have a screen and a keyboard-mouse pair, so I am not counting them in the expense.
![A brand new Intel NUC NUC8i3BEH at my desk with Raspberry Pi 4 lurking behind][5]
The main reason why I got the Intel NUC is that I want to test and review various Linux distributions on real hardware. I have a [Raspberry Pi 4][6] which works as an entry-level desktop, but it's an [ARM][7] device and thus there are only a handful of Linux distributions available for the Raspberry Pi.
_The Amazon links in the article are affiliate links. Please read our [affiliate policy][8]._
### Installing Linux on Intel NUC
I started with the Ubuntu 18.04 LTS version because that's what I had available at the moment. You can follow this tutorial for other distributions as well. The steps should remain the same at least up to the partitioning step, which is the most important one in the entire procedure.
#### Step 1: Create a live Linux USB
Download Ubuntu 18.04 from its website. Use another computer to [create a live Ubuntu USB][9]. You can use a tool like [Rufus][10] or [Etcher][11]. On Ubuntu, you can use the default Startup Disk Creator tool.
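If you prefer the command line, `dd` can also write the image (a generic sketch, not from the original article; the ISO filename is illustrative, and /dev/sdX must be replaced with your actual USB device, double-checked with `lsblk`, since dd overwrites the target):

```
sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
sync
```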
#### Step 2: Make sure the boot order is correct
Insert your USB and power on the NUC. As soon as you see "Intel NUC" written on the screen, press F2 to go to the BIOS settings.
![BIOS Settings in Intel NUC][12]
In here, just make sure that boot order is set to boot from USB first. If not, change the boot order.
If you had to make any changes, press F10 to save and exit. Else, use Esc to exit the BIOS.
#### Step 3: Making the correct partition to install Linux
Now when it boots again, you'll see the familiar Grub screen that allows you to try Ubuntu live or install it. Choose to install it.
The first few installation steps are simple: you choose the keyboard layout, the network connection (if any), and go through other simple steps.
![Choose the keyboard layout while installing Ubuntu Linux][14]
You may go with the normal installation that has a handful of useful applications installed by default.
![][15]
The interesting screen comes next. You have two options:
* **Erase disk and install Ubuntu**: Simplest option that will install Ubuntu on the entire disk. If you want to use only one operating system on the Intel NUC, choose this option and Ubuntu will take care of the rest.
  * **Something Else**: This is the advanced option if you want to take control of things. In my case, I want to install multiple Linux distributions on the same SSD, so I am opting for this advanced option.
![][16]
_**If you opt for “Erase disk and install Ubuntu”, click continue and go to the step 4.**_
If you are going with the advanced option, follow the rest of the step 3.
Select the SSD disk and click on New Partition Table.
![][17]
It will show you a warning. Just hit Continue.
![][18]
Now you'll see free space the size of your SSD disk. My idea is to create an EFI System Partition for the EFI boot loader, a root partition and a home partition. I am not creating a [swap partition][19]. Ubuntu creates a swap file on its own, and if the need arises, I can extend the swap by creating additional swap files.
I'll leave almost 200 GB of free space on the disk so that I can install other Linux distributions here. You can utilize all of it for your home partition. Keeping separate root and home partitions helps you save your data when you want to reinstall the system.
Select the free space and click on the plus sign to add a partition.
![][20]
Usually 100 MB is sufficient for the EFI, but some distributions may need more space, so I am going with a 500 MB EFI partition.
![][21]
Next, I am using 20 GB of root space. If you are going to use only one distribution, you can increase it to 40 GB easily.
Root is where the system files are kept. Your program cache and installed applications keep some files under the root directory. I recommend [reading about the Linux filesystem hierarchy][22] to get more knowledge on this topic.
Provide the size, choose Ext4 file system and use / as the mount point.
![][24]
Next is to create the home partition. Again, if you want to use only one Linux distribution, go for the remaining free space. Otherwise, choose a suitable disk space for the home partition.
Home is where your personal documents, pictures, music, download and other files are stored.
![][25]
Now that you have created EFI, root and home partitions, you are ready to install Ubuntu Linux. Hit the Install Now button.
![][26]
It will give you a warning about the new changes being written to the disk. Hit continue.
![][27]
#### Step 4: Installing Ubuntu Linux
Things are pretty straightforward from here onward. Choose your time zone right now or change it later.
![][28]
On the next screen, choose a username, hostname and the password.
![][29]
It's a wait-and-watch game for the next 7-8 minutes.
![][30]
Once the installation is over, you'll be prompted for a restart.
![][31]
When you restart, you should remove the live USB, otherwise you'll boot into the installation media again.
That's all you need to do to install Linux on an Intel NUC device. Quite frankly, you can use the same procedure on any other system.
**Intel NUC and Linux: how do you use it?**
I am loving the Intel NUC. It doesn't take up space on the desk, and yet it is powerful enough to replace a regular bulky desktop CPU. You can easily upgrade it to 32 GB of RAM. You can install two SSDs on it. Altogether, it provides some scope for configuration and upgrades.
If you are looking to buy a desktop computer, I highly recommend [Intel NUC][1] mini PC. If you are not comfortable installing the OS on your own, you can [buy one of the Linux-based mini PCs][2].
Do you own an Intel NUC? How's your experience with it? Do you have any tips to share with us? Do leave a comment below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-linux-on-intel-nuc/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (Intel NUC)
[2]: https://itsfoss.com/linux-based-mini-pc/
[3]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB RAM from Crucial)
[4]: https://www.amazon.com/Western-Digital-240GB-Internal-WDS240G1G0B/dp/B01M9B2VB7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M9B2VB7 (240 GB Western Digital SSD)
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/intel-nuc.jpg?resize=800%2C600&ssl=1
[6]: https://itsfoss.com/raspberry-pi-4/
[7]: https://en.wikipedia.org/wiki/ARM_architecture
[8]: https://itsfoss.com/affiliate-policy/
[9]: https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/
[10]: https://rufus.ie/
[11]: https://www.balena.io/etcher/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/boot-screen-nuc.jpg?ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-1_tutorial.jpg?ssl=1
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-2_tutorial.jpg?ssl=1
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-3_tutorial.jpg?ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-4_tutorial.jpg?ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-5_tutorial.jpg?ssl=1
[19]: https://itsfoss.com/swap-size/
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-6_tutorial.jpg?ssl=1
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-7_tutorial.jpg?ssl=1
[22]: https://linuxhandbook.com/linux-directory-structure/
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-8_tutorial.jpg?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-9_tutorial.jpg?ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-10_tutorial.jpg?ssl=1
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-11_tutorial.jpg?ssl=1
[28]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-12_tutorial.jpg?ssl=1
[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-13_tutorial.jpg?ssl=1
[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-14_tutorial.jpg?ssl=1
[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/install-ubuntu-linux-on-intel-nuc-15_tutorial.jpg?ssl=1

View File

@ -1,222 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
[#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots
======
Within a year of releasing **Manjaro 18.0** (**Illyria**), the team has come out with its next big release, **Manjaro 18.1**, codenamed "**Juhraya**". The team has also published an official announcement saying that Juhraya comes packed with a lot of improvements and bug fixes.
### New Features in Manjaro 18.1
Some of the new features and enhancements in Manjaro 18.1 are listed below:
* Option to choose between LibreOffice or Free Office
* New Matcha theme for Xfce edition
* Redesigned messaging system in KDE edition
  * Support for Snap and FlatPak packages using the "bauh" tool
### Minimum System Requirements for Manjaro 18.1
* 1 GB RAM
* One GHz Processor
* Around 30 GB Hard disk space
* Internet Connection
* Bootable Media (USB/DVD)
### Step by Step Guide to Install Manjaro 18.1 (KDE Edition)
To start installing Manjaro 18.1 (KDE Edition) on your system, please follow the steps outlined below:
### Step 1) Download Manjaro 18.1 ISO
Before installing, you need to download the latest copy of Manjaro 18.1 from its official download page located **[here][1]**. Since this guide covers the KDE edition, we chose to install the KDE version, but the installation process is the same for all desktop environments, including the Xfce, KDE and Gnome editions.
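Optionally, verify the download before writing it to a USB disk (a generic step, not from the original guide; the ISO filename here is illustrative, and the reference checksum is published alongside the image on the download page):

```
sha256sum manjaro-kde-18.1.0-stable-x86_64.iso
```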
### Step 2) Create a USB Bootable Disk
Once you have successfully downloaded the ISO file from the Manjaro downloads page, it is time to create a bootable USB disk. Write the downloaded ISO file to a USB disk to create the bootable media. Make sure to change your boot settings to boot from USB, then restart your system.
### Step 3) Manjaro Live Installation Environment
When the system restarts, it will automatically detect the USB drive and start booting into the Manjaro live installation screen.
[![Boot-Manjaro-18-1-kde-installation][2]][3]
Next, use the arrow keys to choose "**Boot: Manjaro x86_64 kde**" and hit Enter to launch the Manjaro installer.
### Step 4) Choose Launch Installer
Next, the Manjaro installer will be launched. If you are connected to the internet, Manjaro will automatically detect your location and time zone. Click "**Launch Installer**" to start installing Manjaro 18.1 KDE edition on your system.
[![Choose-Launch-Installaer-Manjaro18-1-kde][2]][4]
### Step 5) Choose Your Language
Next, the installer will ask you to choose your preferred language.
[![Choose-Language-Manjaro18-1-Kde-Installation][2]][5]
Select your desired language and click “Next”
### Step 6) Choose Your time zone and region
In the next screen, select your desired time zone and region and click “Next” to continue
[![Select-Location-During-Manjaro18-1-KDE-Installation][2]][6]
### Step 7) Choose Keyboard layout
In the next screen, select your preferred keyboard layout and click “Next” to continue.
[![Select-Keyboard-Layout-Manjaro18-1-kde-installation][2]][7]
### Step 8) Choose Partition Type
This is a very critical step in the installation process. It will allow you to choose between:
* Erase Disk
* Manual Partitioning
* Install Alongside
* Replace a Partition
If you are installing Manjaro 18.1 in a VM (virtual machine), then you won't be able to see the last two options.
If you are new to Manjaro Linux, then I would suggest you go with the first option (**Erase Disk**); it will automatically create the required partitions for you. If you want to create custom partitions, then choose the second option, "**Manual Partitioning**"; as its name suggests, it will allow us to create our own custom partitions.
In this tutorial, I will be creating custom partitions by selecting the "Manual Partitioning" option.
[![Manual-Partition-Manjaro18-1-KDE][2]][8]
Choose the second option and click “Next” to continue.
As you can see, I have a 40 GB hard disk, so I will create the following partitions on it:
  * /boot 2 GB (ext4 file system)
  * / 10 GB (ext4 file system)
  * /home 22 GB (ext4 file system)
  * /opt 4 GB (ext4 file system)
  * Swap 2 GB
When we click on Next in the above window, we will get the following screen; choose to create a **new partition table**.
[![Create-Partition-Table-Manjaro18-1-Installation][2]][9]
Click on OK.
Now choose the free space and then click on **create** to set up the first partition, /boot, with a size of 2 GB.
[![boot-partition-manjaro-18-1-installation][2]][10]
Click on OK to proceed further. In the next window, again choose the free space and then click on **create** to set up the second partition, /, with a size of 10 GB.
[![slash-root-partition-manjaro18-1-installation][2]][11]
Similarly, create the next partition, /home, with a size of 22 GB.
[![home-partition-manjaro18-1-installation][2]][12]
So far, we have created three partitions as primary partitions; now create the next partition as an extended partition.
[![Extended-Partition-Manjaro18-1-installation][2]][13]
Click on OK to proceed further.
Create the /opt and swap partitions, of size 4 GB and 2 GB respectively, as logical partitions.
[![opt-partition-manjaro-18-1-installation][2]][14]
[![swap-partition-manjaro18-1-installation][2]][15]
Once you are done creating all the partitions, click on Next.
[![choose-next-after-partition-creation][2]][16]
### Step 9) Provide User Information
In the next screen, you need to provide the user information including your name, username, password, computer name etc.
[![User-creation-details-manjaro18-1-installation][2]][17]
Click “Next” to continue with the installation after providing all the information.
In the next screen, you will be prompted to choose the office suite, so make the choice that suits your installation.
[![Office-Suite-Selection-Manjaro18-1][2]][18]
Click on Next to proceed further.
### Step 10) Summary Information
Before the actual installation is done, the installer will show you all the details you've chosen, including the language, time zone, keyboard layout, partitioning information etc. Click "**Install**" to proceed with the installation process.
[![Summary-manjaro18-1-installation][2]][19]
### Step 11) Install Manjaro 18.1 KDE Edition
Now the actual installation process begins, and once it is completed, restart the system to log in to Manjaro 18.1 KDE edition.
[![Manjaro18-1-Installation-Progress][2]][20]
[![Restart-Manjaro-18-1-after-installation][2]][21]
### Step 12) Log in after successful installation
After the restart, we will get the following login screen; use the user credentials that we created during the installation.
[![Login-screen-after-manjaro-18-1-installation][2]][22]
Click on Login.
[![KDE-Desktop-Screen-Manjaro-18-1][2]][23]
That's it! You've successfully installed Manjaro 18.1 KDE edition on your system; explore all of its exciting features. Please post your feedback and suggestions in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://manjaro.org/download/official/kde/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Boot-Manjaro-18-1-kde-installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Launch-Installaer-Manjaro18-1-kde.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Language-Manjaro18-1-Kde-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Location-During-Manjaro18-1-KDE-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Keyboard-Layout-Manjaro18-1-kde-installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manual-Partition-Manjaro18-1-KDE.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Create-Partition-Table-Manjaro18-1-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-manjaro-18-1-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-manjaro18-1-installation.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-manjaro18-1-installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Extended-Partition-Manjaro18-1-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/opt-partition-manjaro-18-1-installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/swap-partition-manjaro18-1-installation.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/choose-next-after-partition-creation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/User-creation-details-manjaro18-1-installation.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Office-Suite-Selection-Manjaro18-1.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Summary-manjaro18-1-installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manjaro18-1-Installation-Progress.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Restart-Manjaro-18-1-after-installation.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-after-manjaro-18-1-installation.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/KDE-Desktop-Screen-Manjaro-18-1.jpg

View File

@ -1,195 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: How to leverage failure
======
Use planned failure to ensure your code meets expected outcomes and
follow along with the .NET xUnit.net testing framework.
![failure sign at a party, celebrating failure][1]
In my article _[Mutation testing is the evolution of TDD][2]_, I exposed the power of iteration to guarantee a solution when a measurable test is available. In that article, an iterative approach helped to determine how to implement code that calculates the square root of a given number.
I also demonstrated that the most effective method is to find a measurable goal or test, then start iterating with best guesses. The first guess at the correct answer will most likely fail, as expected, so the failed guess needs to be refined. The refined guess must be validated against the measurable goal or test. Based on the result, the guess is either validated or must be further refined.
In this model, the only way to learn how to reach the solution is to fail repeatedly. It sounds counterintuitive, but amazingly, it works.
Following in the footsteps of that analysis, this article examines the best way to use a DevOps approach when building a solution containing some dependencies. The first step is to write a test that can be expected to fail.
### The problem with dependencies is that you can't depend on them
The problem with dependencies, as Michael Nygard wittily expresses in _[Architecture without an end state][3]_, is a huge topic better left for another article. Here, you'll look into potential pitfalls that dependencies tend to bring to a project and how to leverage test-driven development (TDD) to avoid those pitfalls.
First, pose a real-life challenge, then see how it can be solved using TDD.
### Who let the cat out?
![Cat standing on a roof][4]
In Agile development environments, it's helpful to start building the solution by defining the desired outcomes. Typically, the desired outcomes are described in a [_user story_][5]:
> _Using my home automation system (HAS),
> I want to control when the cat can go outside,
> because I want to keep the cat safe overnight._
Now that you have a user story, you need to elaborate on it by providing some functional requirements (that is, by specifying the _acceptance criteria_). Start with the simplest of scenarios described in pseudo-code:
> _Scenario #1: Disable cat trap door during nighttime_
>
> * Given that the clock detects that it is nighttime
> * When the clock notifies the HAS
> * Then HAS disables the Internet of Things (IoT)-capable cat trap door
>
### Decompose the system
The system you are building (the HAS) needs to be _decomposed_—broken down to its dependencies—before you can start working on it. The first thing you must do is identify any dependencies (if you're lucky, your system has no dependencies, which would make it easy to build, but then it arguably wouldn't be a very useful system).
From the simple scenario above, you can see that the desired business outcome (automatically controlling a cat door) depends on detecting nighttime. This dependency hinges upon the clock. But the clock is not capable of determining whether it is daylight or nighttime. It's up to you to supply that logic.
Another dependency in the system you're building is the ability to automatically access the cat door and enable or disable it. That dependency most likely hinges upon an API provided by the IoT-capable cat door.
### Fail fast toward dependency management
To satisfy one dependency, we will build the logic that determines whether the current time is daylight or nighttime. In the spirit of TDD, we will start with a small failure.
Refer to my [previous article][2] for detailed instructions on how to set up the development environment and scaffolds required for this exercise. We will be reusing the same .NET environment and relying on the [xUnit.net][6] framework.
Next, create a new project called HAS (for "home automation system") and create a file called **UnitTest1.cs**. In this file, write the first failing unit test. In this unit test, describe your expectations. For example, when the system runs, if the time is 7pm, then the component responsible for deciding whether it's daylight or nighttime returns the value "Nighttime."
Here is the unit test that describes that expectation:
```
using System;
using Xunit;
using app; // the namespace that will hold DayOrNightUtility (defined below)

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight();
            Assert.Equal(expected, actual);
        }
    }
}
```
By this point, you may be familiar with the shape and form of a unit test. A quick refresher: describe the expectation by giving the unit test a descriptive name, **Given7pmReturnNighttime**, in this example. Then in the body of the unit test, a variable named **expected** is created, and it is assigned the expected value (in this case, the value "Nighttime"). Following that, a variable named **actual** is assigned the actual value (available after the component or service processes the time of day).
Finally, it checks whether the expectation has been met by asserting that the expected and actual values are equal: **Assert.Equal(expected, actual)**.
You can also see in the above listing a component or service called **dayOrNightUtility**. This module is capable of receiving the message **GetDayOrNight** and is supposed to return the value of the type **string**.
Again, in the spirit of TDD, the component or service being described hasn't been built yet (it is merely being described with the intention to prescribe it later). Building it is driven by the described expectations.
Create a new file in the **app** folder and give it the name **DayOrNightUtility.cs**. Add the following C# code to that file and save it:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Undetermined";
            return dayOrNight;
        }
    }
}
```
Now go to the command line, change directory to the **unittests** folder, and run the test.
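Assuming the same .NET Core CLI tooling scaffolded in the previous article (an assumption; your setup may differ), the test run would be invoked like this:

```
$ dotnet test
```

The output reports the expected failure: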
```
[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```
Congratulations, you have written the first failing unit test. The unit test was expecting **DayOrNightUtility** to return string value "Nighttime" but instead, it received the string value "Undetermined."
### Fix the failing unit test
A quick and dirty way to fix the failing test is to replace the value "Undetermined" with the value "Nighttime" and save the change:
```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight() {
            string dayOrNight = "Nighttime";
            return dayOrNight;
        }
    }
}
```
Now when we run the test, it passes:
```
Starting test execution, please wait...
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```
However, hardcoding the values is basically cheating, so it's better to endow **DayOrNightUtility** with some intelligence. Modify the **GetDayOrNight** method to include some time-calculation logic:
```
public string GetDayOrNight() {
    string dayOrNight = "Daylight";
    DateTime time = new DateTime(); // note: this is DateTime.MinValue (hour 0), not the current clock time
    if (time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```
The method now constructs a **DateTime** value and compares its **Hour** value to see if it is less than 7am. (Strictly speaking, `new DateTime()` yields midnight on day one of the calendar rather than the current system time; reading the real clock, and testing around it, is exactly the time dependency this series is working toward.) If the hour is less than 7, the logic transforms the **dayOrNight** string value from "Daylight" to "Nighttime." The unit test now passes.
### The start of a test-driven solution
We now have the beginnings of a base case unit test and a viable solution for our time dependency. There are more than a few cases still to work through.
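One direction to take this (my own sketch, not the article's prescribed next step) is to make the time injectable, so a test can pin the hour instead of depending on the system clock. A hypothetical overload:

```
// Overload that accepts the time to evaluate, making the logic testable
public string GetDayOrNight(DateTime time) {
    string dayOrNight = "Daylight";
    if (time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```

A test could then call, for example, `dayOrNightUtility.GetDayOrNight(new DateTime(2019, 10, 21, 2, 0, 0))` and reliably expect "Nighttime".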
In the next article, I'll demonstrate how to test for daylight hours and how to leverage failure along the way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/

View File

@ -1,98 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Object-Oriented Programming and Essential State
======
Back in 2015, Brian Will wrote a provocative blog post: [Object-Oriented Programming: A Disaster Story][1]. He followed it up with a video called [Object-Oriented Programming is Bad][2], which is much more detailed. I recommend taking the time to watch the video, but here's my one-paragraph summary:
The Platonic ideal of OOP is a sea of decoupled objects that send stateless messages to one another. No one really makes software like that, and Brian points out that it doesn't even make sense: objects need to know which other objects to send messages to, and that means they need to hold references to one another. Most of the video is about the pain that happens trying to couple objects for control flow, while pretending that they're decoupled by design.
Overall his ideas resonate with my own experiences of OOP: objects can be okay, but I've just never been satisfied with object-_orientation_ for modelling a program's control flow, and trying to make code "properly" object-oriented always seems to create layers of unnecessary complexity.
There's one thing I don't think he explains fully. He says outright that "encapsulation does not work", but follows it with the footnote "at fine-grained levels of code", and goes on to acknowledge that objects can sometimes work, and that encapsulation can be okay at the level of, say, a library or file. But he doesn't explain exactly why it sometimes works and sometimes doesn't, and how/where to draw the line. Some people might say that makes his "OOP is bad" claim flawed, but I think his point stands, and that the line can be drawn between essential state and accidental state.
If you haven't heard this usage of the terms "essential" and "accidental" before, you should check out Fred Brooks' classic [No Silver Bullet][3] essay. (He's written many great essays about building software systems, by the way.) I've already written [my own post about essential and accidental complexity][4] before, but here's a quick TL;DR: Software is complex. Partly that's because we want software to solve messy real-world problems, and we call that "essential complexity". "Accidental complexity" is all the other complexity that exists because we're trying to use silicon and metal to solve problems that have nothing to do with silicon and metal. For example, code for memory management, or transferring data between RAM and disk, or parsing text formats, is all "accidental complexity" for most programs.
Suppose you're building a chat application that supports multiple channels. Messages can arrive for any channel at any time. Some channels are especially interesting and the user wants to be notified or pinged when a new message comes in. Other channels are muted: the message is stored, but the user isn't interrupted. You need to keep track of the user's preferred setting for each channel.
One way to do it is to use a map (a.k.a. hash table, dictionary or associative array) between the channels and channel settings. Note that a map is the kind of abstract data type (ADT) that Brian Will said can work as an object.
If we get a debugger and look inside the map object in memory, what will we see? We'll find channel IDs and channel settings data of course (or pointers to them, at least). But we'll also find other data. If the map is implemented using a red-black tree, we'll see tree node objects with red/black labels and pointers to other nodes. The channel-related data is the essential state, and the tree nodes are the accidental state. Notice something, though: The map effectively encapsulates its accidental state — you could replace the map with another one implemented using AVL trees and your chat app would still work. On the other hand, the map doesn't encapsulate the essential state (simply using `get()` and `set()` methods to access data isn't encapsulation). In fact, the map is as agnostic as possible about the essential state — you could use basically the same map data structure to store other mappings unrelated to channels or notifications.
And that's why the map ADT is so successful: it encapsulates accidental state and is decoupled from essential state. If you think about it, the problems that Brian describes with encapsulation are problems with trying to encapsulate essential state. The benefits that others describe are benefits from encapsulating accidental state.
It's pretty hard to make entire software systems meet this ideal, but scaling up, I think it looks something like this:
* No global, mutable state
* Accidental state encapsulated (in objects or modules or whatever)
* Stateless accidental complexity enclosed in free functions, decoupled from data
* Inputs and outputs made explicit using tricks like dependency injection
* Components fully owned and controlled from easily identifiable locations
Some of this goes against instincts I had a long time ago. For example, if you have a function that makes a database query, the interface looks simpler and nicer if the database connection handling is hidden inside the function, and the only parameters are the query parameters. However, when you build a software system out of functions like this, it actually becomes more complex to coordinate the database usage. Not only are the components doing things their own ways, they're trying to hide what they're doing as "implementation details". The fact that a database query requires a database connection never was an implementation detail. If something can't be hidden, it's saner to make it explicit.
I'm wary of feeding the OOP and functional programming false dichotomy, but I think it's interesting that FP goes to the opposite extreme of OOP: OOP tries to encapsulate things, including the essential complexity that can't be encapsulated, while pure FP tends to make things explicit, including some accidental complexity. Most of the time, that's the safer side to go wrong on, but sometimes (such as when [building self-referential data structures in a purely functional language][5]) you can get designs that are more for the sake of FP than for the sake of simplicity (which is why [Haskell includes some escape hatches][6]). I've written before about [the middle ground of so-called "weak purity"][7].
Brian found that encapsulation works at a larger scale for a couple of reasons. One is that larger components are simply more likely to contain accidental state, just because of size. Another is that what's "accidental" is relative to the problem you're solving. From the chat app user's point of view, "accidental complexity" is anything unrelated to messages and channels and users, etc. As you break the problems into subproblems, however, more things become essential. For example, the mapping between channel names and channel IDs is arguably accidental complexity when solving the "build a chat app" problem, but it's essential complexity when solving the "implement the `getChannelIdByName()` function" subproblem. So, encapsulation tends to be less useful for subcomponents than supercomponents.
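To make that concrete, here is a small D sketch of my own (hypothetical names; not from Brian's video or the original post). At the whole-app level the name-to-ID mapping is a detail, but inside this subproblem it is the essential state, while the associative array's hash-table internals stay encapsulated as accidental state:

```
import std.stdio;

// Essential state for this subproblem: the name-to-ID mapping itself.
// Accidental state (the hash-table internals of D's built-in associative
// array) stays hidden and could be swapped out without changing callers.
long[string] channelIds;

long getChannelIdByName(string name)
{
    return channelIds[name];
}

void main()
{
    channelIds["general"] = 42;
    writeln(getChannelIdByName("general")); // prints 42
}
```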
By the way, at the end of his video, Brian Will wonders if any language supports anonymous functions that _can't_ access the scope they're in. [D][8] does. Anonymous lambdas in D are normally closures, but anonymous stateless functions can also be declared if that's what you want:
```
import std.stdio;

void main()
{
    int x = 41;

    // Value from immediately executed lambda
    auto v1 = () {
        return x + 1;
    }();
    writeln(v1);

    // Same thing
    auto v2 = delegate() {
        return x + 1;
    }();
    writeln(v2);

    // Plain functions aren't closures
    auto v3 = function() {
        // Can't access x
        // Can't access any mutable global state either if also marked pure
        return 42;
    }();
    writeln(v3);
}
```
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org

View File

@ -1,81 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use sshuttle to build a poor mans VPN)
[#]: via: (https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
Use sshuttle to build a poor man's VPN
======
![][1]
Nowadays, business networks often use a VPN (virtual private network) for [secure communications with workers][2]. However, the protocols used can sometimes make performance slow. If you can reach a host on the remote network with SSH, you could set up port forwarding. But this can be painful, especially if you need to work with many hosts on that network. Enter **sshuttle** — which lets you set up a quick and dirty VPN with just SSH access. Read on for more information on how to use it.
The sshuttle application was designed for exactly the kind of scenario described above. The only requirement on the remote side is that the host must have Python available. This is because sshuttle constructs and runs some Python source code to help transmit data.
### Installing sshuttle
The sshuttle application is packaged in the official repositories, so it's easy to install. Open a terminal and use the following command [with sudo][3]:
```
$ sudo dnf install sshuttle
```
Once installed, you may find the manual page interesting:
```
$ man sshuttle
```
### Setting up the VPN
The simplest case is just to forward all traffic to the remote network. This isn't necessarily a crazy idea, especially if you're not on a trusted local network like your own home. Use the _-r_ switch with the SSH username and the remote host name:
```
$ sshuttle -r username@remotehost 0.0.0.0/0
```
However, you may want to restrict the VPN to specific subnets rather than all network traffic. (A complete discussion of subnets is outside the scope of this article, but you can read more [here on Wikipedia][4].) Let's say your office internally uses the reserved Class A subnet 10.0.0.0 and the reserved Class B subnet 172.16.0.0. The command above becomes:
```
$ sshuttle -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```
This works great for working with hosts on the remote network by IP address. But what if your office is a large network with lots of hosts? Names are probably much more convenient — maybe even required. Never fear, sshuttle can also forward DNS queries to the office with the `--dns` switch:
```
$ sshuttle --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```
To run sshuttle like a daemon, add the _-D_ switch. This will also send log information to the systemd journal via its syslog compatibility.
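Putting the pieces together, a daemonized office VPN with DNS forwarding might look like this (a sketch reusing the illustrative username, host and subnets from above):

```
$ sshuttle -D --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16
```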
Depending on the capabilities of your system and the remote system, you can use sshuttle for an IPv6 based VPN. You can also set up configuration files and integrate it with your system startup if desired. If you want to read even more about sshuttle and how it works, [check out the official documentation][5]. For a look at the code, [head over to the GitHub page][6].
* * *
_Photo by _[_Kurt Cotoaga_][7]_ on _[_Unsplash_][8]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-sshuttle-to-build-a-poor-mans-vpn/
作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/10/sshuttle-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Virtual_private_network
[3]: https://fedoramagazine.org/howto-use-sudo/
[4]: https://en.wikipedia.org/wiki/Subnetwork
[5]: https://sshuttle.readthedocs.io/en/stable/index.html
[6]: https://github.com/sshuttle/sshuttle
[7]: https://unsplash.com/@kydroon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/s/photos/shuttle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux sudo flaw can lead to unauthorized privileges)
[#]: via: (https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux sudo flaw can lead to unauthorized privileges
======
Exploiting a newly discovered sudo flaw in Linux can enable certain users to run commands as root despite restrictions against it.
A newly discovered and serious flaw in the [**sudo**][1] command can, if exploited, enable users to run commands as root in spite of the fact that the syntax of the  **/etc/sudoers** file specifically disallows them from doing so.
Updating **sudo** to version 1.8.28 should address the problem, and Linux admins are encouraged to do so as soon as possible. 
How the flaw might be exploited depends on specific privileges granted in the **/etc/sudoers** file. A rule that allows a user to edit files as any user except root, for example, would actually allow that user to edit files as root as well. In this case, the flaw could lead to very serious problems.
To exploit the flaw, a user needs to have been assigned privileges in the **/etc/sudoers** file that allow that user to run commands as some other users, and the flaw is limited to the command privileges that are assigned in this way.
This problem affects versions prior to 1.8.28. To check your sudo version, use this command:
```
$ sudo -V
Sudo version 1.8.27 <===
Sudoers policy plugin version 1.8.27
Sudoers file grammar version 46
Sudoers I/O plugin version 1.8.27
```
The vulnerability has been assigned [CVE-2019-14287][4] in the **Common Vulnerabilities and Exposures** database. The risk is that any user who has been given the ability to run even a single command as an arbitrary user may be able to escape the restrictions and run that command as root even if the specified privilege is written to disallow running the command as root.
The lines below are meant to give the user "jdoe" the ability to edit files with **vi** as any user except root (**!root** means "not root") and the user "nemo" the right to run the **id** command as any user except root:
```
# affected entries on host "dragonfly"
jdoe dragonfly = (ALL, !root) /usr/bin/vi
nemo dragonfly = (ALL, !root) /usr/bin/id
```
However, given the flaw, either of these users would be able to circumvent the restriction and edit files or run the **id** command as root as well.
The flaw can be exploited by an attacker to run commands as root by specifying the user ID "-1" or "4294967295."  
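Based on the sudoers entry for "nemo" above, here is a sketch of the widely published proof of concept for this CVE:
```
$ sudo -u#-1 id -u
0
```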
The response of "0" demonstrates that the command is being run as root (0 is root's user ID).
Joe Vennix from Apple Information Security both found and analyzed the problem.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236499/some-tricks-for-using-sudo.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14287
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to type emoji on Linux)
[#]: via: (https://opensource.com/article/19/10/how-type-emoji-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to type emoji on Linux
======
The GNOME desktop makes it easy to use emoji in your communications.
![A cat under a keyboard.][1]
Emoji are those fanciful pictograms that snuck into the Unicode character space. They're all the rage online, and people use them for all kinds of surprising things, from signifying reactions on social media to serving as visual labels for important file names. There are many ways to enter Unicode characters on Linux, but the GNOME desktop makes it easy to find and type an emoji.
![Emoji in Emacs][2]
### Requirements
For this easy method, you must be running Linux with the [GNOME][3] desktop.
You must also have an emoji font installed. There are many to choose from, so do a search for _emoji_ using your favorite software installer application or package manager.
For example, on Fedora:
```
$ sudo dnf search emoji
emoji-picker.noarch : An emoji selection tool
unicode-emoji.noarch : Unicode Emoji Data Files
eosrei-emojione-fonts.noarch : A color emoji font
twitter-twemoji-fonts.noarch : Twitter Emoji for everyone
google-android-emoji-fonts.noarch : Android Emoji font released by Google
google-noto-emoji-fonts.noarch : Google “Noto Emoji” Black-and-White emoji font
google-noto-emoji-color-fonts.noarch : Google “Noto Color Emoji” colored emoji font
[...]
```
On Ubuntu or Debian, use **apt search** instead.
I'm using [Google Noto Color Emoji][4] in this article.
### Get set up
To get set up, launch GNOME's Settings application.
1. In Settings, click the **Region & Language** category in the left column.
2. Click the plus symbol (**+**) under the **Input Sources** heading to bring up the **Add an Input Source** panel.
![Add a new input source][5]
3. In the **Add an Input Source** panel, click the hamburger menu at the bottom of the input list.
![Add an Input Source panel][6]
4. Scroll to the bottom of the list and select **Other**.
5. In the **Other** list, find **Other (Typing Booster)**. (You can type **boost** in the search field at the bottom to filter the list.)
![Find Other \(Typing Booster\) in inputs][7]
6. Click the **Add** button in the top-right corner of the panel to add the input source to GNOME.
Once you've done that, you can close the Settings window.
#### Switch to Typing Booster
You now have a new icon in the top-right of your GNOME desktop. By default, it's set to the two-letter abbreviation of your language (**en** for English, **eo** for Esperanto, **es** for Español, and so on). If you press the **Super** key (the key with a Linux penguin, Windows logo, or Mac Command symbol) and the **Spacebar** together on your keyboard, you will switch input sources from your default source to the next on your input list. In this example, you only have two input sources: your default language and Typing Booster.
Try pressing **Super**+**Spacebar** together and watch the input name and icon change.
#### Configure Typing Booster
With the Typing Booster input method active, click the input sources icon in the top-right of your screen, select **Unicode symbols and emoji predictions**, and set it to **On**.
![Set Unicode symbols and emoji predictions to On][8]
This makes Typing Booster dedicated to typing emoji, which isn't all Typing Booster is good for, but in the context of this article it's exactly what is needed.
### Type emoji
With Typing Booster still active, open a text editor like Gedit, a web browser, or anything that you know understands Unicode characters, and type "_thumbs up_." As you type, Typing Booster searches for matching emoji names.
![Typing Booster searching for emojis][9]
To leave emoji mode, press **Super**+**Spacebar** again, and your input source goes back to your default language.
### Switch the switcher
If the **Super**+**Spacebar** keyboard shortcut is not natural for you, then you can change it to a different combination. In GNOME Settings, navigate to **Devices** and select **Keyboard**.
In the top bar of the **Keyboard** window, search for **Input** to filter the list. Set **Switch to next input source** to a key combination of your choice.
![Changing keystroke combination in GNOME settings][10]
### Unicode input
The fact is, keyboards were designed for a 26-letter (or thereabouts) alphabet along with a handful of numerals and symbols. Even ASCII has more characters than you find on a typical keyboard, to say nothing of the millions of characters within Unicode. If you want to type Unicode characters into a modern Linux application but don't want to switch to Typing Booster, then you can use the Unicode input shortcut.
1. With your default language active, open a text editor like Gedit, a web browser, or any application you know accepts Unicode.
2. Press **Ctrl**+**Shift**+**U** on your keyboard to enter Unicode entry mode. Release the keys.
3. You are currently in Unicode entry mode, so type the code point of a Unicode symbol. For instance, try **1F44D** for a 👍 symbol, or **2620** for a ☠ symbol. To get the code point of a Unicode symbol, you can search the internet or refer to the [Unicode specification][11].
### Pragmatic emoji-ism
Emoji are fun and expressive. They can make your text unique to you. They can also be utilitarian. Because emoji are Unicode characters, they can be used anywhere a font can be used, and they can be used the same way any alphabetic character can be used. For instance, if you want to mark a series of files with a special symbol, you can add an emoji to the name, and you can filter by that emoji in Search.
![Labeling a file with emoji][12]
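For example, from a shell (the file names here are hypothetical), a glob pattern matches an emoji just like any other character:
```
$ touch budget⭐.ods report⭐.odt notes.txt
$ ls *⭐*
budget⭐.ods  report⭐.odt
```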
Use emoji all you want because Linux is a Unicode-friendly environment, and it's getting friendlier with every release.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/how-type-emoji-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead_cat-keyboard.png?itok=fuNmiGV- (A cat under a keyboard.)
[2]: https://opensource.com/sites/default/files/uploads/emacs-emoji.jpg (Emoji in Emacs)
[3]: https://www.gnome.org/
[4]: https://www.google.com/get/noto/help/emoji/
[5]: https://opensource.com/sites/default/files/uploads/gnome-setting-region-add.png (Add a new input source)
[6]: https://opensource.com/sites/default/files/uploads/gnome-setting-input-list.png (Add an Input Source panel)
[7]: https://opensource.com/sites/default/files/uploads/gnome-setting-input-other-typing-booster.png (Find Other (Typing Booster) in inputs)
[8]: https://opensource.com/sites/default/files/uploads/emoji-input-on.jpg (Set Unicode symbols and emoji predictions to On)
[9]: https://opensource.com/sites/default/files/uploads/emoji-input.jpg (Typing Booster searching for emojis)
[10]: https://opensource.com/sites/default/files/uploads/gnome-setting-keyboard-switch-input.jpg (Changing keystroke combination in GNOME settings)
[11]: http://unicode.org/emoji/charts/full-emoji-list.html
[12]: https://opensource.com/sites/default/files/uploads/file-label.png (Labeling a file with emoji)

View File

@ -0,0 +1,218 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to the Linux useradd command)
[#]: via: (https://opensource.com/article/19/10/linux-useradd-command)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Intro to the Linux useradd command
======
Add users (and customize their accounts as needed) with the useradd command.
![people in different locations who are part of the same team][1]
Adding a user is one of the most fundamental exercises on any computer system; this article focuses on how to do it on a Linux system.
Before getting started, I want to mention three fundamentals to keep in mind. First, like with most operating systems, Linux users need an account to be able to log in. This article specifically covers local accounts, not network accounts such as LDAP. Second, accounts have both a name (called a username) and a number (called a user ID). Third, users are typically placed into a group. Groups also have a name and group ID.
As you'd expect, Linux includes a command-line utility for adding users; it's called **useradd**. You may also find the command **adduser**. Many distributions have added this symbolic link to the **useradd** command as a matter of convenience.
```
$ file `which adduser`
/usr/sbin/adduser: symbolic link to useradd
```
Let's take a look at **useradd**.
> Note: The defaults described in this article reflect those in Red Hat Enterprise Linux 8.0. You may find subtle differences in these files and certain defaults on other Linux distributions or other Unix operating systems such as FreeBSD or Solaris.
### Default behavior
The basic usage of **useradd** is quite simple: A user can be added just by providing their username.
```
$ sudo useradd sonny
```
In this example, the **useradd** command creates an account called _sonny_. A group with the same name is also created, and _sonny_ is placed in it to be used as the primary group. There are other parameters, such as language and shell, that are applied according to defaults and values set in the configuration files **/etc/default/useradd** and **/etc/login.defs**. This is generally sufficient for a single, personal system or a small, one-server business environment.
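You can inspect the current defaults with **useradd -D**. The output below is from a typical RHEL 8 system; your values may differ:
```
$ useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
```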
While the two files above govern the behavior of **useradd**, user information is stored in other files found in the **/etc** directory, which I will refer to throughout this article.
File | Description | Fields (bold—set by useradd)
---|---|---
passwd | Stores user account details | **username**:unused:**uid**:**gid**:**comment**:**homedir**:**shell**
shadow | Stores user account security details | **username**:password:lastchange:minimum:maximum:warn:**inactive**:**expire**:unused
group | Stores group details | **groupname**:unused:**gid**:**members**
### Customizable behavior
The command line allows customization for times when an administrator needs finer control, such as to specify a user's ID number.
#### User and group ID numbers
By default, **useradd** tries to use the same number for the user ID (UID) and primary group ID (GID), but there are no guarantees. Although it's not necessary for the UID and GID to match, it's easier for administrators to manage them when they do.
I have just the scenario to explain. Suppose I add another account, this time for Timmy. Comparing the two users, _sonny_ and _timmy_, with the **getent** command shows that both users and their respective primary groups were created.
```
$ getent passwd sonny timmy
sonny:x:1001:1002:Sonny:/home/sonny:/bin/bash
timmy:x:1002:1003::/home/timmy:/bin/bash
$ getent group sonny timmy
sonny:x:1002:
timmy:x:1003:
```
Unfortunately, neither user's UID matches their primary GID. This is because the default behavior is to assign the next available UID to the user and then attempt to assign the same number to the primary group. However, if that number is already used, the next available GID is assigned to the group. To explain what happened, I hypothesize that a group with GID 1001 already exists and enter a command to confirm.
```
$ getent group 1001
book:x:1001:alan
```
The group _book_ with the ID _1001_ has caused the GIDs to be off by one. This is an example where a system administrator would need to take more control of the user-creation process. To resolve this issue, I must first determine the next available user and group ID that will match. The commands **getent group** and **getent passwd** will be helpful in determining the next available number. This number can be passed with the **-u** argument.
```
$ sudo useradd -u 1004 bobby
$ getent passwd bobby; getent group bobby
bobby:x:1004:1004::/home/bobby:/bin/bash
bobby:x:1004:
```
Another good reason to specify the ID is for users that will be accessing files on a remote system using the Network File System (NFS). NFS is easier to administer when all client and server systems have the same ID configured for a given user. I cover this in a bit more detail in my article on [using autofs to mount NFS shares][2].
### More customization
Very often though, other account parameters need to be specified for a user. Here are brief examples of the most common customizations you may need to use.
#### Comment
The comment option is a plain-text field for providing a short description or other information using the **-c** argument.
```
$ sudo useradd -c "Bailey is cool" bailey
$ getent passwd bailey
bailey:x:1011:1011:Bailey is cool:/home/bailey:/bin/bash
```
#### Groups
A user can be assigned one primary group and multiple secondary groups. The **-g** argument specifies the name or GID of the primary group. If it's not specified, **useradd** creates a primary group with the user's same name (as demonstrated above). The **-G** (uppercase) argument is used to pass a comma-separated list of groups that the user will be placed into; these are known as secondary groups.
```
$ sudo useradd -G tgroup,fgroup,libvirt milly
$ id milly
uid=1012(milly) gid=1012(milly) groups=1012(milly),981(libvirt),4000(fgroup),3000(tgroup)
```
#### Home directory
The default behavior of **useradd** is to create the user's home directory in **/home**. However, different aspects of the home directory can be overridden with the following arguments. The **-b** argument sets a different base directory where user homes are placed, for example, **/home2** instead of the default **/home**.
```
$ sudo useradd -b /home2 vicky
$ getent passwd vicky
vicky:x:1013:1013::/home2/vicky:/bin/bash
```
The **-d** argument lets you specify a home directory with a different name from the user's.
```
$ sudo useradd -d /home/ben jerry
$ getent passwd jerry
jerry:x:1014:1014::/home/ben:/bin/bash
```
#### The skeleton directory
The **-k** argument instructs **useradd** to populate the new user's home directory with any files in the **/etc/skel** directory. These are usually shell configuration files, but they can be anything that a system administrator would like to make available to all new users.
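For example, here is a sketch using a hypothetical alternate skeleton directory. Note that, per the **useradd** man page, **-k** is only valid together with **-m** (create the home directory):
```
$ sudo mkdir /etc/skel_devs               # hypothetical skeleton with developer dotfiles
$ sudo useradd -m -k /etc/skel_devs devon
```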
#### Shell
The **-s** argument can be used to specify the shell. The default is used if nothing else is specified. For example, in the following, the **bash** shell is defined in the default configuration file, but Wally has requested **zsh**.
```
$ grep SHELL /etc/default/useradd
SHELL=/bin/bash
$ sudo useradd -s /usr/bin/zsh wally
$ getent passwd wally
wally:x:1004:1004::/home/wally:/usr/bin/zsh
```
#### Security
Security is an essential part of user management, so there are several options available with the **useradd** command. A user account can be given an expiration date, in the form YYYY-MM-DD, using the **-e** argument.
```
$ sudo useradd -e 20191231 sammy
$ sudo getent shadow sammy
sammy:!!:18171:0:99999:7::20191231:
```
An account can also be disabled automatically if the password expires. The **-f** argument will set the number of days after the password expires before the account is disabled. Zero is immediate.
```
$ sudo useradd -f 30 willy
$ sudo getent shadow willy
willy:!!:18171:0:99999:7:30::
```
### A real-world example
In practice, several of these arguments may be used when creating a new user account. For example, if I need to create an account for Perry, I might use the following command:
```
$ sudo useradd -u 1020 -c "Perry Example" \
-G tgroup -b /home2 \
-s /usr/bin/zsh \
-e 20201201 -f 5 perry
```
Refer to the sections above to understand each option. Verify the results with:
```
$ getent passwd perry; getent group perry; getent shadow perry; id perry
perry:x:1020:1020:Perry Example:/home2/perry:/usr/bin/zsh
perry:x:1020:
perry:!!:18171:0:99999:7:5:20201201:
uid=1020(perry) gid=1020(perry) groups=1020(perry),3000(tgroup)
```
### Some final advice
The **useradd** command is a "must-know" for any Unix (not just Linux) administrator. It is important to understand all of its options since user creation is something that you want to get right the first time. This means having a well-thought-out naming convention that includes a dedicated UID/GID range reserved for your users across your enterprise, not just on a single system—particularly when you're working in a growing organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/linux-useradd-command
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connection_people_team_collaboration.png?itok=0_vQT8xV (people in different locations who are part of the same team)
[2]: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares

View File

@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using multitail on Linux)
[#]: via: (https://www.networkworld.com/article/3445228/using-multitail-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Using multitail on Linux
======
[Glen Bowman][1] [(CC BY-SA 2.0)][2]
The **multitail** command can be very helpful whenever you want to watch activity on a number of files at the same time, especially log files. It works like a multi-windowed **tail -f** command. That is, it displays the bottoms of files and new lines as they are being added. While easy to use in general, **multitail** does provide some command-line and interactive options that you should be aware of before you start to use it routinely.
### Basic multitail-ing
The simplest use of **multitail** is to list the names of the files that you wish to watch on the command line. This command splits the screen horizontally (i.e., top and bottom), displaying the bottom of each of the files along with updates.
```
$ multitail /var/log/syslog /var/log/dmesg
```
The display will be split like this:
```
+-----------------------+
| |
| |
+-----------------------+
| |
| |
+-----------------------+
```
The lines displayed from each of the files would be followed by a single line per file that includes the assigned file number (starting with 00), the file name, the file size, and the date and time the most recent content was added. Each of the files will be allotted half the space available regardless of its size or activity. For example:
```
content lines from my1.log
more content
more lines
00] my1.log 59KB - 2019/10/14 12:12:09
content lines from my2.log
more content
more lines
01] my2.log 120KB - 2019/10/14 14:22:29
```
Note that **multitail** will not complain if you ask it to display non-text files or files that you have no permission to view; you just won't see the contents.
You can also use wild cards to specify the files that you want to watch:
```
$ multitail my*.log
```
One thing to keep in mind is that **multitail** is going to split the screen evenly. If you specify too many files, you will see only a few lines from each, and only the first seven or so of the requested files will be displayed unless you take extra steps to view the later files (see the scrolling option described below). The exact result depends on how many lines are available in your terminal window.
Press **q** to quit **multitail** and return to your normal screen view.
### Dividing the screen
**Multitail** will split your terminal window vertically (i.e., left and right) if you prefer. For this, use the **-s** option. If you specify three files, the right side of your screen will be divided horizontally as well. With four, you'll have four equal-sized windows.
```
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| | | | | | | | |
| | | | | | | | |
| | | | +-----------+ +-----------+-----------+
| | | | | | | | |
| | | | | | | | |
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
2 files 3 files 4 files
```
Use **multitail -s 3 file1 file2 file3** if you want to split the screen into three columns.
```
+-------+-------+-------+
| | | |
| | | |
| | | |
| | | |
| | | |
+-------+-------+-------+
3 files with -s 3
```
### Scrolling
You can scroll up and down through displayed files, but you need to press **b** to bring up a selection menu and then use the up and down arrow keys to select the file you wish to scroll through. Then press the **enter** key. You can then scroll through the lines in an enlarged area, again using the up and down arrows. Press **q** when you're done to go back to the normal view.
### Getting Help
Pressing **h** in **multitail** will open a help menu describing some of the basic operations, though the man page provides quite a bit more information and is worth perusing if you want to learn even more about using this tool.
**Multitail** will not likely be installed on your system by default, but using **apt-get** or **yum** should make for an easy install (see below). The tool provides a lot of functionality, but with its character-based display, window borders will just be strings of **q**'s and **x**'s. It's a very handy tool when you need to keep an eye on file updates.
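A sketch of the install commands (on RHEL-family systems, the package typically comes from the EPEL repository):
```
$ sudo apt-get install multitail    # Debian/Ubuntu
$ sudo yum install multitail        # RHEL/CentOS, with EPEL enabled
```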
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3445228/using-multitail-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/glenbowman/7992498919/in/photolist-dbgDtv-gHfRRz-5uRM4v-gHgFnz-6sPqTZ-5uaP7H-USFPqD-pbtRUe-fiKiYn-nmgWL2-pQNepR-q68p8d-dDsUxw-dbgFKG-nmgE6m-DHyqM-nCKA4L-2d7uFqH-Kbqzk-8EwKg-8Vy72g-2X3NSN-78Bv84-buKWXF-aeM4ok-yhweWf-4vwpyX-9hu8nq-9zCoti-v5nzP5-23fL48r-24y6pGS-JhWDof-6zF75k-24y6nHS-9hr19c-Gueh6G-Guei7u-GuegFy-24y6oX5-26qu5iX-wKrnMW-Gueikf-24y6oYh-27y4wwA-x4z19F-x57yP4-24BY6gc-24y6nPo-QGwbkf
[2]: https://creativecommons.org/licenses/by-sa/2.0/legalcode
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,210 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Configure Rsyslog Server in CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to Configure Rsyslog Server in CentOS 8 / RHEL 8
======
**Rsyslog** is a free and open source logging utility that exists by default on **CentOS** 8 and **RHEL** 8 systems. It provides an easy and effective way of **centralizing logs** from client nodes to a single central server. The centralization of logs is beneficial in two ways. First, it simplifies viewing of logs, as the system administrator can view all the logs of remote servers from a central point without logging into every client system to check the logs; this is greatly beneficial if there are several servers that need to be monitored. Second, in the event that a remote client suffers a crash, you need not worry about losing the logs, because all the logs will be saved on the **central rsyslog server**. Rsyslog has replaced syslog, which supported only the **UDP** protocol. It extends the basic syslog protocol with superior features such as support for both **UDP** and **TCP** protocols in transporting logs, augmented filtering abilities, and flexible configuration options. That said, let's explore how to configure the Rsyslog server in CentOS 8 / RHEL 8 systems.
[![configure-rsyslog-centos8-rhel8][1]][2]
### Prerequisites
We are going to have the following lab setup to test the centralized logging process:
* **Rsyslog server:** CentOS 8 Minimal, IP address: 10.128.0.47
* **Client system:** RHEL 8 Minimal, IP address: 10.128.0.48
From the setup above, we will demonstrate how you can set up the Rsyslog server and later configure the client system to ship logs to the Rsyslog server for monitoring.
Let's get started!
### Configuring the Rsyslog Server on CentOS 8
By default, Rsyslog comes installed on CentOS 8 / RHEL 8 servers. To verify the status of Rsyslog, log in via SSH and issue the command:
```
$ systemctl status rsyslog
```
Sample Output
![rsyslog-service-status-centos8][1]
If rsyslog is not present for whatever reason, you can install it using the command:
```
$ sudo yum install rsyslog
```
Next, you need to modify a few settings in the Rsyslog configuration file. Open the configuration file.
```
$ sudo vim /etc/rsyslog.conf
```
Scroll and uncomment the lines shown below to allow reception of logs via the UDP protocol:
```
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
```
![rsyslog-conf-centos8-rhel8][1]
Similarly, if you prefer to enable TCP rsyslog reception, uncomment the lines:
```
module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")
```
![rsyslog-conf-tcp-centos8-rhel8][1]
Save and exit the configuration file.
To receive the logs from the client system, we need to open Rsyslog's default port 514 on the firewall. Since the configuration above enables both UDP and TCP reception, open the port for both protocols:
```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=514/udp --zone=public --permanent
```
Next, reload the firewall to save the changes
```
$ sudo firewall-cmd --reload
```
Sample Output
![firewall-ports-rsyslog-centos8][1]
Next, restart the Rsyslog server:
```
$ sudo systemctl restart rsyslog
```
To enable Rsyslog on boot, run the command below:
```
$ sudo systemctl enable rsyslog
```
To confirm that the Rsyslog server is listening on port 514, use the netstat command as follows:
```
$ sudo netstat -pnltu
```
Sample Output
![netstat-rsyslog-port-centos8][1]
Perfect! We have successfully configured our Rsyslog server to receive logs from the client system.
To view log messages in real time, run the command:
```
$ tail -f /var/log/messages
```
Let's now configure the client system.
### Configuring the client system on RHEL 8
As you did on the Rsyslog server, log in and check whether the rsyslog daemon is running by issuing the command:
```
$ sudo systemctl status rsyslog
```
Sample Output
![client-rsyslog-service-rhel8][1]
Next, proceed to open the rsyslog configuration file
```
$ sudo vim /etc/rsyslog.conf
```
At the end of the file, append one of the following lines, depending on the protocol you chose:
```
*.* @10.128.0.47:514 # Use @ for UDP protocol
*.* @@10.128.0.47:514 # Use @@ for TCP protocol
```
Save and exit the configuration file. Just like on the Rsyslog server, open port 514, the default Rsyslog port, on the firewall:
```
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
```
Next, reload the firewall to save the changes
```
$ sudo firewall-cmd --reload
```
Next, restart the rsyslog service:
```
$ sudo systemctl restart rsyslog
```
To enable Rsyslog on boot, run the following command:
```
$ sudo systemctl enable rsyslog
```
### Testing the logging operation
Having successfully set up and configured the Rsyslog server and the client system, it's time to verify that your configuration is working as intended.
On the client system issue the command:
```
# logger "Hello guys! This is our first log"
```
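If you want to test the network path independently of the local forwarding rule, recent versions of the util-linux **logger** can also send a message directly to a remote server. Here is a sketch, assuming the server address from our lab setup:
```
# logger -n 10.128.0.47 -P 514 --udp "Direct UDP test message"
```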
Now head over to the Rsyslog server and run the command below to check the log messages in real time:
```
# tail -f /var/log/messages
```
The message logged on the client system should appear in the Rsyslog server's log messages, confirming that the Rsyslog server is now receiving logs from the client system.
![centralize-logs-rsyslogs-centos8][1]
And that's it! We have successfully set up the Rsyslog server to receive log messages from a client system.
Read Also: **[How to Setup Multi Node Elastic Stack Cluster on RHEL 8 / CentOS 8][3]**
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/configure-rsyslog-server-centos-8-rhel-8/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/configure-rsyslog-centos8-rhel8.jpg
[3]: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-centos8/

View File

@ -0,0 +1,516 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use Protobuf for data interchange)
[#]: via: (https://opensource.com/article/19/10/protobuf-data-interchange)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
How to use Protobuf for data interchange
======
Protobuf encoding increases efficiency when exchanging data between applications written in different languages and running on different platforms.
![metrics and data shown on a computer screen][1]
Protocol buffers ([Protobufs][2]), like XML and JSON, allow applications, which may be written in different languages and running on different platforms, to exchange data. For example, a sending application written in Go could encode a Go-specific sales order in Protobuf, which a receiver written in Java then could decode to get a Java-specific representation of the received order. Here is a sketch of the architecture over a network connection:
```
Go sales order--->Pbuf-encode--->network--->Pbuf-decode--->Java sales order
```
Protobuf encoding, in contrast to its XML and JSON counterparts, is binary rather than text, which can complicate debugging. However, as the code examples in this article confirm, the Protobuf encoding is significantly more efficient in size than either XML or JSON encoding.
Protobuf is efficient in another way. At the implementation level, Protobuf and other encoding systems serialize and deserialize structured data. Serialization transforms a language-specific data structure into a bytestream, and deserialization is the inverse operation that transforms a bytestream back into a language-specific data structure. Serialization and deserialization may become the bottleneck in data interchange because these operations are CPU-intensive. Efficient serialization and deserialization is another Protobuf design goal.
Recent encoding technologies, such as Protobuf and FlatBuffers, derive from the [DCE/RPC][3] (Distributed Computing Environment/Remote Procedure Call) initiative of the early 1990s. Like DCE/RPC, Protobuf contributes to both the [IDL][4] (interface definition language) and the encoding layer in data interchange.
This article will look at these two layers then provide code examples in Go and Java to flesh out Protobuf details and show that Protobuf is easy to use.
### Protobuf as an IDL and encoding layer
DCE/RPC, like Protobuf, is designed to be language- and platform-neutral. The appropriate libraries and utilities allow any language and platform to play in the DCE/RPC arena. Furthermore, the DCE/RPC architecture is elegant. An IDL document is the contract between the remote procedure on the one side and callers on the other side. Protobuf, too, centers on an IDL document.
An IDL document is text and, in DCE/RPC, uses basic C syntax along with syntactic extensions for metadata (square brackets) and a few new keywords such as **interface**. Here is an example:
```
[uuid (2d6ead46-05e3-11ca-7dd1-426909beabcd), version(1.0)]
interface echo {
   const long int ECHO_SIZE = 512;
   void echo(
      [in]          handle_t h,
      [in, string]  idl_char from_client[ ],
      [out, string] idl_char from_service[ECHO_SIZE]
   );
}
```
This IDL document declares a procedure named **echo**, which takes three arguments: the **[in]** arguments of type **handle_t** (implementation pointer) and **idl_char** (array of ASCII characters) are passed to the remote procedure, whereas the **[out]** argument (also a string) is passed back from the procedure. In this example, the **echo** procedure does not explicitly return a value (the **void** to the left of **echo**) but could do so. A return value, together with one or more **[out]** arguments, allows the remote procedure to return arbitrarily many values. The next section introduces a Protobuf IDL, which differs in syntax but likewise serves as a contract in data interchange.
The IDL document, in both DCE/RPC and Protobuf, is the input to utilities that create the infrastructure code for exchanging data:
```
IDL document--->DCE/RPC or Protobuf utilities--->support code for data interchange
```
As relatively straightforward text, the IDL is likewise human-readable documentation about the specifics of the data interchange—in particular, the number of data items exchanged and the data type of each item.
Protobuf can be used in a modern RPC system such as [gRPC][5], but Protobuf on its own provides only the IDL layer and the encoding layer for messages passed from a sender to a receiver. Protobuf encoding, like the DCE/RPC original, is binary but more efficient.
At present, XML and JSON encodings still dominate in data interchange through technologies such as web services, which make use of in-place infrastructure such as web servers, transport protocols (e.g., TCP, HTTP), and standard libraries and utilities for processing XML and JSON documents. Moreover, database systems of various flavors can store XML and JSON documents, and even legacy relational systems readily generate XML encodings of query results. Every general-purpose programming language now has libraries that support XML and JSON. What, then, recommends a return to a _binary_ encoding system such as Protobuf?
Consider the negative decimal value **-128**. In the 2's complement binary representation, which dominates across systems and languages, this value can be stored in a single 8-bit byte: 10000000. The text encoding of this integer value in XML or JSON requires multiple bytes. For example, UTF-8 encoding requires four bytes for the string, literally **-128**, which is one byte per character (in hex, the values are 0x2d, 0x31, 0x32, and 0x38). XML and JSON also add markup characters, such as angle brackets and braces, to the mix. Details about Protobuf encoding are forthcoming, but the point of interest now is a general one: Text encodings tend to be significantly less compact than binary ones.
### A code example in Go using Protobuf
My code examples focus on Protobuf rather than RPC. Here is an overview of the first example:
* The IDL file named _dataitem.proto_ defines a Protobuf **message** with eight fields of different types: integer values with different ranges, floating-point values of a fixed size, and strings of two different lengths.
* The Protobuf compiler uses the IDL file to generate a Go-specific version (and, later, a Java-specific version) of the Protobuf **message** together with supporting functions.
* A Go app populates the native Go data structure with randomly generated values and then serializes the result to a local file. For comparison, XML and JSON encodings also are serialized to local files.
* As a test, the Go application reconstructs an instance of its native data structure by deserializing the contents of the Protobuf file.
* As a language-neutrality test, the Java application also deserializes the contents of the Protobuf file to get an instance of a native data structure.
The IDL file, the two Go source files, and the Java source file are available as a ZIP file on [my website][6].
The all-important Protobuf IDL document is shown below. The document is stored in the file _dataitem.proto_, with the customary _.proto_ extension.
#### Example 1. Protobuf IDL document
```
syntax = "proto3";
package main;
message DataItem {
  int64  oddA  = 1;
  int64  evenA = 2;
  int32  oddB  = 3;
  int32  evenB = 4;
  float  small = 5;
  float  big   = 6;
  string short = 7;
  string long  = 8;
}
```
The IDL uses the current proto3 rather than the earlier proto2 syntax. The package name (in this case, **main**) is optional but customary; it is used to avoid name conflicts. The structured **message** contains eight fields, each of which has a Protobuf data type (e.g., **int64**, **string**), a name (e.g., **oddA**, **short**), and a numeric tag (aka key) after the equals sign **=**. The tags, which are 1 through 8 in this example, are unique integer identifiers that determine the order in which the fields are serialized.
Protobuf messages can be nested to arbitrary levels, and one message can be the field type in the other. Here's an example that uses the **DataItem** message as a field type:
```
message DataItems {
  repeated DataItem item = 1;
}
```
A single **DataItems** message consists of repeated (zero or more) **DataItem** messages.
Protobuf also supports enumerated types for clarity:
```
enum PartnershipStatus {
  reserved "FREE", "CONSTRAINED", "OTHER";
}
```
The **reserved** qualifier ensures that the numeric values used to implement the three symbolic names cannot be reused.
To generate a language-specific version of one or more declared Protobuf **message** structures, the IDL file containing these is passed to the _protoc_ compiler (available in the [Protobuf GitHub repository][7]). For the Go code, the supporting Protobuf library can be installed in the usual way (with **%** as the command-line prompt):
```
% go get github.com/golang/protobuf/proto
```
The command to compile the Protobuf IDL file _dataitem.proto_ into Go source code is:
```
% protoc --go_out=. dataitem.proto
```
The flag **\--go_out** directs the compiler to generate Go source code; there are similar flags for other languages. The result, in this case, is a file named _dataitem.pb.go_, which is small enough that the essentials can be copied into a Go application. Here are the essentials from the generated code:
```
var _ = proto.Marshal
type DataItem struct {
   OddA  int64   `protobuf:"varint,1,opt,name=oddA" json:"oddA,omitempty"`
   EvenA int64   `protobuf:"varint,2,opt,name=evenA" json:"evenA,omitempty"`
   OddB  int32   `protobuf:"varint,3,opt,name=oddB" json:"oddB,omitempty"`
   EvenB int32   `protobuf:"varint,4,opt,name=evenB" json:"evenB,omitempty"`
   Small float32 `protobuf:"fixed32,5,opt,name=small" json:"small,omitempty"`
   Big   float32 `protobuf:"fixed32,6,opt,name=big" json:"big,omitempty"`
   Short string  `protobuf:"bytes,7,opt,name=short" json:"short,omitempty"`
   Long  string  `protobuf:"bytes,8,opt,name=long" json:"long,omitempty"`
}
func (m *DataItem) Reset()         { *m = DataItem{} }
func (m *DataItem) String() string { return proto.CompactTextString(m) }
func (*DataItem) ProtoMessage()    {}
func init() {}
```
The compiler-generated code has a Go structure **DataItem**, which exports the Go fields—the names are now capitalized—that match the names declared in the Protobuf IDL. The structure fields have standard Go data types: **int32**, **int64**, **float32**, and **string**. At the end of each field line, as a string, is metadata that describes the Protobuf types, gives the numeric tags from the Protobuf IDL document, and provides information about JSON, which is discussed later.
There are also functions; the most important is **proto.Marshal** for serializing an instance of the **DataItem** structure into Protobuf format. The helper functions include **Reset**, which clears a **DataItem** structure, and **String**, which produces a one-line string representation of a **DataItem**.
The metadata that describes Protobuf encoding deserves a closer look before analyzing the Go program in more detail.
### Protobuf encoding
A Protobuf message is structured as a collection of key/value pairs, with the numeric tag as the key and the corresponding field as the value. The field names, such as **oddA** and **small**, are for human readability, but the _protoc_ compiler does use the field names in generating language-specific counterparts. For example, the **oddA** and **small** names in the Protobuf IDL become the fields **OddA** and **Small**, respectively, in the Go structure.
The keys and their values both get encoded, but with an important difference: some numeric values have a fixed-size encoding of 32 or 64 bits, whereas others (including the **message** tags) are _varint_ encoded—the number of bits depends on the integer's absolute value. For example, field tags 1 through 15 require only a single byte to encode, whereas tags 16 through 2047 require two bytes (a tag is packed together with a 3-bit wire type in the encoded key). The _varint_ encoding, similar in spirit (but not in detail) to UTF-8 encoding, favors small integer values over large ones. (For a detailed analysis, see the Protobuf [encoding guide][8].) The upshot is that a Protobuf **message** should have small integer values in fields, if possible, and as few keys as possible, but one key per field is unavoidable.
Table 1 below gives the gist of Protobuf encoding:
**Table 1. Protobuf data types**
Encoding | Sample types | Length
---|---|---
varint | int32, uint32, int64 | Variable length
fixed | fixed32, float, double | Fixed 32-bit or 64-bit length
byte sequence | string, bytes | Sequence length
Integer types that are not explicitly **fixed** are _varint_ encoded; hence, in a _varint_ type such as **uint32** (**u** for unsigned), the number 32 describes the integer's range (in this case, 0 to 2^32 - 1) rather than its bit size, which differs depending on the value. For fixed types such as **fixed32** or **double**, by contrast, the Protobuf encoding requires 32 and 64 bits, respectively. Strings in Protobuf are byte sequences; hence, the size of the field encoding is the length of the byte sequence.
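To make the _varint_ idea concrete, here is a minimal Go sketch (not part of the Protobuf library) of how an unsigned integer is encoded seven bits at a time, with the high bit of each byte marking whether more bytes follow:
```
package main

import "fmt"

// encodeVarint packs an unsigned integer 7 bits per byte;
// the high bit is set on every byte except the last.
func encodeVarint(n uint64) []byte {
   var buf []byte
   for n >= 0x80 {
      buf = append(buf, byte(n)|0x80)
      n >>= 7
   }
   return append(buf, byte(n))
}

func main() {
   fmt.Printf("% x\n", encodeVarint(1))   // 01: one byte
   fmt.Printf("% x\n", encodeVarint(300)) // ac 02: two bytes
}
```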
Another efficiency deserves mention. Recall the earlier example in which a **DataItems** message consists of repeated **DataItem** instances:
```
message DataItems {
  repeated DataItem item = 1;
}
```
The **repeated** means that the **DataItem** instances are _packed_: the collection has a single tag, in this case, 1. A **DataItems** message with repeated **DataItem** instances is thus more efficient than a message with multiple but separate **DataItem** fields, each of which would require a tag of its own.
With this background in mind, let's return to the Go program.
### The dataItem program in detail
The _dataItem_ program creates a **DataItem** instance and populates the fields with randomly generated values of the appropriate types. Go has a **rand** package with functions for generating pseudo-random integer and floating-point values, and my **randString** function generates pseudo-random strings of specified lengths from a character set. The design goal is to have a **DataItem** instance with field values of different types and bit sizes. For example, the **OddA** and **EvenA** values are 64-bit non-negative integer values of odd and even parity, respectively; but the **OddB** and **EvenB** variants are 32 bits in size and hold small integer values between 0 and 2047. The random floating-point values are 32 bits in size, and the strings are 16 (**Short**) and 32 (**Long**) characters in length. Here is the code segment that populates the **DataItem** structure with random values:
```
// variable-length integers
n1 := rand.Int63()        // bigger integer
if (n1 & 1) == 0 { n1++ } // ensure it's odd
...
n3 := rand.Int31() % UpperBound // smaller integer
if (n3 & 1) == 0 { n3++ }       // ensure it's odd
// fixed-length floats
...
f1 := rand.Float32()
f2 := rand.Float32()
...
// strings
str1 := randString(StrShort)
str2 := randString(StrLong)
// the message
dataItem := &DataItem {
   OddA:  n1,
   EvenA: n2,
   OddB:  n3,
   EvenB: n4,
   Big:   f1,
   Small: f2,
   Short: str1,
   Long:  str2,
}
```
Once created and populated with values, the **DataItem** instance is encoded in XML, JSON, and Protobuf, with each encoding written to a local file:
```
func encodeAndserialize(dataItem *DataItem) {
   bytes, _ := xml.MarshalIndent(dataItem, "", " ")  // Xml to dataitem.xml
   ioutil.WriteFile(XmlFile, bytes, 0644)            // 0644 is file access permissions
   bytes, _ = json.MarshalIndent(dataItem, "", " ")  // Json to dataitem.json
   ioutil.WriteFile(JsonFile, bytes, 0644)
   bytes, _ = proto.Marshal(dataItem)                // Protobuf to dataitem.pbuf
   ioutil.WriteFile(PbufFile, bytes, 0644)
}
```
The three serializing functions use the term _marshal_, which is roughly synonymous with _serialize_. As the code indicates, each of the three **Marshal** functions returns an array of bytes, which then are written to a file. (Possible errors are ignored for simplicity.) On a sample run, the file sizes were:
```
dataitem.xml:  262 bytes
dataitem.json: 212 bytes
dataitem.pbuf:  88 bytes
```
The Protobuf encoding is significantly smaller than the other two. The XML and JSON serializations could be reduced slightly in size by eliminating indentation characters, in this case, blanks and newlines.
Below is the _dataitem.json_ file resulting eventually from the **json.MarshalIndent** call, with added comments starting with **##**:
```
{
 "oddA":  4744002665212642479,                ## 64-bit &gt;= 0
 "evenA": 2395006495604861128,                ## ditto
 "oddB":  57,                                 ## 32-bit &gt;= 0 but &lt; 2048
 "evenB": 468,                                ## ditto
 "small": 0.7562016,                          ## 32-bit floating-point
 "big":   0.85202795,                         ## ditto
 "short": "ClH1oDaTtoX$HBN5",                 ## 16 random chars
 "long":  "xId0rD3Cri%3Wt%^QjcFLJgyXBu9^DZI"  ## 32 random chars
}
```
Although the serialized data goes into local files, the same approach would be used to write the data to the output stream of a network connection.
### Testing serialization/deserialization
The Go program next runs an elementary test by deserializing the bytes, which were written earlier to the _dataitem.pbuf_ file, into a **DataItem** instance. Here is the code segment, with the error-checking parts removed:
```
filebytes, err := ioutil.ReadFile(PbufFile) // get the bytes from the file
...
testItem.Reset()                            // clear the DataItem structure
err = proto.Unmarshal(filebytes, testItem)  // deserialize into a DataItem instance
```
The **proto.Unmarshal** function for deserializing Protobuf is the inverse of the **proto.Marshal** function. The original **DataItem** and the deserialized clone are printed to confirm an exact match:
```
Original:
2041519981506242154 3041486079683013705 1192 1879
0.572123 0.326855
boPb#T0O8Xd&Ps5EnSZqDg4Qztvo7IIs 9vH66AiGSQgCDxk&
Deserialized:
2041519981506242154 3041486079683013705 1192 1879
0.572123 0.326855
boPb#T0O8Xd&Ps5EnSZqDg4Qztvo7IIs 9vH66AiGSQgCDxk&
```
### A Protobuf client in Java
The example in Java is to confirm Protobuf's language neutrality. The original IDL file could be used to generate the Java support code, which involves nested classes. To suppress warnings, however, a slight addition can be made. Here is the revision, which specifies a **DataMsg** as the name for the outer class, with the inner class automatically named **DataItem** after the Protobuf message:
```
syntax = "proto3";
package main;
option java_outer_classname = "DataMsg";
message DataItem {
...
```
With this change in place, the _protoc_ compilation is the same as before, except the desired output is now Java rather than Go:
```
% protoc --java_out=. dataitem.proto
```
The resulting source file (in a subdirectory named _main_) is _DataMsg.java_ and about 1,120 lines in length: Java is not terse. Compiling and then running the Java code requires a JAR file with the library support for Protobuf. This file is available in the [Maven repository][9].
With the pieces in place, my test code is relatively short (and available in the ZIP file as _Main.java_):
```
package main;
import java.io.FileInputStream;
public class Main {
   public static void main(String[] args) {
      String path = "dataitem.pbuf";  // from the Go program's serialization
      try {
         DataMsg.DataItem deserial =
           DataMsg.DataItem.newBuilder().mergeFrom(new FileInputStream(path)).build();
         System.out.println(deserial.getOddA()); // 64-bit odd
         System.out.println(deserial.getLong()); // 32-character string
      }
      catch(Exception e) { System.err.println(e); }
    }
}
```
Production-grade testing would be far more thorough, of course, but even this preliminary test confirms the language-neutrality of Protobuf: the _dataitem.pbuf_ file results from the Go program's serialization of a Go **DataItem**, and the bytes in this file are deserialized to produce a **DataItem** instance in Java. The output from the Java test is the same as that from the Go test.
### Wrapping up with the numPairs program
Let's end with an example that highlights Protobuf efficiency but also underscores the cost involved in any encoding technology. Consider this Protobuf IDL file:
```
syntax = "proto3";
package main;
message NumPairs {
  repeated NumPair pair = 1;
}
message NumPair {
  int32 odd = 1;
  int32 even = 2;
}
```
A **NumPair** message consists of two **int32** values together with an integer tag for each field. A **NumPairs** message is a sequence of embedded **NumPair** messages.
The _numPairs_ program in Go (below) creates 2 million **NumPair** instances, with each appended to the **NumPairs** message. This message can be serialized and deserialized in the usual way.
#### Example 2. The numPairs program
```
package main
import (
   "math/rand"
   "time"
   "encoding/xml"
   "encoding/json"
   "io/ioutil"
   "github.com/golang/protobuf/proto"
)
// protoc-generated code: start
var _ = proto.Marshal
type NumPairs struct {
   Pair []*NumPair `protobuf:"bytes,1,rep,name=pair" json:"pair,omitempty"`
}
func (m *NumPairs) Reset()         { *m = NumPairs{} }
func (m *NumPairs) String() string { return proto.CompactTextString(m) }
func (*NumPairs) ProtoMessage()    {}
func (m *NumPairs) GetPair() []*NumPair {
   if m != nil { return m.Pair }
   return nil
}
type NumPair struct {
   Odd  int32 `protobuf:"varint,1,opt,name=odd" json:"odd,omitempty"`
   Even int32 `protobuf:"varint,2,opt,name=even" json:"even,omitempty"`
}
func (m *NumPair) Reset()         { *m = NumPair{} }
func (m *NumPair) String() string { return proto.CompactTextString(m) }
func (*NumPair) ProtoMessage()    {}
func init() {}
// protoc-generated code: finish
var numPairsStruct NumPairs
var numPairs = &numPairsStruct
func encodeAndserialize() {
   // XML encoding
   filename := "./pairs.xml"
   bytes, _ := xml.MarshalIndent(numPairs, "", " ")
   ioutil.WriteFile(filename, bytes, 0644)
   // JSON encoding
   filename = "./pairs.json"
   bytes, _ = json.MarshalIndent(numPairs, "", " ")
   ioutil.WriteFile(filename, bytes, 0644)
   // ProtoBuf encoding
   filename = "./pairs.pbuf"
   bytes, _ = proto.Marshal(numPairs)
   ioutil.WriteFile(filename, bytes, 0644)
}
const HowMany = 200 * 100 * 100 // two million
func main() {
   rand.Seed(time.Now().UnixNano())
   // uncomment the modulus operations to get the more efficient version
   for i := 0; i < HowMany; i++ {
      n1 := rand.Int31() // % 2047
      if (n1 & 1) == 0 { n1++ } // ensure it's odd
      n2 := rand.Int31() // % 2047
      if (n2 & 1) == 1 { n2++ } // ensure it's even
      next := &NumPair {
                 Odd:  n1,
                 Even: n2,
              }
      numPairs.Pair = append(numPairs.Pair, next)
   }
   encodeAndserialize()
}
```
The randomly generated odd and even values in each **NumPair** range from zero to 2 billion and change. In terms of raw rather than encoded data, the integers generated in the Go program amount to 16MB: two integers per **NumPair** for a total of 4 million integers in all, and each value is four bytes in size.
For comparison, the table below has entries for the XML, JSON, and Protobuf encodings of the 2 million **NumPair** instances in the sample **NumsPairs** message. The raw data is included, as well. Because the _numPairs_ program generates random values, output differs across sample runs but is close to the sizes shown in the table.
**Table 2. Encoding overhead for 16MB of integers**
Encoding | File | Byte size | Pbuf/other ratio
---|---|---|---
None | pairs.raw | 16MB | 169%
Protobuf | pairs.pbuf | 27MB | —
JSON | pairs.json | 100MB | 27%
XML | pairs.xml | 126MB | 21%
As expected, Protobuf shines next to XML and JSON. The Protobuf encoding is about a quarter of the JSON one and about a fifth of the XML one. But the raw data make clear that Protobuf incurs the overhead of encoding: the serialized Protobuf message is 11MB larger than the raw data. Any encoding, including Protobuf, involves structuring the data, which unavoidably adds bytes.
Each of the 2 million serialized **NumPair** instances involves _four_ integer values: one apiece for the **Odd** and **Even** fields in the Go structure, and one tag for each of those fields in the Protobuf encoding. As raw rather than encoded data, this would come to 16 bytes per instance, and there are 2 million instances in the sample **NumPairs** message. But the Protobuf tags, like the **int32** values in the **NumPair** fields, use _varint_ encoding and, therefore, vary in byte length; in particular, small integer values (which include the tags, in this case) require fewer than four bytes to encode.
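A varint stores seven payload bits per byte, so a value's encoded length follows directly from its bit length. Here is a minimal Go sketch (separate from the _numPairs_ program, not part of the Protobuf library) that computes it:

```
package main

import (
   "fmt"
   "math/bits"
)

// varintLen returns how many bytes Protobuf's varint encoding
// needs for an unsigned value: seven payload bits per byte.
func varintLen(n uint64) int {
   if n == 0 {
      return 1
   }
   return (bits.Len64(n) + 6) / 7
}

func main() {
   for _, v := range []uint64{1, 127, 128, 2047, 2048, 1 << 31} {
      fmt.Printf("%11d needs %d byte(s)\n", v, varintLen(v))
   }
}
```

Values below 128 fit in one byte and values below 16384 fit in two, which is why capping the fields as described next shrinks the message so dramatically.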
If the _numPairs_ program is revised so that the two **NumPair** fields hold values less than 2048, each of which encodes in one or two bytes, then the Protobuf encoding drops from 27MB to 16MB, the same size as the raw data. The table below summarizes the new encoding sizes from a sample run.
**Table 3. Encoding with 16MB of integers < 2048**
Encoding | File | Byte size | Pbuf/other ratio
---|---|---|---
None | pairs.raw | 16MB | 100%
Protobuf | pairs.pbuf | 16MB | —
JSON | pairs.json | 77MB | 21%
XML | pairs.xml | 103MB | 15%
In summary, the modified _numPairs_ program, with field values less than 2048, encodes each integer value in one or two bytes rather than four. But the Protobuf encoding still requires tags, which add bytes to the Protobuf message; the encoded message therefore lands at roughly the size of the raw data. Protobuf encoding does have a cost in message size, but this cost can be reduced by the _varint_ factor if relatively small integer values, whether in fields or keys, are being encoded.
For moderately sized messages consisting of structured data with mixed types—and relatively small integer values—Protobuf has a clear advantage over options such as XML and JSON. In other cases, the data may not be suited for Protobuf encoding. For example, if two applications need to share a huge set of text records or large integer values, then compression rather than encoding technology may be the way to go.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/protobuf-data-interchange
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://developers.google.com/protocol-buffers/
[3]: https://en.wikipedia.org/wiki/DCE/RPC
[4]: https://en.wikipedia.org/wiki/Interface_description_language
[5]: https://grpc.io/
[6]: http://condor.depaul.edu/mkalin
[7]: https://github.com/protocolbuffers/protobuf
[8]: https://developers.google.com/protocol-buffers/docs/encoding
[9]: https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java

View File

@ -0,0 +1,122 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Perceiving Python programming paradigms)
[#]: via: (https://opensource.com/article/19/10/python-programming-paradigms)
[#]: author: (Jigyasa Grover https://opensource.com/users/jigyasa-grover)
Perceiving Python programming paradigms
======
Python supports imperative, functional, procedural, and object-oriented
programming; here are tips on choosing the right one for a specific use
case.
![A python with a package.][1]
Early each year, TIOBE announces its Programming Language of The Year. When its latest annual [TIOBE index][2] report came out, I was not at all surprised to see [Python again winning the title][3], which was based on capturing the most search engine ranking points (especially on Google, Bing, Yahoo, Wikipedia, Amazon, YouTube, and Baidu) in 2018.
![Python data from TIOBE Index][4]
Adding weight to TIOBE's findings, earlier this year, nearly 90,000 developers took Stack Overflow's annual [Developer Survey][5], which is the largest and most comprehensive survey of people who code around the world. The main takeaway from this year's results was:
> "Python, the fastest-growing major programming language, has risen in the ranks of programming languages in our survey yet again, edging out Java this year and standing as the second most loved language (behind Rust)."
Ever since I started programming and exploring different languages, I have seen admiration for Python soaring high. Since 2003, it has consistently been among the top 10 most popular programming languages. As TIOBE's report stated:
> "It is the most frequently taught first language at universities nowadays, it is number one in the statistical domain, number one in AI programming, number one in scripting and number one in writing system tests. Besides this, Python is also leading in web programming and scientific computing (just to name some other domains). In summary, Python is everywhere."
There are several reasons for Python's rapid rise, bloom, and dominance in multiple domains, including web development, scientific computing, testing, data science, machine learning, and more. The reasons include its readable and maintainable code; extensive support for third-party integrations and libraries; modular, dynamic, and portable structure; flexible programming; learning ease and support; user-friendly data structures; productivity and speed; and, most important, community support. The diverse application of Python is a result of its combined features, which give it an edge over other languages.
But in my opinion, the comparative simplicity of its syntax and the staggering flexibility it offers developers coming from many other languages take the cake. Very few languages can match Python's ability to conform to a developer's coding style rather than forcing them to code in a particular way. Python lets more advanced developers use the style they feel is best suited to solve a particular problem.
Working with Python makes you feel like a snake charmer: you get to take advantage of Python's promise of a non-conforming environment where developers can code in the style best suited to a particular situation while keeping the code readable, testable, and coherent.
## Python programming paradigms
Python supports four main [programming paradigms][6]: imperative, functional, procedural, and object-oriented. Whether you agree that they are valid or even useful, Python strives to make all four available and working. Before we dive in to see which programming paradigm is most suitable for specific use cases, it is a good time to do a quick review of them.
### Imperative programming paradigm
The [imperative programming paradigm][7] uses the imperative mood of natural language to express directions. It executes commands in a step-by-step manner, just like a series of verbal commands. Following the "how-to-solve" approach, it makes direct changes to the state of the program; hence it is also called the stateful programming model. Using the imperative programming paradigm, you can quickly write very simple yet elegant code, and it is super-handy for tasks that involve data manipulation. Owing to its comparatively slower and sequential execution strategy, it cannot be used for complex or parallel computations.
[![Linus Torvalds quote][8]][9]
Consider this example task, where the goal is to take a list of characters and concatenate it to form a string. A way to do it in an imperative programming style would be something like:
```
&gt;&gt;&gt; sample_characters = ['p','y','t','h','o','n']
&gt;&gt;&gt; sample_string = ''
&gt;&gt;&gt; sample_string
''
&gt;&gt;&gt; sample_string = sample_string + sample_characters[0]
&gt;&gt;&gt; sample_string
'p'
&gt;&gt;&gt; sample_string = sample_string + sample_characters[1]
&gt;&gt;&gt; sample_string
'py'
&gt;&gt;&gt; sample_string = sample_string + sample_characters[2]
&gt;&gt;&gt; sample_string
'pyt'
&gt;&gt;&gt; sample_string = sample_string + sample_characters[3]
&gt;&gt;&gt; sample_string
'pyth'
&gt;&gt;&gt; sample_string = sample_string + sample_characters[4]
&gt;&gt;&gt; sample_string
'pytho'
&gt;&gt;&gt; sample_string = sample_string + sample_characters[5]
&gt;&gt;&gt; sample_string
'python'
&gt;&gt;&gt;
```
Here, the variable **sample_string** acts like the state of the program: it changes after each command in the series executes, and it can easily be inspected to track the program's progress. The same can be done with a **for** loop (also considered imperative programming) in a shorter version of the above code:
```
&gt;&gt;&gt; sample_characters = ['p','y','t','h','o','n']
&gt;&gt;&gt; sample_string = ''
&gt;&gt;&gt; sample_string
&gt;&gt;&gt; for c in sample_characters:
...    sample_string = sample_string + c
...    print(sample_string)
...
p
py
pyt
pyth
pytho
python
&gt;&gt;&gt;
```
### Functional programming paradigm
The [functional programming paradigm][10] treats program computation as the evaluation of mathematical functions based on [lambda calculus][11]. Lambda calculus is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It follows the "what-to-solve" approach—that is, it expresses logic without describing its control flow—hence it is also classified as the declarative programming model.
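As a quick illustration of function abstraction and application, here is a single beta-reduction step, written in LaTeX notation:

```
(\lambda x.\ x + 1)\ 41 \;\rightarrow\; 41 + 1 \;\rightarrow\; 42
```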
The functional programming paradigm promotes stateless functions, but it's important to note that Python's implementation of functional programming deviates from standard implementation. Python is said to be an _impure_ functional language because it is possible to maintain state and create side effects if you are not careful. That said, functional programming is handy for parallel processing and is super-efficient for tasks requiring recursion and concurrent execution.
```
&gt;&gt;&gt; sample_characters = ['p','y','t','h','o','n']
&gt;&gt;&gt; import functools
&gt;&gt;&gt; sample_string = functools.reduce(lambda s,c: s + c, sample_characters)
&gt;&gt;&gt; sample_string
'python'
&gt;&gt;&gt;
```
Using the same example, this is the functional way of concatenating a list of characters to form a string. Since the computation happens in a single line, there is no explicit way to obtain the state of the program with **sample_string** and track its progress. The functional implementation of this example is fascinating: it reduces the lines of code and does the job in a single expression, with the help of the **functools** module and the **reduce** method. The three names (**functools**, **reduce**, and **lambda**) are defined as follows:
* **functools** is a module for higher-order functions: it provides functions that act on or return other functions. It encourages writing reusable code, as it makes it easier to replicate existing functions with some arguments already passed and to create new versions of a function in a well-documented manner.
* **reduce** is a method that applies a function of two arguments cumulatively to the items of a sequence, from left to right, so as to reduce the sequence to a single value. For example:
```
>>> sample_list = [1,2,3,4,5]
>>> import functools
>>> sum = functools.reduce(lambda x,y: x + y, sample_list)
>>> sum
15
>>> ((((1+2)+3)+4)+5)
15
>>>
```
* **lambda functions** are small, anonymous (i.e., nameless) functions that can take any number of arguments but return only one value. They are useful when used as an argument to another function.

View File

@ -0,0 +1,241 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (14 SCP Command Examples to Securely Transfer Files in Linux)
[#]: via: (https://www.linuxtechi.com/scp-command-examples-in-linux/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
14 SCP Command Examples to Securely Transfer Files in Linux
======
**SCP** (Secure Copy) is a command-line tool on Linux and Unix-like systems for transferring files and directories securely between systems over the network. When we use the scp command to copy files and directories from our local system to a remote system, it makes an **ssh connection** to the remote system in the background. In other words, scp uses the same **SSH security mechanism**; it needs either a password or keys for authentication.
![scp-command-examples-linux][2]
In this tutorial we will discuss 14 useful Linux scp command examples.
**Syntax of scp command:**
### scp &lt;options&gt; &lt;files_or_directories&gt; [root@linuxtechi][3]_host:/&lt;folder&gt;
### scp &lt;options&gt; [root@linuxtechi][3]_host:/files   &lt;folder_local_system&gt;
The first form of the scp command copies files or directories from the local system to a specific folder on the target host.
The second form copies files from the target host to the local system.
Some of the most widely used scp options are listed below:
* -C : enable compression
* -i : identity file or private key
* -l : limit the bandwidth while copying
* -P : ssh port number of the target host
* -p : preserve permissions, modes, and access times of files while copying
* -q : suppress SSH warning messages
* -r : copy files and directories recursively
* -v : verbose output
Let's jump into the examples now!
###### Example 1: Copy a file from the local system to a remote system using scp
Let's assume we want to copy a jdk rpm package from our local Linux system to a remote system (172.20.10.8) using scp. Use the following command:
```
[root@linuxtechi ~]$ scp jdk-linux-x64_bin.rpm root@linuxtechi:/opt
root@linuxtechi's password:
jdk-linux-x64_bin.rpm 100% 10MB 27.1MB/s 00:00
[root@linuxtechi ~]$
```
The above command copies the jdk rpm package to the /opt folder on the remote system.
###### Example 2: Copy a file from a remote system to the local system using scp
Let's suppose we want to copy a file from a remote system to the /tmp folder on our local system; execute the following scp command:
```
[root@linuxtechi ~]$ scp root@linuxtechi:/root/Technical-Doc-RHS.odt /tmp
root@linuxtechi's password:
Technical-Doc-RHS.odt 100% 1109KB 31.8MB/s 00:00
[root@linuxtechi ~]$ ls -l /tmp/Technical-Doc-RHS.odt
-rwx------. 1 pkumar pkumar 1135521 Oct 19 11:12 /tmp/Technical-Doc-RHS.odt
[root@linuxtechi ~]$
```
###### Example 3: Verbose output while transferring files using scp (-v)
The -v option enables verbose output, which shows exactly what is happening in the background. This is very useful for **debugging connection**, **authentication**, and **configuration problems**.
```
root@linuxtechi ~]$ scp -v jdk-linux-x64_bin.rpm root@linuxtechi:/opt
Executing: program /usr/bin/ssh host 172.20.10.8, user root, command scp -v -t /opt
OpenSSH_7.8p1, OpenSSL 1.1.1 FIPS 11 Sep 2018
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf
debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config
debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 8: Applying options for *
debug1: Connecting to 172.20.10.8 [172.20.10.8] port 22.
debug1: Connection established.
…………
debug1: Next authentication method: password
root@linuxtechi's password:
```
###### Example 4: Transfer multiple files to a remote system
Multiple files can be copied to a remote system in one go; in the scp command, list the files separated by spaces, as shown below:
```
[root@linuxtechi ~]$ scp install.txt index.html jdk-linux-x64_bin.rpm root@linuxtechi:/mnt
root@linuxtechi's password:
install.txt 100% 0 0.0KB/s 00:00
index.html 100% 85KB 7.2MB/s 00:00
jdk-linux-x64_bin.rpm 100% 10MB 25.3MB/s 00:00
[root@linuxtechi ~]$
```
###### Example 5: Transfer files between two remote hosts
Using the scp command, we can copy files and directories between two remote hosts. Suppose we have a local Linux system that can connect to two remote Linux systems; from that local system we can use scp to copy files between the two.
Syntax:
```
# scp <user_name>@<remote_host1>:/<files_to_transfer> <user_name>@<remote_host2>:/<folder>
```
An example is shown below:
```
# scp root@linuxtechi:~/backup-Oct.zip root@linuxtechi:/tmp
# ssh root@linuxtechi "ls -l /tmp/backup-Oct.zip"
-rwx------. 1 root root 747438080 Oct 19 12:02 /tmp/backup-Oct.zip
```
###### Example 6: Copy files and directories recursively (-r)
Use the -r option in the scp command to recursively copy an entire directory from one system to another, as shown below:
```
[root@linuxtechi ~]$ scp -r Downloads root@linuxtechi:/opt
```
Use the command below to verify whether the Downloads folder was copied to the remote system:
```
[root@linuxtechi ~]$ ssh root@linuxtechi "ls -ld /opt/Downloads"
drwxr-xr-x. 2 root root 75 Oct 19 12:10 /opt/Downloads
[root@linuxtechi ~]$
```
###### Example 7: Increase transfer speed by enabling compression (-C)
The -C option increases transfer speed by enabling compression; data is automatically compressed at the source and decompressed at the destination host.
```
root@linuxtechi ~]$ scp -r -C Downloads root@linuxtechi:/mnt
```
In the above example, we are transferring the Downloads directory with compression enabled.
###### Example 8: Limit bandwidth while copying (-l)
Use the -l option in the scp command to cap bandwidth usage while copying. The bandwidth is specified in Kbit/s, as shown below:
```
[root@linuxtechi ~]$ scp -l 500 jdk-linux-x64_bin.rpm root@linuxtechi:/var
```
###### Example 9: Specify a different ssh port for scp (-P)
In some scenarios, the ssh port is changed on the destination host; in that case, use the -P option to specify the ssh port number.
```
[root@linuxtechi ~]$ scp -P 2022 jdk-linux-x64_bin.rpm root@linuxtechi:/var
```
In the above example, the ssh port of the remote host is “2022”.
###### Example 10: Preserve permissions, modes, and access times while copying (-p)
Use the “-p” option in the scp command to preserve permissions, access times, and modes while copying from source to destination:
```
[root@linuxtechi ~]$ scp -p jdk-linux-x64_bin.rpm root@linuxtechi:/var/tmp
jdk-linux-x64_bin.rpm 100% 10MB 13.5MB/s 00:00
[root@linuxtechi ~]$
```
###### Example 11: Transfer files in quiet mode (-q)
Use the -q option in the scp command to suppress the transfer progress meter and the warning and diagnostic messages of ssh. An example is shown below:
```
[root@linuxtechi ~]$ scp -q -r Downloads root@linuxtechi:/var/tmp
[root@linuxtechi ~]$
```
###### Example 12: Use an identity file with scp (-i)
In most Linux environments, key-based authentication is preferred. To specify the identity file or private key file, use the -i option, as shown below:
```
[root@linuxtechi ~]$ scp -i my_key.pem -r Downloads root@linuxtechi:/root
```
In the above example, “my_key.pem” is the identity file, i.e. the private key file.
###### Example 13: Use a different ssh_config file with scp (-F)
There are scenarios where you use different networks to connect to Linux systems; perhaps some networks sit behind proxy servers, and in that case we need a different **ssh_config** file.
A different ssh_config file is specified with the -F option, as shown below:
```
[root@linuxtechi ~]$ scp -F /home/pkumar/new_ssh_config -r Downloads root@linuxtechi:/root
root@linuxtechi's password:
jdk-linux-x64_bin.rpm 100% 10MB 16.6MB/s 00:00
backup-Oct.zip 100% 713MB 41.9MB/s 00:17
index.html 100% 85KB 6.6MB/s 00:00
[root@linuxtechi ~]$
```
###### Example 14: Use a different cipher with scp (-c)
By default, scp uses the AES-128 cipher to encrypt files. To use another cipher, pass the -c option followed by the cipher name.
Suppose we want to use the 3des-cbc cipher while transferring files; run the following scp command:
```
[root@linuxtechi ~]# scp -c 3des-cbc -r Downloads root@linuxtechi:/root
```
Use the command below to list the ciphers supported by ssh and scp:
```
[root@linuxtechi ~]# ssh -Q cipher localhost | paste -d , -s -
3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
[root@linuxtechi ~]#
```
That's all from this tutorial. To get more details about the scp command, refer to its man page. Please share your feedback and comments in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/scp-command-examples-in-linux/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/scp-command-examples-linux.jpg
[3]: https://www.linuxtechi.com/cdn-cgi/l/email-protection

View File

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo)
How DevOps professionals can become security champions
======
Breaking down silos and becoming a champion for security will help you, your career, and your organization.
![A lock on the side of a building][1]
Security is a misunderstood element of DevOps. Some people consider it outside DevOps' scope, while others consider it important enough (and overlooked enough) to recommend moving to [DevSecOps][2] instead. No matter which position you agree with, cybersecurity clearly affects every one of us.
Each year, the [statistics on hacking][3] become more alarming. For example, there's a hack attempt every 39 seconds, which can lead to the theft of records, identities, and proprietary projects you've written for your company. It may take months (or possibly forever) for your security team to discover who was behind the hack, what they wanted, where they were, and when they broke in.
What are ops professionals to do about these dire problems? I say it's time for us to become part of the solution by becoming security champions.
### Silos and turf wars
In my years of working side by side with my local IT security (ITSEC) teams, I've noticed a few things. A big one is that tension between security teams and DevOps is very common. That tension almost always stems from the security team's efforts to protect systems and guard against vulnerabilities (such as setting access controls or disabling certain things), which interrupt DevOps' work and block them from deploying applications quickly.
You've seen it, I've seen it, and everyone you meet in the field has at least one story about it. A small pile of grudges eventually burns down the bridge of trust, which either takes a while to repair or kicks off a small turf war between the two groups, an outcome that makes DevOps even harder to implement.
### A new perspective
To break down these silos and end the turf wars, I picked at least one person on each security team to talk with, and I learned the ins and outs of our organization's daily security operations. I started doing this out of curiosity, but I've kept doing it because it always gives me valuable new perspectives. For example, I learned that for every deployment stopped over failed security, the security team was frantically trying to fix ten other problems it had spotted. Their brusque, pointed reactions came from having limited time to fix those problems before they turned into big ones.
Consider the vast amount of knowledge it takes to discover, identify, and undo what has been done, or to figure out what the DevOps team is doing (without any context) and then replicate and test it. All of this usually falls to severely understaffed security teams.
This is your security team's daily life, and your DevOps team doesn't see it. ITSEC's day-to-day work means overtime and overexertion to ensure that the company, its teams, and everyone working on those teams can work securely.
### Ways to become a security champion
These are ways you can help your security team once you become its champion. It means that, for everything you do, you have to look carefully and thoroughly at every way someone could log into it and what they could get from it.
Helping your security team is helping yourself. Add tools to your workflow that combine the work you know needs doing with the work they know needs doing. Start small: for example, read up on Common Vulnerabilities and Exposures (CVEs) and add a scanning module to your [CI/CD][4] pipeline. For everything you write, there is an open source scanning tool, and adding small open source tools like the ones listed below can make a project better in the long run.
**Container scanning tools:**
* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]
**Code scanning tools:**
* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]
**Kubernetes security tools:**
* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
### Keep your DevOps attitude
If your role touches DevOps, then learning new technologies and using them to create new things is part of the job. Security is no different. Here's my list of ways to stay current on security in DevOps.
* Read one security-related article a week that relates to the area you work in.
* Check the official [CVE][15] website weekly for newly published vulnerabilities.
* Try a hackathon. Some companies run one every month; if yours doesn't and you'd like to learn more, check out the [Beginner Hack 1.0][16] site.
* At least once a year, attend a security conference with members of your security team to see things from their perspective.
### Becoming a champion is about getting better
You should become a champion for your security, and here are a few reasons why. First, it grows your knowledge and advances your career. Second, it helps other teams, builds new relationships, and breaks down the silos that harm your organization. Building bridges across your whole organization has many benefits, including setting an example of how teams should communicate and encouraging people to work together. You'll also promote knowledge sharing across the organization and give everyone a fresh opportunity for better internal collaboration on security.
Overall, becoming a champion for cybersecurity makes you a champion for your whole organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/devops-security-champions
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/

View File

@ -1,34 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: (way-ww)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Run the Top Command in Batch Mode)
[#]: via: (https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
[#]: collector: "lujun9972"
[#]: translator: "way-ww"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "How to Run the Top Command in Batch Mode"
[#]: via: "https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
How to Run the Top Command in Batch Mode
如何在批处理模式下运行 Top 命令
======
The **[Linux Top command][1]** is the best and most well known command that everyone uses to **[monitor Linux system performance][2]**.
**[Top 命令][1]** 是每个人都在使用的用于 **[监控 Linux 系统性能][2]** 的最好的命令。
You probably already know most of the options available, except for a few options, and if Im not wrong, “batch more” is one of the options.
除了很少的几个操作, 你可能已经知道 top 命令的绝大部分操作, 如果我没错的话, 批处理模式就是其中之一。
Most script writer and developers know this because this option is mainly used when writing the script.
大部分的脚本编写者和开发人员都知道这个, 因为这个操作主要就是用来编写脚本。
If youre not sure about this, dont worry were here to explain this.
如果你不了解这个, 不用担心,我们将在这里介绍它。
### What is “Batch Mode” in the Top Command
### 什么是 Top 命令的批处理模式
The “Batch Mode” option allows you to send top command output to other programs or to a file.
批处理模式允许你将 top 命令的输出发送至其他程序或者文件中。
In this mode, top will not accept input and runs until the iterations limit youve set with the “-n” command-line option.
在这个模式中, top 命令将不会接收输入并且持续运行直到迭代次数达到你用 “-n” 选项指定的次数为止。
If you want to fix any performance issues on the Linux server, you need to **[understand the top command output][3]** correctly.
如果你想解决 Linux 服务器上的任何性能问题, 你需要正确的 **[理解 top 命令的输出][3]** 。
### 1) How to Run the Top Command in Batch Mode
### 1) 如何在批处理模式下运行 top 命令
By default, the top command sort the results based on CPU usage, so when you run the below top command in batch mode, it does the same and prints the first 35 lines.
默认地, top 命令按照 CPU 的使用率来排序输出结果, 所以当你在批处理模式中运行以下命令时, 它会执行同样的操作并打印前 35 行。
```
# top -bc | head -35
@ -70,9 +70,9 @@ PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
46 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [kmpath_rdacd]
```
### 2) How to Run the Top Command in Batch Mode and Sort the Output Based on Memory Usage
### 2) 如何在批处理模式下运行 top 命令并按内存使用率排序结果
Run the below top command to sort the results based on memory usage in batch mode.
在批处理模式中运行以下命令按内存使用率对结果进行排序
```
# top -bc -o +%MEM | head -n 20
@ -99,19 +99,19 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2475984 avail Mem
8632 nobody 20 0 256844 25744 2216 S 0.0 0.7 0:00.03 /usr/sbin/httpd -k start
```
**Details of the above command:**
**上面命令的详细信息:**
* **-b :** Batch mode operation
* **-c :** To print the absolute path of the running process
* **-o :** To specify fields for sorting processes
* **head :** Output the first part of files
* **-n :** To print the first “n” lines
* **-b :** 批处理模式选项
* **-c :** 打印运行中的进程的绝对路径
* **-o :** 指定进行排序的字段
* **head :** 输出文件的第一部分
* **-n :** 打印前 n 行
### 3) How to Run the Top Command in Batch Mode and Sort the Output Based on a Specific User Process
### 3) 如何在批处理模式下运行 top 命令并按照指定的用户进程对结果进行排序
If you want to sort results based on a specific user, run the below top command.
如果你想要按照指定用户进程对结果进行排序请运行以下命令
```
# top -bc -u mysql | head -n 10
@ -126,13 +126,13 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2649412 avail Mem
18105 mysql 20 0 1453900 156888 8816 S 0.0 4.0 2:16.42 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
```
### 4) How to Run the Top Command in Batch Mode and Sort the Output Based on the Process Age
### 4) 如何在批处理模式下运行 top 命令并按照处理时间进行排序
Use the below top command to sort the results based on the age of the process in batch mode. It shows the total CPU time the task has used since it started.
在批处理模式中使用以下 top 命令按照处理时间对结果进行排序。 这展示了任务从启动以来已使用的总 CPU 时间
But if you want to check how long a process has been running on Linux, go to the following article.
但是如果你想要检查一个进程在 Linux 上运行了多长时间请看接下来的文章。
* **[Five Ways to Check How Long a Process Has Been Running in Linux][4]**
* **[检查 Linux 中进程运行时间的五种方法][4]**
@ -161,9 +161,9 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2440332 avail Mem
342 root 20 0 39472 2940 2752 S 0.0 0.1 1:18.17 /usr/lib/systemd/systemd-journald
```
### 5) How to Run the Top Command in Batch Mode and Save the Output to a File
### 5) 如何在批处理模式下运行 top 命令并将结果保存到文件中
If you want to share the output of the top command to someone for troubleshooting purposes, redirect the output to a file using the following command.
如果出于解决问题的目的, 你想要和别人分享 top 命令的输出, 请使用以下命令重定向输出到文件中
```
# top -bc | head -35 > top-report.txt
@ -207,11 +207,11 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2659084 avail Mem
36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 [crypto]
```
### How to Sort Output Based on Specific Fields
### 如何按照指定字段对结果进行排序
In the latest version of the top command release, press the **“f”** key to sort the fields via the field letter.
在 top 命令的最新版本中, 按下 **“f”** 键进入字段管理界面。
To sort with a new field, use the **“up/down”** arrow to select the correct selection, and then press **“s”** to sort it. Finally press **“q”** to exit from this window.
要使用新字段进行排序, 请使用 **“up/down”** 箭头选择正确的选项, 然后再按下 **“s”** 键进行排序。 最后按 **“q”** 键退出此窗口。
```
Fields Management for window 1:Def, whose current sort field is %CPU
@ -269,9 +269,9 @@ Fields Management for window 1:Def, whose current sort field is %CPU
nsUSER = USER namespace Inode
```
For older version of the top command, press the **“shift+f”** or **“shift+o”** key to sort the fields via the field letter.
对 top 命令的旧版本, 请按 **“shift+f”** 或 **“shift+o”** 键进入字段管理界面进行排序。
To sort with a new field, select the corresponding sort **field letter**, and then press **“Enter”** to sort it.
要使用新字段进行排序, 请选择相应的排序字段字母, 然后按下 **“Enter”** 排序。
```
Current Sort Field: N for window 1:Def
@ -322,7 +322,7 @@ via: https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[way-ww](https://github.com/way-ww)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Object-Oriented Programming and Essential State)
[#]: via: (https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Object-Oriented Programming and Essential State
======
Back in 2015, Brian Will wrote a provocative blog post: [Object-Oriented Programming: A Disaster Story][1]. He followed it up with a video called [Object-Oriented Programming is Bad][2], which goes into much more detail. I recommend setting aside some time to watch the video, but here's my one-paragraph summary:
The Platonic ideal of OOP is a sea of decoupled objects that send stateless messages to one another. No one really makes software that way, and Brian points out that it doesn't even make sense: objects need to know which object to send a message to, which means they need references to one another. Most of the video is about people trying to couple objects for control flow while pretending they're decoupled by design.
Overall, his ideas resonate with my own experience of OOP: objects are fine, but I've never been satisfied with _object-oriented_ control flow in a program, and trying to make code "properly" object-oriented always seems to create unnecessary complexity.
There's one thing I don't think he fully explains. He says outright that "encapsulation does not work," but he qualifies it with the footnote "at fine-grained levels of code," and goes on to concede that objects can sometimes work, and that encapsulation can work at the level of libraries and files. But he doesn't explain exactly why it works sometimes and not others, or how and where to draw the line. Some might say that weakens his "OOP is bad" claim, but I think his point stands, and the line can be drawn between essential state and accidental state.
If you haven't heard the terms "essential" and "accidental" used this way before, you should read Fred Brooks' classic essay [No Silver Bullet][3]. (He has written many great essays about building software systems, by the way.) I've written before about [essential and accidental complexity][4], but here's a short summary: software is complex. Partly that's because we want software to solve messy real-world problems, and we call that "essential complexity." "Accidental complexity" is all the other complexity, which exists because we're trying to use silicon and metal to solve problems that have nothing to do with silicon and metal. For most programs, code for managing memory, or moving data between memory and disk, or parsing text formats, is "accidental complexity."
Suppose you're building a chat application that supports multiple channels. Messages can arrive in any channel at any time. Some channels are especially interesting, and the user wants to be notified when a new message comes in. Other channels are muted: the messages are stored, but the user isn't disturbed. You need to track the user's preferred settings for each channel.
One way to do that is with a map (also known as a hash table, dictionary, or associative array) between channels and channel settings. Note that a map is the kind of abstract data type (ADT) that Brian Will says can work as an object.
If we had a debugger and looked at the map object in memory, what would we see? We'd find the channel IDs and the channel settings data, of course (or at least pointers to them). But we'd also find other data. If the map is implemented with a red-black tree, we'd see tree node objects with red/black labels and pointers to other nodes. The channel-related data is essential state, and the tree nodes are accidental state. Notice something, though: the map effectively encapsulates its accidental state; you could replace it with another map implemented with an AVL tree, and your chat program would still work. On the other hand, the map doesn't encapsulate the essential state (merely accessing data through `get()` and `set()` methods isn't encapsulation). In fact, the map is as agnostic as possible about the essential state: you could use basically the same map data structure to store other mappings that have nothing to do with channels or notifications.
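Here's a minimal Go sketch of that idea (the type and channel names are illustrative, not from Brian Will's talk):

```
package main

import "fmt"

// ChannelSettings is essential state: it exists because of the
// chat problem, not because of any implementation choice.
type ChannelSettings struct {
	Muted bool
}

func main() {
	// The built-in map encapsulates its accidental state (buckets,
	// hashing); a tree-backed map could be swapped in without
	// touching any of the code below.
	settings := map[string]ChannelSettings{
		"general": {Muted: false},
		"random":  {Muted: true},
	}

	// The essential state is not hidden: get/set access is not encapsulation.
	if settings["random"].Muted {
		fmt.Println("store the message, but don't notify the user")
	}
}
```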
That's why the map ADT is so successful: it encapsulates accidental state and is decoupled from essential state. If you think about it, the encapsulation problems Brian describes are problems with trying to encapsulate essential state, and the benefits others describe are the benefits of encapsulating accidental state.
It's quite hard to make an entire software system meet this ideal, but scaled up, I think it looks something like this:
* No global, mutable state
* Accidental state encapsulated (in objects, modules, or whatever)
* Stateless accidental complexity encapsulated in standalone functions, decoupled from data
* Inputs and outputs made explicit using tricks such as dependency injection
* Components fully owned and controlled from easily identifiable places
Some of this goes against instincts I've had for a long time. For example, if you have a function that runs a database query, the interface looks simpler if the database connection handling is hidden inside the function and the only parameters are the query parameters. But when you build a software system out of functions like this, coordinating use of the database actually becomes more complex. Components aren't just doing things their own way; they're also trying to hide what they're doing as "implementation details." The fact that a database query needs a database connection was never an implementation detail. If something can't be hidden, it's saner to make it explicit.
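A sketch of that last point in Go, with hypothetical names (`Querier`, `getUserName`):

```
package main

import "fmt"

// Querier is the explicit dependency: the caller decides which
// connection (or fake) to pass in, so database use is coordinated
// from one easily identifiable place.
type Querier interface {
	QueryName(id int) (string, error)
}

// getUserName makes the database dependency part of the interface
// instead of hiding it as an "implementation detail".
func getUserName(db Querier, id int) (string, error) {
	return db.QueryName(id)
}

// fakeDB stands in for a real connection in this sketch.
type fakeDB struct{}

func (fakeDB) QueryName(id int) (string, error) { return "alice", nil }

func main() {
	name, _ := getUserName(fakeDB{}, 7)
	fmt.Println(name) // alice
}
```

The dependency now shows up in the signature, so whoever owns the connection controls it from one place, and a fake can be swapped in for tests.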
I'm wary of framing object-oriented programming and functional programming as two poles, but I think it's interesting to look at OOP from the far end of FP: OOP tries to encapsulate things, including essential complexity that can't be encapsulated, while pure functional programming tends to make things explicit, including some accidental complexity. Most of the time that's fine, but sometimes (such as when [building self-referential data structures in a purely functional language][5]) the design serves functional purity more than practicality (which is why [Haskell includes some "escape hatches"][6]). I've written before about the middle ground of so-called ["weak purity"][7].
Brian finds that encapsulation works at larger scales for a couple of reasons. One is that larger components are simply more likely to contain accidental state. Another is that "accidental" is relative to the problem you're solving. From the chat app user's point of view, "accidental complexity" is anything unrelated to messages, channels, users, and so on. But as you break the problem into subproblems, more things become essential. For example, the mapping between channel names and channel IDs is accidental complexity when solving the "build a chat app" problem, but essential complexity when solving the "implement the `getChannelIdByName()` function" subproblem. So encapsulation works less well for subcomponents than for parent components.
By the way, at the end of the video, Brian Will wonders whether any language supports anonymous functions that _can't_ access the scope around them. [D][8] does. Anonymous lambdas in D are normally closures, but you can also declare anonymous stateless functions if that's what you want:
```
import std.stdio;
void main()
{
int x = 41;
// Value from immediately executed lambda
auto v1 = () {
return x + 1;
}();
writeln(v1);
// Same thing
auto v2 = delegate() {
return x + 1;
}();
writeln(v2);
// Plain functions aren't closures
auto v3 = function() {
// Can't access x
// Can't access any mutable global state either if also marked pure
return 42;
}();
writeln(v3);
}
```
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/10/13/oop_and_essential_state.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab
[2]: https://www.youtube.com/watch?v=QM1iUe6IofM
[3]: http://www.cs.nott.ac.uk/~pszcah/G51ISS/Documents/NoSilverBullet.html
[4]: https://theartofmachinery.com/2017/06/25/compression_complexity_and_software.html
[5]: https://wiki.haskell.org/Tying_the_Knot
[6]: https://en.wikibooks.org/wiki/Haskell/Mutable_objects#The_ST_monad
[7]: https://theartofmachinery.com/2016/03/28/dirtying_pure_functions_can_be_useful.html
[8]: https://dlang.org

View File

@ -7,32 +7,32 @@
[#]: via: (https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Bash Script to Delete Files/Folders Older Than “X” Days in Linux
在 Linux 中使用 Bash 脚本删除早于 “X” 天的文件/文件夹
======
**[Disk Usage][1]** Monitoring tools are capable of alerting us when a given threshold is reached.
**[磁盘使用率][1]**监控工具能够在达到给定阈值时提醒我们。
But they dont have the ingenuity to fix the **[disk usage][2]** problem on their own.
但它们无法自行解决**[磁盘使用率][2]**问题。
Manual intervention is needed to solve the problem.
需要手动干预才能解决该问题。
But if you want to fully automate this kind of activity, what you will do.
如果你想完全自动化此类操作,你会做什么。
Yes, it can be done using the bash script.
是的,可以使用 bash 脚本来完成。
This script prevents alerts from **[monitoring tool][3]** because we delete old log files before filling the disk space.
该脚本可防止来自**[监控工具][3]**的警报,因为我们会在填满磁盘空间之前删除旧的日志文件。
We have added many useful shell scripts in the past. If you want to check them out, go to the link below.
我们过去做了很多 shell 脚本。如果要查看,请进入下面的链接。
* **[How to automate day to day activities using shell scripts?][4]**
* **[如何使用 shell 脚本自动化日常活动?][4]**
Ive added two bash scripts to this article, which helps clear up old logs.
我在本文中添加了两个 bash 脚本,它们有助于清除旧日志。
### 1) Bash Script to Delete a Folders Older Than “X” Days in Linux
### 1)在 Linux 中删除早于 “X” 天的文件夹的 Bash 脚本
We have a folder named **“/var/log/app/”** that contains 15 days of logs and we are going to delete 10 days old folders.
我们有一个名为 **“/var/log/app/”** 的文件夹,其中包含 15 天的日志,我们将删除早于 10 天的文件夹。
```
$ ls -lh /var/log/app/
@ -54,9 +54,9 @@ drwxrw-rw- 3 root root 24K Oct 14 23:52 app_log.14
drwxrw-rw- 3 root root 24K Oct 15 23:52 app_log.15
```
This script will delete 10 days old folders and send folder list via mail.
该脚本将删除早于 10 天的文件夹,并通过邮件发送文件夹列表。
You can change the value **“-mtime X”** depending on your requirement. Also, replace your email id instead of us.
你可以根据需要修改 **“-mtime X”** 的值。另外,请替换你的电子邮箱,而不是用我们的。
```
# /opt/script/delete-old-folders.sh
@ -81,13 +81,13 @@ rm $MESSAGE /tmp/folder.out
fi
```
Set an executable permission to **“delete-old-folders.sh”** file.
**“delete-old-folders.sh”** 设置可执行权限。
```
# chmod +x /opt/script/delete-old-folders.sh
```
Finally add a **[cronjob][5]** to automate this. It runs daily at 7AM.
最后添加一个 [cronjob][5] 自动化此任务。它于每天早上 7 点运行。
```
# crontab -e
@ -95,7 +95,7 @@ Finally add a **[cronjob][5]** to automate this. It runs daily at 7AM.
0 7 * * * /bin/bash /opt/script/delete-old-folders.sh
```
You will get an output like the one below.
你将看到类似下面的输出。
```
Application log folders are deleted older than 20 days
@ -107,15 +107,15 @@ Oct 14 /var/log/app/app_log.14
Oct 15 /var/log/app/app_log.15
```
### 2) Bash Script to Delete a Files Older Than “X” Days in Linux
### 2)在 Linux 中删除早于 “X” 天的文件的 Bash 脚本
We have a folder named **“/var/log/apache/”** that contains 15 days of logs and we are going to delete 10 days old files.
我们有一个名为 **“/var/log/apache/”** 的文件夹其中包含15天的日志我们将删除 10 天前的文件。
The articles below are related to this topic, so you may be interested to read.
以下文章与该主题相关,因此你可能有兴趣阅读。
* **[How To Find And Delete Files Older Than “X” Days And “X” Hours In Linux?][6]**
* **[How to Find Recently Modified Files/Folders in Linux][7]**
* **[How To Automatically Delete Or Clean Up /tmp Folder Contents In Linux?][8]**
* **[如何在 Linux 中查找和删除早于 “X” 天和 “X” 小时的文件?][6]**
* **[如何在 Linux 中查找最近修改的文件/文件夹][7]**
* **[如何在 Linux 中自动删除或清理 /tmp 文件夹内容?][8]**
@ -139,9 +139,9 @@ The articles below are related to this topic, so you may be interested to read.
-rw-rw-rw- 3 root root 24K Oct 15 23:52 2daygeek_access.15
```
This script will delete 10 days old files and send folder list via mail.
该脚本将删除 10 天前的文件并通过邮件发送文件夹列表。
You can change the value **“-mtime X”** depending on your requirement. Also, replace your email id instead of us.
你可以根据需要修改 **“-mtime X”** 的值。另外,请替换你的电子邮箱,而不是用我们的。
```
# /opt/script/delete-old-files.sh
@ -166,13 +166,13 @@ rm $MESSAGE /tmp/file.out
fi
```
Set an executable permission to **“delete-old-files.sh”** file.
**“delete-old-files.sh”** 设置可执行权限。
```
# chmod +x /opt/script/delete-old-files.sh
```
Finally add a **[cronjob][5]** to automate this. It runs daily at 7AM.
最后添加一个 [cronjob][5] 自动化此任务。它于每天早上 7 点运行。
```
# crontab -e
@ -180,7 +180,7 @@ Finally add a **[cronjob][5]** to automate this. It runs daily at 7AM.
0 7 * * * /bin/bash /opt/script/delete-old-files.sh
```
You will get an output like the one below.
你将看到类似下面的输出。
```
Apache Access log files are deleted older than 20 days
@ -198,7 +198,7 @@ via: https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-d
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出