如何在 Linux 系统查询机器最近重启时间
======

在你的 Linux 或类 UNIX 系统中,你是如何查询系统上次重新启动的日期和时间?怎样显示系统关机的日期和时间?`last` 命令不仅可以按照时间从近到远的顺序列出该会话的特定用户、终端和主机名,而且还可以列出指定日期和时间登录的用户。输出到终端的每一行都包括用户名、会话终端、主机名、会话开始和结束的时间、会话持续的时间。要查看 Linux 或类 UNIX 系统重启和关机的时间和日期,可以使用下面的命令:

- `last` 命令
- `who` 命令

### 使用 who 命令来查看系统重新启动的时间/日期

你需要在终端使用 [who][1] 命令来打印有哪些人登录了系统,`who` 命令同时也会显示上次系统启动的时间。使用 `last` 命令来查看系统重启和关机的日期和时间,运行:

```
$ who -b
```

示例输出:

```
system boot 2017-06-20 17:41
```

使用 `last` 命令来查询最近登录到系统的用户和系统重启的时间和日期。输入:

```
$ last reboot | less
```

或者,尝试输入:

```
$ last reboot | head -1
```

示例输出:

```
reboot   system boot  4.9.0-3-amd64    Sat Jul 15 19:19   still running
```

`last` 命令通过查看文件 `/var/log/wtmp` 来显示自 wtmp 文件被创建以来的所有登录(和登出)的用户。每当系统重新启动时,伪用户 `reboot` 就会登录一次。因此,`last reboot` 命令将会显示自该日志文件被创建以来的所有重启信息。
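顺带一提,如果只想从 `last reboot` 的输出中提取启动时间,可以用 `awk` 做简单的后处理。下面是一个演示片段(以前文的示例输出为输入,字段位置以你系统上的实际输出为准):

```shell
# 从 last reboot 风格的输出行中取出第 5 到第 8 个字段(即时间部分)
printf 'reboot   system boot  4.9.0-3-amd64    Sat Jul 15 19:19   still running\n' |
  awk '/^reboot/ {print $5, $6, $7, $8}'
# 输出:Sat Jul 15 19:19
```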
### 查看系统上次关机的时间和日期

可以使用下面的命令来显示上次关机的日期和时间:

```
$ last -x | grep shutdown | head -1
```

示例输出:

```
shutdown system down  2.6.15.4         Sun Apr 30 13:31 - 15:08  (01:37)
```

命令中,

* `-x`:显示系统关机和运行等级改变信息

这里是 `last` 命令的其它的一些选项:

```
$ last
$ last -x
$ last -x reboot
$ last -x shutdown
```

示例输出:

![Fig.01: How to view last Linux System Reboot Date/Time ][3]

评论区的读者建议的另一个命令如下:

```
$ uptime -s
```

示例输出:

在终端输入下面的命令:

```
$ last reboot
```

在 OS X 上示例输出结果如下:

查看关机日期和时间,输入:

```
$ last shutdown
```

示例输出:

### 如何查看是谁重启和关闭机器?

你需要[启用 psacct 服务然后运行下面的命令][4]来查看执行过的命令(包括用户名),在终端输入 [lastcomm][5] 命令查看信息:

```
# lastcomm userNameHere
# lastcomm | more
# lastcomm reboot
# lastcomm shutdown
## 或者查看重启和关机时间
# lastcomm | egrep 'reboot|shutdown'
```

示例输出:

```
shutdown S root pts/1 0.00 secs Sun Dec 27 23:45
```

### 参见

* 更多信息可以查看 man 手册(`man last`)和参考文章 [如何在 Linux 服务器上使用 tuptime 命令查看历史和统计的正常的运行时间][6]。

### 关于作者

作者是 nixCraft 的创立者,同时也是一名经验丰富的系统管理员,也是 Linux、类 Unix 操作系统 shell 脚本的培训师。他曾与全球各行各业的客户工作过,包括 IT、教育、国防和空间研究以及非营利部门等等。你可以在 [Twitter][7]、[Facebook][8]、[Google+][9] 关注他。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/linux-last-reboot-time-and-date-find-out.html

作者:[Vivek Gite][a]
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
在 Linux 上检测 IDE/SATA SSD 硬盘的传输速度
======

你知道你的硬盘在 Linux 下传输有多快吗?不打开电脑的机箱或者机柜,你知道它运行在 SATA I(150 MB/s)、SATA II(300 MB/s)还是 SATA III(6.0 Gb/s)吗?

你可以使用 `hdparm` 和 `dd` 命令来检测你的硬盘速度。`hdparm` 为各种硬盘 ioctl 提供了命令行接口,这是由 Linux 系统的 ATA/IDE/SATA 设备驱动程序子系统所支持的。有些选项只有在最新的内核上才能正常工作(请确保安装了最新的内核)。我也推荐使用最新内核源代码所包含的头文件来编译 `hdparm` 命令。

### 如何使用 hdparm 命令来检测硬盘的传输速度

以 root 管理员权限登录并执行命令:

```
$ sudo hdparm -tT /dev/sda
```

或者,

```
$ sudo hdparm -tT /dev/hda
```

输出:

```
/dev/sda:
 Timing cached reads: 7864 MB in 2.00 seconds = 3935.41 MB/sec
 Timing buffered disk reads: 204 MB in 3.00 seconds = 67.98 MB/sec
```

为了检测更精准,这个操作应该**重复 2-3 次**。其中缓存读取(cached reads)显示的是无需访问磁盘、直接从 Linux 缓冲区缓存中读取的速度,这个测量实际上反映的是被测系统的处理器、高速缓存和内存的吞吐量。这是一个 [for 循环的例子][1],连续运行测试 3 次:

```
for i in 1 2 3; do hdparm -tT /dev/hda; done
```

这里,

* `-t`:执行设备读取时序测试
* `-T`:执行缓存读取时序测试
* `/dev/sda`:硬盘设备文件

要[找出 SATA 硬盘的连接速度][2],请输入:

```
sudo hdparm -I /dev/sda | grep -i speed
```

输出:

```
 * Gen1 signaling speed (1.5Gb/s)
 * Gen2 signaling speed (3.0Gb/s)
 * Gen3 signaling speed (6.0Gb/s)
```

以上输出表明我的硬盘可以使用 1.5 Gb/s、3.0 Gb/s 或 6.0 Gb/s 的速度。请注意,你的 BIOS/主板必须支持 SATA-II/III 才行:

```
$ dmesg | grep -i sata | grep 'link up'
```

[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]
### dd 命令

你也可以使用 `dd` 命令获取相应的速度信息:

```
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
rm /tmp/output.img
```

输出:

```
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, 90.8 MB/s
```

下面是[推荐的 dd 命令参数][4]:

```
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync

## GNU dd syntax ##
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync

## OR alternate syntax for GNU/dd ##
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
```

这是上面的第三条命令的输出结果:

```
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
```
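在真正跑 1 GB 的写入测试之前,可以先用较小的数据量演练一遍命令用法(下面是一个示例片段,文件路径和大小均为演示用的假设值):

```shell
# 写入 16 MB 测试数据并同步到磁盘,dd 会在 stderr 上报告耗时和速度
dd if=/dev/zero of=/tmp/dd_demo.img bs=1M count=16 conv=fdatasync
# 确认测试文件大小后清理
ls -l /tmp/dd_demo.img
rm -f /tmp/dd_demo.img
```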
### “磁盘与存储” - GUI 工具

你还可以使用位于“系统 > 管理 > 磁盘实用程序”菜单中的磁盘实用程序。请注意,在最新版本的 Gnome 中,它简称为“磁盘”。

#### 如何使用 Linux 上的“磁盘”测试硬盘的性能?

要测试硬盘的速度:

1. 从“活动概览”中打开“磁盘”(按键盘上的 super 键并键入 “disks”)
2. 从左侧窗格的列表中选择磁盘
3. 选择菜单按钮并从菜单中选择“测试磁盘性能……”
4. 单击“开始性能测试……”并根据需要调整传输速率和访问时间参数
5. 选择“开始性能测试”来测试从磁盘读取数据的速度。该操作需要管理权限,请输入密码

以上操作的快速视频演示:

https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4

#### 只读 Benchmark(安全模式下)

然后,选择 > 只读:

![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]

上述选项不会破坏任何数据。

#### 读写的 Benchmark(所有数据将丢失,所以要小心)

访问“系统 > 管理 > 磁盘实用程序”菜单 > 单击“性能测试” > 单击“开始读/写性能测试”按钮:

![Fig.02: Linux Measuring read rate, write rate and access time][6]

### 关于作者

作者是 nixCraft 的创立者,是经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及 IT、教育、国防和空间研究以及非营利部门等多个行业合作。你可以在 [Twitter][7]、[Facebook][8] 和 [Google+][9] 上关注他。

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html

作者:[Vivek Gite][a]
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
[7]:https://twitter.com/nixcraft
[8]:https://facebook.com/nixcraft
[9]:https://plus.google.com/+CybercitiBiz
使用 Vi/Vim 编辑器:基础篇
=========

VI 编辑器是一个基于命令行的、功能强大的文本编辑器,最早为 Unix 系统开发,后来也被移植到许多的 Unix 和 Linux 发行版上。

在 Linux 上还存在着另一个 VI 编辑器的增强版本 —— VIM(也被称作 VI IMproved)。VIM 在 VI 已经很强的功能之上添加了更多的功能,例如:

- 支持更多 Linux 发行版;
- 支持多种编程语言,包括 python、c++、perl 等语言的代码块折叠、语法高亮;
- 支持通过 http、ssh 等多种网络协议编辑文件;
- 支持编辑压缩归档中的文件;
- 支持分屏同时编辑多个文件。

接下来我们会讨论 VI/VIM 的命令以及选项。出于教学的目的,本文使用 VI 来举例,但所有的命令都可以用于 VIM。首先我们先介绍 VI 编辑器的两种模式。

### 命令模式

命令模式下,我们可以执行保存文件、在 VI 内运行命令、复制/剪切/粘贴操作,以及查找/替换等任务。当我们处于插入模式时,可以按下 `Escape`(`Esc`)键返回命令模式。

### 插入模式

在插入模式下,我们可以键入文件内容。在命令模式下按下 `i` 进入插入模式。

### 创建文件

我们可以通过下述命令建立一个文件(LCTT 译注:如果该文件存在,则编辑已有文件):

```
$ vi filename
```

一旦该文件被创建或者打开,我们首先处于命令模式,需要进入插入模式才能在文件中输入内容。通过前文我们已经大致了解了这两种模式。

### 退出 Vi

如果想从插入模式中退出,我们首先需要按下 `Esc` 键进入命令模式。接下来可以根据不同的需要分别使用两种命令退出 Vi:

1. 不保存退出 - 在命令模式中输入 `:q!`
2. 保存并退出 - 在命令模式中输入 `:wq`

### 移动光标

下面我们来讨论在命令模式中移动光标的命令和选项:

1. `k` 将光标上移一行
2. `j` 将光标下移一行
3. `h` 将光标左移一个字母
4. `l` 将光标右移一个字母

注意:如果你想通过一个命令上移或下移多行,或者左移、右移多个字母,可以在命令前加上数字,例如 `4k` 会上移 4 行,`5l` 会右移 5 个字母。

1. `0` 将光标移动到该行行首
2. `$` 将光标移动到该行行尾
3. `nG` 将光标移动到第 n 行
4. `G` 将光标移动到文件的最后一行
5. `{` 将光标移动到上一段
6. `}` 将光标移动到下一段

除此之外还有一些命令可以用于控制光标的移动,但上述列出的这些命令应该就能应付日常工作所需。

### 编辑文本

这部分会列出一些用于命令模式的命令,它们会进入插入模式以编辑当前文件:

1. `i` 在当前光标位置之前插入内容
2. `I` 在光标所在行的行首插入内容
3. `a` 在当前光标位置之后插入内容
4. `A` 在光标所在行的行尾插入内容
5. `o` 在当前光标所在行之后添加一行
6. `O` 在当前光标所在行之前添加一行

### 删除文本

以下的这些命令都只能在命令模式下使用,所以如果你正处于插入模式,首先需要按下 `Esc` 进入命令模式:

1. `dd` 删除光标所在的整行内容,可以在 `dd` 前增加数字,比如 `2dd` 可以删除从光标所在行开始的两行
2. `d$` 删除从光标所在位置直到行尾
3. `d^` 删除从光标所在位置直到行首
4. `dw` 删除从光标所在位置直到下一个词开始的所有内容

### 复制与粘贴

1. `yy` 复制当前行,在 `yy` 前添加数字可以复制多行
2. `p` 在光标之后粘贴复制行
3. `P` 在光标之前粘贴复制行
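可以把上面的模式切换和基本命令串成一个小演练来帮助记忆(下面的按键序列只是示意,demo.txt 为假设的文件名):

```
vi demo.txt        打开(或创建)文件,进入命令模式
i                  进入插入模式
hello world        键入文本
Esc                返回命令模式
yy  p              复制当前行并粘贴到下一行
dd                 删除光标所在行
:wq                保存并退出
```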
上述就是可以在 VI/VIM 编辑器上使用的一些基本命令。在未来的教程中还会继续教授一些更高级的命令。如果有任何疑问和建议,请在下方评论区留言。

---------

via: http://linuxtechlab.com/working-vi-editor-basics/

作者:[Shusain][a]
译者:[ljgibbslf](https://github.com/ljgibbslf)
校对:[wxy](https://github.com/wxy)

本文由 LCTT 原创编译,Linux中国 荣誉推出

[a]: http://linuxtechlab.com/author/shsuain/
使用 Ansible 让你的系统管理自动化
======

> 精进你的系统管理能力和 Linux 技能,学习如何设置工具来简化多台机器的管理。

![配图](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_google_wave.png?itok=2oh8TpUi)

你是否想精进你的系统管理能力和 Linux 技能?也许你的本地局域网上跑了一些东西,而你又想让生活更轻松一点,那该怎么办呢?在本文中,我会向你演示如何设置工具来简化管理多台机器。

远程管理工具有很多,SaltStack、Puppet、Chef 以及 Ansible 都是很流行的选择。在本文中,我将重点放在 Ansible 上,并会解释它是如何帮到你的,不管你是有 5 台还是 1000 台虚拟机。

让我们从多机(不管这些机器是虚拟的还是物理的)的基本管理开始。我假设你知道要做什么,有基础的 Linux 管理技能(至少要有能找出执行每个任务具体步骤的能力)。我会向你演示如何使用这一工具,而是否使用它由你自己决定。

### 什么是 Ansible?

Ansible 的网站上将之解释为“一个超级简单的 IT 自动化引擎,可以自动进行云供给、配置管理、应用部署、服务内部编排,以及其他很多 IT 需求。”通过在一个集中的位置定义好服务器集合,Ansible 可以在多个服务器上执行相同的任务。

如果你对 Bash 的 `for` 循环很熟悉,你会发现 Ansible 的操作跟这很类似。区别在于 Ansible 是<ruby>幂等的<rt>idempotent</rt></ruby>。通俗来说就是 Ansible 一般只有在确实会发生改变时才执行所请求的动作。比如,假设你执行一个 Bash 的 `for` 循环来为多个机器创建用户,像这样子:

```
for server in serverA serverB serverC; do ssh ${server} "useradd myuser"; done
```

这会在 serverA、serverB 以及 serverC 上创建 myuser 用户;然而不管这个用户是否存在,每次运行这个 `for` 循环时都会执行 `useradd` 命令。一个幂等的系统会首先检查用户是否存在,只有在不存在的情况下才会去创建它。当然,这个例子很简单,但是幂等工具的好处将会随着时间的推移变得越发明显。
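幂等与非幂等的区别可以用一个极简的 shell 片段来体会。下面用“创建文件”代替“创建用户”做演示(文件路径为示例假设值):

```shell
flag=/tmp/idempotent_demo
rm -f "$flag"
for i in 1 2 3; do
  # 幂等写法:先检查状态,只有文件不存在时才创建并报告
  [ -e "$flag" ] || { touch "$flag"; echo "created"; }
done
# 循环跑了 3 次,但 "created" 只输出一次
rm -f "$flag"
```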
#### Ansible 是如何工作的?

Ansible 会将 Ansible 剧本(playbook)转换成通过 SSH 运行的命令,这在管理类 UNIX 环境时有很多优势:

1. 绝大多数类 UNIX 机器默认都开启了 SSH。
2. 依赖 SSH 意味着远程主机上不需要安装代理。
3. 大多数情况下都无需安装额外的软件。Ansible 需要 2.6 或更新版本的 Python,而绝大多数 Linux 发行版默认都安装了这一版本(或者更新版本)的 Python。
4. Ansible 无需主节点。它可以在任何安装有 Ansible 并能通过 SSH 访问目标主机的机器上运行。
5. 虽然可以在 cron 中运行 Ansible,但默认情况下,Ansible 只会在你明确要求的情况下运行。

#### 配置 SSH 密钥认证

使用 Ansible 的一种常用方法是配置无需密码的 SSH 密钥登录以方便管理。(可以使用 Ansible Vault 来为密码等敏感信息提供保护,但这不在本文的讨论范围之内。)现在只需要使用下面的命令来生成一个 SSH 密钥,如示例 1 所示。

```
[09:44 user ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Created directory '/home/user/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TpMyzf4qGqXmx3aqZijVv7vO9zGnVXsh6dPbXAZ+LUQ user@user-fedora
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
| E               |
| o . ..          |
| . + S o+.       |
| . .o * . .+ooo  |
| . .+o o o oo+.* |
| ..ooo* o.* .*+  |
| . o+*BO.o+ .o   |
+----[SHA256]-----+
```

*示例 1:生成一个 SSH 密钥*

在示例 1 中,直接按下回车键来接受默认值。任何非特权用户都能生成 SSH 密钥,也能将其安装到远程系统中任何用户的 SSH `authorized_keys` 文件中。生成密钥后,还需要将之拷贝到远程主机上去,运行下面的命令:

```
ssh-copy-id root@servera
```

注意:运行 Ansible 本身无需 root 权限;然而如果你使用非 root 用户,你_需要_为要执行的任务配置合适的 sudo 权限。

输入 servera 的 root 密码,这条命令会将你的 SSH 密钥安装到远程主机上去。安装好 SSH 密钥后,再通过 SSH 登录远程主机就不再需要输入 root 密码了。
### 安装 Ansible

只需要在示例 1 中生成 SSH 密钥的那台主机上安装 Ansible。若你使用的是 Fedora,输入下面的命令:

```
sudo dnf install ansible -y
```

若运行的是 CentOS,你需要先配置 EPEL 仓库:

```
sudo yum install epel-release -y
```

然后再使用 yum 来安装 Ansible:

```
sudo yum install ansible -y
```

对于基于 Ubuntu 的系统,可以从 PPA 上安装 Ansible:

```
sudo apt-get install software-properties-common -y
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible -y
```

若你使用的是 macOS,那么推荐通过 Python PIP 来安装:

```
sudo pip install ansible
```

对于其他发行版,请参见 [Ansible 安装文档][2]。
### Ansible Inventory

Ansible 使用一个 INI 风格的文件来追踪要管理的服务器,这种文件被称之为<ruby>库存清单<rt>inventory</rt></ruby>。默认情况下该文件位于 `/etc/ansible/hosts`。本文中,我使用示例 2 中所示的 Ansible 库存清单来对所需的主机进行操作(为了简洁起见已经进行了裁剪):

```
[arch]
nextcloud
prometheus
desktop1
desktop2
vm-host15

[fedora]
netflix

[centos]
conan
confluence
7-repo
vm-server1
gitlab

[ubuntu]
trusty-mirror
nwn
kids-tv
media-centre
nas

[satellite]
satellite

[ocp]
lb00
ocp_dns
master01
app01
infra01
```

*示例 2:Ansible 主机文件*

每个分组由中括号中的组名标识(像这样:`[group1]`),组名是应用于一组服务器的任意名称。一台服务器可以存在于多个组中,没有任何问题。在这个案例中,我既有根据操作系统进行的分组(`arch`、`ubuntu`、`centos`、`fedora`),也有根据服务器功能进行的分组(`ocp`、`satellite`)。Ansible 主机文件可以处理比这复杂得多的情况。详细内容,请参阅[库存清单文档][3]。
### 运行命令

将你的 SSH 密钥拷贝到库存清单中的所有服务器上后,你就可以开始使用 Ansible 了。Ansible 的一项基本功能就是运行特定命令。语法为:

```
ansible <主机组> -a "some command"
```

例如,假设你想升级所有的 CentOS 服务器,可以运行:

```
ansible centos -a 'yum update -y'
```

_注意:不是必须要根据服务器操作系统来进行分组的。我下面会提到,[Ansible Facts][4] 可以用来收集这一信息;然而,若使用 Facts 的话,则运行特定命令会变得很复杂,因此,如果你在管理异构环境的话,那么为了方便起见,我推荐创建一些根据操作系统来划分的组。_

这会遍历 `centos` 组中的所有服务器并安装所有的更新。一个更加有用的命令是 Ansible 的 `ping` 模块,可以用来验证服务器是否准备好接受命令了:

```
ansible all -m ping
```

这会让 Ansible 尝试通过 SSH 登录库存清单中的所有服务器。在示例 3 中可以看到 `ping` 命令的部分输出结果。

```
nwn | SUCCESS => {
  "changed": false,
  "ping": "pong"
}
media-centre | SUCCESS => {
  "changed": false,
  "ping": "pong"
}
nas | SUCCESS => {
  "changed": false,
  "ping": "pong"
}
kids-tv | SUCCESS => {
  "changed": false,
  "ping": "pong"
}
...
```

*示例 3:Ansible ping 命令输出*

运行指定命令的能力有助于完成快速任务(LCTT 译注:应该指的那种一次性任务),但是如果我想在以后也能以同样的方式运行同样的任务那该怎么办呢?Ansible [剧本][5]就是用来做这个的。
### 使用 Ansible 剧本完成复杂任务

Ansible <ruby>剧本<rt>playbook</rt></ruby>就是包含 Ansible 指令的 YAML 格式的文件。我这里不打算讲解类似 Roles 和 Templates 这些比较高深的内容,有兴趣的话,请阅读 [Ansible 文档][6]。

在前一章节,我推荐你使用 `ssh-copy-id` 命令来传递你的 SSH 密钥;然而,本文关注于如何以一种一致的、可重复的方式来完成任务。示例 4 演示了一种幂等的实现方式,即使 SSH 密钥已经存在于目标主机上,它也能保证正确性。

```
---
- hosts: all
  gather_facts: false
  vars:
    ssh_key: '/root/playbooks/files/laptop_ssh_key'
  tasks:
    - name: copy ssh key
      authorized_key:
        key: "{{ lookup('file', ssh_key) }}"
        user: root
```

*示例 4:Ansible 剧本 “push_ssh_keys.yaml”*

`- hosts:` 行标识了这个剧本应该在哪个主机组上执行。在这个例子中,它会检查库存清单里的所有主机。

`gather_facts:` 行指明 Ansible 是否去搜集每个主机的详细信息。我稍后会做一次更详细的介绍。现在为了节省时间,我们设置 `gather_facts` 为 `false`。

`vars:` 部分,顾名思义,就是用来定义剧本中所用变量的。在示例 4 的这个简短剧本中其实不是必要的,但是按惯例我们还是设置了一个变量。

最后由 `tasks:` 标注的这个部分,是存放主体指令的地方。每个任务都有一个 `- name:`,Ansible 在运行剧本时会显示这个名字。

`authorized_key:` 是剧本所使用的 Ansible 模块的名字。可以通过命令 `ansible-doc -a` 来查询 Ansible 模块的相关信息;不过通过网络浏览器查看[文档][7]可能更方便一些。[authorized_key 模块][8]有很多很好的例子可以参考。要运行示例 4 中的剧本,只要运行 `ansible-playbook` 命令就行了:

```
ansible-playbook push_ssh_keys.yaml
```

如果是第一次添加 SSH 密钥,SSH 会提示你输入 root 用户的密码。

现在 SSH 密钥已经传输到服务器中去了,可以来做点有趣的事了。
### 使用 Ansible 收集信息

Ansible 能够收集目标系统的各种信息(facts)。如果你的主机数量很多,那它会特别耗时。按我的经验,每台主机大概要花 1 到 2 秒钟,甚至更长时间;然而有时收集信息是有好处的。考虑下面这个剧本,它会禁止 root 用户通过密码远程登录系统:

```
---
- hosts: all
  gather_facts: true
  vars:
  tasks:
    - name: Enabling ssh-key only root access
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin without-password'
      notify:
        - restart_sshd
        - restart_ssh

  handlers:
    - name: restart_sshd
      service:
        name: sshd
        state: restarted
        enabled: true
      when: ansible_distribution == 'RedHat'
    - name: restart_ssh
      service:
        name: ssh
        state: restarted
        enabled: true
      when: ansible_distribution == 'Debian'
```

*示例 5:锁定 root 的 SSH 访问*

在示例 5 中,对 `sshd_config` 文件的修改是[有条件][9]的,只有在找到匹配的发行版的情况下才会执行。在这个案例中,基于 Red Hat 的发行版与基于 Debian 的发行版对 SSH 服务的命名是不一样的,这也是使用条件语句的目的所在。虽然也有其他的方法可以达到相同的效果,但这个例子很好地演示了 Ansible facts 的作用。若你想查看 Ansible 默认收集的所有信息,可以在本地运行 `setup` 模块:

```
ansible localhost -m setup | less
```

Ansible 收集的所有信息都能用来做判断,就跟示例 4 中 `vars:` 部分所演示的一样。所不同的是,Ansible facts 被看成是**内置**变量,无需由系统管理员定义。

### 更近一步

现在可以开始探索 Ansible 并创建自己的剧本了。Ansible 是一个富有深度、复杂性和灵活性的工具,只靠一篇文章不可能把它讲透。希望本文能够激发你的兴趣,鼓励你去探索 Ansible 的功能。在下一篇文章中,我会再聊聊 `copy`、`systemd`、`service`、`apt`、`yum`、`virt` 以及 `user` 模块。我们可以在剧本中组合使用这些模块,还可以创建一个简单的 Git 服务器来存储这些所有剧本。
--------------------------------------------------------------------------------

via: https://opensource.com/article/17/7/automate-sysadmin-ansible

作者:[Steve Ovens][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/stratusss
[1]:https://opensource.com/tags/ansible
[2]:http://docs.ansible.com/ansible/intro_installation.html
[3]:http://docs.ansible.com/ansible/intro_inventory.html
[4]:http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts
[5]:http://docs.ansible.com/ansible/playbooks.html
[6]:http://docs.ansible.com/ansible/playbooks_roles.html
[7]:http://docs.ansible.com/ansible/modules_by_category.html
[8]:http://docs.ansible.com/ansible/authorized_key_module.html
[9]:http://docs.ansible.com/ansible/lineinfile_module.html
使用 TLS 加密保护 VNC 服务器的简单指南
======

在本教程中,我们将学习安装 VNC 服务器并使用 TLS 加密保护 VNC 会话。

此方法已经在 CentOS 6&7 上测试过了,但是也可以在其它的版本/操作系统上运行(RHEL、Scientific Linux 等)。

**(推荐阅读:[保护 SSH 会话终极指南][1])**

### 安装 VNC 服务器

在机器上安装 VNC 服务器之前,请确保我们有一个可用的 GUI(图形用户界面)。如果机器上还没有安装 GUI,我们可以通过执行以下命令来安装:

```
yum groupinstall "GNOME Desktop"
```

现在我们需要编辑 VNC 配置文件:

```
# vim /etc/sysconfig/vncservers
```

并添加下面这几行:

```
VNCSERVERARGS[1]="-geometry 1024x768"
```

#### CentOS 7

在 CentOS 7 上,`/etc/sysconfig/vncservers` 已经改为 `/lib/systemd/system/vncserver@.service`。我们将使用这个配置文件作为参考,所以先创建一个该文件的副本:

```
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
```
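编辑这份副本时,通常是把模板中的占位用户替换为实际的 VNC 用户。下面是一个示意片段(以假设的用户 vncuser 为例,具体内容以你系统上的模板文件为准):

```
[Service]
Type=forking
# 将模板中的 <USER> 替换为实际用户,例如 vncuser:
ExecStart=/usr/sbin/runuser -l vncuser -c "/usr/bin/vncserver %i"
PIDFile=/home/vncuser/.vnc/%H%i.pid
```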
保存文件并退出。接下来重启服务并在启动时启用它:

```
# systemctl restart vncserver@:1.service
# systemctl enable vncserver@:1.service
```

现在我们已经设置好了 VNC 服务器,并且可以使用 VNC 服务器的 IP 地址从客户机连接到它。但是,在此之前,我们将使用 TLS 加密保护我们的连接。

现在,我们可以使用客户机上的 VNC 查看器访问服务器,使用以下命令以安全连接启动 VNC 查看器:

```
# vncviewer -SecurityTypes=VeNCrypt,TLSVnc 192.168.1.45:1
```

这里,192.168.1.45 是 VNC 服务器的 IP 地址。

这篇教程到此为止,欢迎随时使用下面的评论栏提交你的建议或疑问。

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/secure-vnc-server-tls-encryption/

作者:[Shusain][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Setup-Japanese-Language-Environment-In-Arch-Linux-720x340.jpg)

在本教程中,我们将讨论如何在 Arch Linux 中设置日语环境。在其他类 Unix 操作系统中,设置日文布局并不是什么大不了的事情,你可以从设置中轻松选择日文键盘布局。然而,在 Arch Linux 下有点困难,ArchWiki 中没有合适的文档。如果你正在使用 Arch Linux 和/或其衍生产品如 Antergos、Manjaro Linux,请遵循本指南以在 Arch Linux 及其衍生系统中使用日语。

### 在 Arch Linux 中设置日语环境

首先,为了正确查看日语字符,先安装必要的日语字体:

```
sudo pacman -S adobe-source-han-sans-jp-fonts otf-ipafont
```

```
pacaur -S ttf-monapo
```

如果你尚未安装 `pacaur`,请参阅[此链接][1]。

确保你在 `/etc/locale.gen` 中注释掉了(添加 `#` 注释)下面的行。

```
#ja_JP.UTF-8
```

然后,安装 iBus 和 ibus-anthy。对于那些想知道原因的:iBus 是类 Unix 系统的输入法(IM)框架,而 ibus-anthy 是 iBus 的日语输入法。

```
sudo pacman -S ibus ibus-anthy
```

在 `~/.xprofile` 中添加以下几行(如果不存在,创建一个):

```
# Settings for Japanese input
export GTK_IM_MODULE='ibus'
export XMODIFIERS=@im='ibus'
ibus-daemon -drx
```

`~/.xprofile` 允许我们在 X 用户会话开始时且在窗口管理器启动之前执行命令。

保存并关闭文件。重启 Arch Linux 系统以使更改生效。

登录到系统后,右键单击任务栏中的 iBus 图标,然后选择 “Preferences”。如果不存在,请从终端运行以下命令来启动 iBus 并打开偏好设置窗口。

```
ibus-setup
```

选择 “Yes” 来启动 iBus。你会看到一个像下面的页面。点击 “OK” 关闭它。

[![][2]][3]

现在,你将看到 iBus 偏好设置窗口。进入 “Input Method” 选项卡,然后单击 “Add” 按钮。

[![][2]][4]

[![][2]][5]

然后,选择 “Anthy” 并点击添加:

[![][2]][6]

就是这样了。你现在将在输入法栏看到 “Japanese - Anthy”:

[![][2]][7]

根据你的需求在偏好设置中更改日语输入法的选项(点击 “Japanese - Anthy” -> “Preferences”)。

[![][2]][8]

你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,点击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下 `SUPER + 空格` 键(LCTT 译注:SUPER 键通常为 Command/Windows 键)来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。

现在你知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持我们。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/

作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[Locez](https://github.com/locez)

[a]:https://www.ostechnix.com
[1]:https://www.ostechnix.com/install-pacaur-arch-linux/
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus.png
[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences.png
[5]:http://www.ostechnix.com/wp-content/uploads/2017/11/Choose-Japanese.png
[6]:http://www.ostechnix.com/wp-content/uploads/2017/11/Japanese-Anthy.png
[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences-1.png
[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus-anthy.png
为 Linux 初学者讲解 wc 命令
======

在命令行工作时,有时您可能想要知道一个文件中的单词数量、字节数,甚至换行数量。如果您正在寻找这样的工具,您会很高兴地知道,在 Linux 中存在一个命令行实用程序,它被称为 `wc`,可以为您完成所有这些工作。在本文中,我们将通过简单易懂的例子来讨论这个工具。

但是在我们开始之前,值得一提的是,本教程中提供的所有示例都在 Ubuntu 16.04 上进行了测试。

### Linux wc 命令

`wc` 命令打印每个输入文件的换行数、单词数和字节数。以下是该命令行工具的语法:

```
wc [OPTION]... [FILE]...
```

以下是 `wc` 的 man 文档的解释:

```
为每个文件打印换行数、单词数和字节数,如果指定多于一个文件,还会打印总计。单词是由空白分隔的非零长度的字符序列。如果没有指定文件,或当文件为 “-” 时,则读取标准输入。
```

下面的 Q&A 样式的示例将会让您更好地了解 `wc` 命令的基本用法。

注意:在所有示例中我们将使用一个名为 `file.txt` 的文件作为输入文件。以下是该文件包含的内容:

```
hi
hello
how are you
thanks.
```
### Q1. 如何打印字节数

使用 `-c` 命令行选项打印字节数:

```
wc -c file.txt
```

下面是这个命令在我们的系统上产生的输出:

[![如何打印字节数][1]][2]

文件包含 29 个字节。
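如果不想先创建文件,也可以让 `wc` 从标准输入读取,通过管道验证同样的结果(下面的片段用 `printf` 重现了前文 file.txt 的内容):

```shell
# 通过标准输入统计字节数;内容与前文的 file.txt 相同
printf 'hi\nhello\nhow are you\nthanks.\n' | wc -c
# 输出:29
```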
### Q2. 如何打印字符数

要打印字符数,请使用 `-m` 命令行选项:

```
wc -m file.txt
```

下面是这个命令在我们的系统上产生的输出:

[![如何打印字符数][3]][4]

文件包含 29 个字符。
### Q3. 如何打印换行数

使用 `-l` 命令行选项来打印文件中的换行数:

```
wc -l file.txt
```

这里是我们的例子的输出:

[![如何打印换行数][5]][6]

### Q4. 如何打印单词数

要打印文件中的单词数量,请使用 `-w` 命令行选项:

```
wc -w file.txt
```

在我们的例子中命令的输出如下:

[![如何打印字数][7]][8]

这显示文件中有 6 个单词。
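同样可以用管道快速验证单词数(内容与前文的 file.txt 相同):

```shell
# 6 个单词:hi、hello、how、are、you、thanks.
printf 'hi\nhello\nhow are you\nthanks.\n' | wc -w
# 输出:6
```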
### Q5. 如何打印最长行的显示宽度或长度
|
||||
|
||||
如果您想要打印输入文件中最长行的长度,请使用 `-l` 命令行选项。
|
||||
|
||||
```
|
||||
wc -L file.txt
|
||||
```
|
||||
|
||||
下面是在我们的案例中命令产生的结果:
|
||||
|
||||
[![如何打印最长行的显示宽度或长度][9]][10]
|
||||
|
||||
所以文件中最长的行长度是 11。
|
||||
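上面 Q1 到 Q5 的各个计数可以在一次会话中一并验证。下面是一个基于本文示例文件内容的小练习(在临时目录中重建文件,结果与文中给出的数字一致):

```
# 在临时目录中重建本文使用的示例文件
cd "$(mktemp -d)"
printf 'hi\nhello\nhow are you\nthanks.\n' > file.txt

wc -c file.txt   # 字节数:29
wc -m file.txt   # 字符数:29(纯 ASCII 文件与字节数相同)
wc -l file.txt   # 换行数:4
wc -w file.txt   # 单词数:6
wc -L file.txt   # 最长行宽度:11("how are you")
```

也可以不带任何选项直接运行 `wc file.txt`,它会依次输出行数、单词数和字节数。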
|
||||
### Q6. 如何从文件读取输入文件名
|
||||
|
||||
如果您有多个文件名,并且您希望 `wc` 从一个文件中读取它们,那么使用 `--files0-from` 选项。
|
||||
|
||||
```
|
||||
wc --files0-from=names.txt
|
||||
```
|
||||
|
||||
[![如何从文件读取输入文件名][11]][12]
|
||||
|
||||
如你所见,在这个例子中,`wc` 命令输出了文件 `file.txt` 的行、单词和字符计数,而文件名 `file.txt` 是在 `names.txt` 文件中提供的。值得一提的是,要成功地使用这个选项,文件中的文件名应该以 NUL 字符结尾——您可以通过键入 `Ctrl + v` 然后按 `Ctrl + Shift + @` 来生成这个字符。
|
||||
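手动输入 NUL 字符比较麻烦,在脚本里用 `printf` 的 `\0` 生成文件名列表会更方便。下面是一个可以直接复制运行的小例子(文件名仅为示例):

```
cd "$(mktemp -d)"
printf 'hi\nhello\nhow are you\nthanks.\n' > file.txt

# 用 printf 的 \0 生成以 NUL 结尾的文件名列表,避免手动键入控制字符
printf 'file.txt\0' > names.txt

wc --files0-from=names.txt
# 输出类似:4  6 29 file.txt
```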
|
||||
### 结论
|
||||
|
||||
您应该也会同意,无论从理解还是使用的角度来看,`wc` 都是一个简单的命令。我们已经介绍了几乎所有的命令行选项,所以一旦练习了我们这里介绍的内容,您就可以随时在日常工作中使用该工具了。想了解更多关于 `wc` 的信息,请参考它的 [man 文档][13]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-wc-command-explained-for-beginners-6-examples/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[stevenzdg988](https://github.com/stevenzdg988)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-c-option.png
|
||||
[2]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-c-option.png
|
||||
[3]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-m-option.png
|
||||
[4]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-m-option.png
|
||||
[5]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-l-option.png
|
||||
[6]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-l-option.png
|
||||
[7]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-w-option.png
|
||||
[8]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-w-option.png
|
||||
[9]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-L-option.png
|
||||
[10]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-L-option.png
|
||||
[11]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-file0-from-option.png
|
||||
[12]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-file0-from-option.png
|
||||
[13]:https://linux.die.net/man/1/wc
|
@ -1,40 +1,41 @@
|
||||
cURL VS wget:根据两者的差异和使用习惯,你应该选用哪一个?
|
||||
cURL 与 wget:你应该选用哪一个?
|
||||
======
|
||||
|
||||
![](https://www.maketecheasier.com/assets/uploads/2017/12/wgc-feat.jpg)
|
||||
|
||||
当想要直接通过 Linux 命令行下载文件,马上就能想到两个工具:‘wget’和‘cURL’。它们有很多共享的特征,可以很轻易的完成一些相同的任务。
|
||||
当想要直接通过 Linux 命令行下载文件,马上就能想到两个工具:wget 和 cURL。它们有很多一样的特征,可以很轻易的完成一些相同的任务。
|
||||
|
||||
虽然它们有一些相似的特征,但它们并不是完全一样。这两个程序适用与不同的场合,在特定场合下,都拥有各自的特性。
|
||||
|
||||
### cURL vs wget: 相似之处
|
||||
### cURL vs wget: 相似之处
|
||||
|
||||
wget 和 cURL 都可以下载内容。它们的内核就是这么设计的。它们都可以向互联网发送请求并返回请求项。这可以是文件、图片或者是其他诸如网站的原始 HTML 之类。
|
||||
wget 和 cURL 都可以下载内容。它们的核心就是这么设计的。它们都可以向互联网发送请求并返回请求项。这可以是文件、图片或者是其他诸如网站的原始 HTML 之类。
|
||||
|
||||
这两个程序都可以进行 HTTP POST 请求。这意味着它们都可以向网站发送数据,比如说填充表单什么的。
|
||||
|
||||
由于这两者都是命令行工具,它们都被设计成脚本程序。wget 和 cURL 都可以写进你的 [Bash 脚本][1] ,自动与新内容交互,下载所需内容。
|
||||
由于这两者都是命令行工具,它们都被设计成可脚本化。wget 和 cURL 都可以写进你的 [Bash 脚本][1] ,自动与新内容交互,下载所需内容。
|
||||
|
||||
### wget 的优势
|
||||
|
||||
![wget download][2]
|
||||
|
||||
wget 简单直接。这意味着你能享受它超凡的下载速度。wget 是一个独立的程序,无需额外的资源库,更不会做出格的事情。
|
||||
wget 简单直接。这意味着你能享受它超凡的下载速度。wget 是一个独立的程序,无需额外的资源库,更不会做其范畴之外的事情。
|
||||
|
||||
wget 是专业的直接下载程序,支持递归下载。同时,它也允许你在网页或是 FTP 目录下载任何事物。
|
||||
wget 是专业的直接下载程序,支持递归下载。同时,它也允许你下载网页中或是 FTP 目录中的任何内容。
|
||||
|
||||
wget 拥有智能的默认项。他规定了很多在常规浏览器里的事物处理方式,比如 cookies 和重定向,这都不需要额外的配置。可以说,wget 简直就是无需说明,开罐即食!
|
||||
wget 拥有智能的默认设置。它规定了很多在常规浏览器里的事物处理方式,比如 cookies 和重定向,这都不需要额外的配置。可以说,wget 简直就是无需说明,开罐即食!
|
||||
|
||||
### cURL 优势
|
||||
|
||||
![cURL Download][3]
|
||||
|
||||
cURL是一个多功能工具。当然,他可以下载网络内容,但同时它也能做更多别的事情。
|
||||
cURL 是一个多功能工具。当然,它可以下载网络内容,但它也能做更多别的事情。
|
||||
|
||||
cURL 技术支持库是:libcurl。这就意味着你可以基于 cURL 编写整个程序,允许你在 libcurl 库中基于图形环境下载程序,访问它所有的功能。
|
||||
cURL 的技术支持库是 libcurl。这就意味着你可以基于 cURL 编写整个程序,也可以基于 libcurl 库编写图形环境的下载程序,访问它的所有功能。
|
||||
|
||||
cURL 宽泛的网络协议支持可能是其最大的卖点。cURL 支持访问 HTTP 和 HTTPS 协议,能够处理 FTP 传送。它支持 LDAP 协议,甚至支持 Samba 分享。实际上,你还可以用 cURL 收发邮件。
|
||||
cURL 宽泛的网络协议支持可能是其最大的卖点。cURL 支持访问 HTTP 和 HTTPS 协议,能够处理 FTP 传输。它支持 LDAP 协议,甚至支持 Samba 分享。实际上,你还可以用 cURL 收发邮件。
|
||||
|
||||
cURL 也有一些简洁的安全特性。cURL 支持安装许多 SSL/TLS 库,也支持通过网络代理访问,包括 SOCKS。这意味着,你可以越过 Tor. 使用cURL。
|
||||
cURL 也有一些不错的安全特性。cURL 支持许多 SSL/TLS 库,也支持通过网络代理访问,包括 SOCKS。这意味着,你可以通过 Tor 来使用 cURL。
|
||||
|
||||
cURL 同样支持让数据发送变得更容易的 gzip 压缩技术。
|
||||
|
||||
@ -42,15 +43,15 @@ cURL 同样支持让数据发送变得更容易的 gzip 压缩技术。
|
||||
|
||||
那你应该使用 cURL 还是使用 wget?这个比较得看实际用途。如果你想快速下载并且不需要担心参数设置,那你应该使用轻便有效的 wget。如果你想做一些更复杂的事情,直觉告诉你,你应该选择 cURL。
|
||||
|
||||
cURL 支持你做很多事情。你可以把 cURL想象成一个精简的命令行网页浏览器。它支持几乎你能想到的所有协议,可以交互访问几乎所有在线内容。唯一和浏览器不同的是,cURL 不能显示接收到的相应信息。
|
||||
cURL 支持你做很多事情。你可以把 cURL 想象成一个精简的命令行网页浏览器。它支持几乎你能想到的所有协议,可以交互访问几乎所有在线内容。唯一和浏览器不同的是,cURL 不会渲染接收到的响应信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/curl-vs-wget/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[译者ID](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
100
published/20180106 Meltdown and Spectre Linux Kernel Status.md
Normal file
100
published/20180106 Meltdown and Spectre Linux Kernel Status.md
Normal file
@ -0,0 +1,100 @@
|
||||
Gerg:meltdown 和 spectre 影响下的 Linux 内核状况
|
||||
============================================================
|
||||
|
||||
现在(LCTT 译注:本文发表于 1 月初),每个人都知道一件关乎电脑安全的“大事”发生了,真见鬼,等[每日邮报报道][1]的时候,你就知道什么是糟糕了...
|
||||
|
||||
不管怎样,除了告诉你这篇写得极其出色的[披露该问题的 Zero 项目的论文][2]之外,我不打算去跟进这个问题已经被报道出来的细节。他们应该现在就直接颁布 2018 年的 [Pwnie][3] 奖,干得太棒了。
|
||||
|
||||
如果你想了解我们如何在内核中解决这些问题的技术细节,你可以保持关注了不起的 [lwn.net][4],他们会把这些细节写成文章。
|
||||
|
||||
此外,这里有一份关于[这些公告][5]的很好的摘要,包括了各个厂商的公告。
|
||||
|
||||
至于这些涉及的公司是如何处理这些问题的,这可以说是如何**不**与 Linux 内核社区保持沟通的教科书般的例子。这件事涉及到的人和公司都知道发生了什么,我确定这件事最终会出现,但是目前我需要去关注的是如何修复这些涉及到的问题,然后不去点名指责,不管我有多么的想去这么做。
|
||||
|
||||
### 你现在能做什么
|
||||
|
||||
如果你的 Linux 系统正在运行一个正常的 Linux 发行版,那么升级你的内核。它们都应该已经更新了,然后在接下来的几个星期里保持更新。我们会追踪大量在极端情况下出现的 bug,这里涉及的测试很复杂,包括庞大的受影响的各种各样的系统和工作任务。如果你的 Linux 发行版没有升级内核,我强烈建议你马上更换一个 Linux 发行版。
|
||||
|
||||
然而有很多的系统因为各种各样的原因(听说它们比起“传统”的企业发行版更多)不是在运行“正常的” Linux 发行版上。它们依靠长期支持版本(LTS)的内核升级,或者是正常的稳定内核升级,或者是内部某人构建的内核。对于这部分人,这篇文章介绍了你能使用的上游内核分支目前的混乱状况。
|
||||
|
||||
### Meltdown – x86
|
||||
|
||||
现在,Linus 的内核树包含了我们当前所知的为 x86 架构解决 meltdown 漏洞的所有修复。开启 `CONFIG_PAGE_TABLE_ISOLATION` 这个内核构建选项,然后进行重构和重启,所有的设备应该就安全了。
|
||||
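如果想确认手头的内核是否打开了这个选项,可以粗略地检查一下内核配置。下面是一个小示意(假设发行版把配置保存在 `/boot/config-<版本>` 中;有些系统把配置放在 `/proc/config.gz`,这里不做处理):

```
# 粗略检查:在当前内核的配置文件中查找页表隔离选项
cfg="/boot/config-$(uname -r)"
if grep -qs 'CONFIG_PAGE_TABLE_ISOLATION=y' "$cfg"; then
    echo "已启用 CONFIG_PAGE_TABLE_ISOLATION"
else
    echo "未找到该选项(配置文件缺失或未启用)"
fi
```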
|
||||
然而,Linus 的内核树当前处于 4.15-rc6 这个版本加上一些未完成的补丁。4.15-rc7 版本要明天才会推出,里面的一些补丁会解决一些问题。但是大部分的人不会在一个“正常”的环境里运行 -rc 内核。
|
||||
|
||||
因为这个原因,x86 内核开发者在<ruby>页表隔离<rt>page table isolation</rt></ruby>代码的开发过程中做了一个非常好的工作,好到要反向移植到最新推出的稳定内核 4.14 的话,我们只需要做一些微不足道的工作。这意味着最新的 4.14 版本(本文发表时是 4.14.12 版本),就是你应该运行的版本。4.14.13 会在接下来的几天里推出,这个更新里有一些额外的修复补丁,这些补丁是一些运行 4.14.12 内核且有启动时间问题的系统所需要的(这是一个显而易见的问题,如果它不启动,就把这些补丁加入更新排队中)。
|
||||
|
||||
我个人要感谢 Andy Lutomirski、Thomas Gleixner、Ingo Molnar、 Borislav Petkov、 Dave Hansen、 Peter Zijlstra、 Josh Poimboeuf、 Juergen Gross 和 Linus Torvalds。他们开发出了这些修复补丁,并且为了让我能轻松地使稳定版本能够正常工作,还把这些补丁以一种形式融合到了上游分支里。没有这些工作,我甚至不敢想会发生什么。
|
||||
|
||||
对于老的长期支持内核(LTS),我主要依靠 Hugh Dickins、 Dave Hansen、 Jiri Kosina 和 Borislav Petkov 优秀的工作,来为 4.4 到 4.9 的稳定内核代码树分支带去相同的功能。我同样在追踪讨厌的 bug 和缺失的补丁方面从 Guenter Roeck、 Kees Cook、 Jamie Iles 以及其他很多人那里得到了极大的帮助。我要感谢 David Woodhouse、 Eduardo Valentin、 Laura Abbott 和 Rik van Riel 在反向移植和集成方面的帮助,他们的帮助在许多棘手的地方是必不可少的。
|
||||
|
||||
这些长期支持版本的内核同样有 `CONFIG_PAGE_TABLE_ISOLATION` 这个内核构建选项,你应该开启它来获得全方面的保护。
|
||||
|
||||
从主线版本 4.14 和 4.15 的反向移植是非常不一样的,它们会出现不同的 bug,我们现在知道了一些在工作中遇见的 VDSO 问题。一些特殊的虚拟机安装的时候会报一些奇怪的错,但这是只是现在出现的少数情况,这种情况不应该阻止你进行升级。如果你在这些版本中遇到了问题,请让我们在稳定内核邮件列表中知道这件事。
|
||||
|
||||
如果你依赖于 4.4 和 4.9 或是现在的 4.14 以外的内核代码树分支,并且没有发行版支持你的话,你就太不幸了。比起你当前版本内核包含的上百个已知的漏洞和 bug,缺少补丁去解决 meltdown 问题算是一个小问题了。你现在最需要考虑的就是马上把你的系统升级到最新。
|
||||
|
||||
与此同时,去臭骂那些强迫你运行已被废弃且不安全的内核版本的人吧,他们需要知道这种行为是完全不顾后果的。
|
||||
|
||||
### Meltdown – ARM64
|
||||
|
||||
现在 ARM64 为解决 Meltdown 问题而开发的补丁还没有并入 Linus 的代码树,一旦 4.15 在接下来的几周里成功发布,它们就准备[阶段式地并入][6] 4.16-rc1。因为这些补丁还没有在一个 Linus 发布的内核中,我不能把它们反向移植进一个稳定的内核版本里(额……我们有这个[规矩][7]是有原因的)。
|
||||
|
||||
由于它们还没有在一个已发布的内核版本中,如果你的系统用的是 ARM64 的芯片(例如 Android),我建议你选择 [Android 公共内核代码树][8],现在,所有的 ARM64 补丁都并入了 [3.18][9]、[4.4][10] 和 [4.9][11] 分支中。
|
||||
|
||||
我强烈建议你关注这些分支,看随着时间的过去,由于测试了已并入补丁的已发布的上游内核版本,会不会有更多的修复补丁被补充进来,特别是我不知道这些补丁会在什么时候加进稳定的长期支持内核版本里。
|
||||
|
||||
对于 4.4 到 4.9 的长期支持内核版本,这些补丁有很大概率永远不会并入它们,因为需要大量的先决补丁。而所有的这些先决补丁长期以来都一直在 Android 公共内核版本中测试和合并,所以我认为现在对于 ARM 系统来说,仅仅依赖这些内核分支而不是长期支持版本是一个更好的主意。
|
||||
|
||||
同样需要注意的是,我合并所有的长期支持内核版本的更新到这些分支后通常会在一天之内或者这个时间点左右进行发布,所以你无论如何都要关注这些分支,来确保你的 ARM 系统是最新且安全的。
|
||||
|
||||
### Spectre
|
||||
|
||||
现在,事情变得“有趣”了……
|
||||
|
||||
再一次,如果你正在运行一个发行版的内核,一些内核融入了各种各样的声称能缓解目前大部分问题的补丁,你的内核*可能*就被包含在其中。如果你担心这一类的攻击的话,我建议你更新并测试看看。
|
||||
|
||||
对于上游来说,很好,现状就是仍然没有任何的上游代码树分支合并了这些类型的问题相关的修复补丁。有很多的邮件列表在讨论如何去解决这些问题的解决方案,大量的补丁在这些邮件列表中广为流传,但是它们尚处于开发前期,一些补丁系列甚至没有被构建或者应用到任何已知的代码树,这些补丁系列彼此之间相互冲突,这是常见的混乱。
|
||||
|
||||
这是因为 Spectre 问题是内核开发者最后才着手解决的。我们所有人都在 Meltdown 问题上工作,我们还没有 Spectre 问题全部的真实信息,而四处散乱的补丁甚至比公开发布的补丁还要糟糕。
|
||||
|
||||
因为所有的这些原因,我们打算在内核社区里花上几个星期去解决这些问题并把它们合并到上游去。修复补丁会进入到所有内核的各种各样的子系统中,而且在它们被合并后,会集成并在稳定内核的更新中发布,所以再次提醒,无论你使用的是发行版的内核还是长期支持的稳定内核版本,你最好并保持更新到最新版。
|
||||
|
||||
这不是好消息,我知道,但是这就是现实。如果有所安慰的话,似乎没有任何其它的操作系统完全地解决了这些问题,现在整个产业都在同一条船上,我们只需要等待,并让开发者尽快地解决这些问题。
|
||||
|
||||
提出的解决方案并非毫不重要,但是它们中的一些还是非常好的。一些新概念会被创造出来来帮助解决这些问题,Paul Turner 提出的 Retpoline 方法就是其中的一个例子。这将是未来大量研究的一个领域,想出方法去减轻硬件中涉及的潜在问题,希望在它发生前就去预见它。
|
||||
|
||||
### 其他架构的芯片
|
||||
|
||||
现在,我没有看见任何 x86 和 arm64 架构以外的芯片架构的补丁,听说在一些企业发行版中有一些用于其他类型的处理器的补丁,希望他们在这几周里能浮出水面,合并到合适的上游那里。我不知道什么时候会发生,如果你使用着一个特殊的架构,我建议在 arch-specific 邮件列表上问这件事来得到一个直接的回答。
|
||||
|
||||
### 结论
|
||||
|
||||
再次说一遍,更新你的内核,不要耽搁,不要止步。更新会在很长的一段时间里持续地解决这些问题。同样的,稳定和长期支持内核发行版里仍然有很多其它的 bug 和安全问题,它们和问题的类型无关,所以一直保持更新始终是一个好主意。
|
||||
|
||||
现在,有很多非常劳累、坏脾气、缺少睡眠的人,他们通常会生气,因为人们让内核开发人员竭尽全力地解决这些问题,即使这些问题完全不是开发人员自己造成的。请关爱这些可怜的程序猿。他们需要爱、支持,以及我们免费提供的他们最爱的饮料,以此来确保我们都可以尽可能快地修补好系统。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://kroah.com/log/blog/2018/01/06/meltdown-status/
|
||||
|
||||
作者:[Greg Kroah-Hartman][a]
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://kroah.com
|
||||
[1]:http://www.dailymail.co.uk/sciencetech/article-5238789/Intel-says-security-updates-fix-Meltdown-Spectre.html
|
||||
[2]:https://googleprojectzero.blogspot.fr/2018/01/reading-privileged-memory-with-side.html
|
||||
[3]:https://pwnies.com/
|
||||
[4]:https://lwn.net/Articles/743265/
|
||||
[5]:https://lwn.net/Articles/742999/
|
||||
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=kpti
|
||||
[7]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
|
||||
[8]:https://android.googlesource.com/kernel/common/
|
||||
[9]:https://android.googlesource.com/kernel/common/+/android-3.18
|
||||
[10]:https://android.googlesource.com/kernel/common/+/android-4.4
|
||||
[11]:https://android.googlesource.com/kernel/common/+/android-4.9
|
||||
[12]:https://support.google.com/faqs/answer/7625886
|
@ -3,25 +3,25 @@ Linux 终端下的多媒体应用
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/multimedia.jpg?itok=v-XrnKRB)
|
||||
|
||||
Linux 终端是支持多媒体的,所以你可以在终端里听音乐,看电影,看图片,甚至是阅读 PDF。
|
||||
> Linux 终端是支持多媒体的,所以你可以在终端里听音乐,看电影,看图片,甚至是阅读 PDF。
|
||||
|
||||
在我的上一篇文章里,我们了解到 Linux 终端是可以支持多媒体的。是的,这是真的!你可以使用 Mplayer、fbi 和 fbgs 来实现不打开 X 进程就听音乐、看电影、看照片,甚至阅读 PDF。此外,你还可以通过 CMatrix 来体验黑客帝国(Matrix)风格的屏幕保护。
|
||||
在我的上一篇文章里,我们了解到 Linux 终端是可以支持多媒体的。是的,这是真的!你可以使用 Mplayer、fbi 和 fbgs 来实现不打开 X 会话就听音乐、看电影、看照片,甚至阅读 PDF。此外,你还可以通过 CMatrix 来体验黑客帝国(Matrix)风格的屏幕保护。
|
||||
|
||||
不过你可能需要对系统进行一些修改才能达到前面这些目的。下文的操作都是在 Ubuntu 16.04 上进行的。
|
||||
|
||||
### MPlayer
|
||||
|
||||
你可能会比较熟悉功能丰富的 MPlayer。它支持几乎所有格式的视频与音频,并且能在绝大部分现有的平台上运行,像 Linux,Android,Windows,Mac,Kindle,OS/2 甚至是 AmigaOS。不过,要在你的终端运行 MPlayer 可能需要多做一点工作,这些工作与你使用的 Linux 发行版有关。来,我们先试着播放一个视频:
|
||||
你可能会比较熟悉功能丰富的 MPlayer。它支持几乎所有格式的视频与音频,并且能在绝大部分现有的平台上运行,像 Linux、Android、Windows、Mac、Kindle、OS/2 甚至是 AmigaOS。不过,要在你的终端运行 MPlayer 可能需要多做一点工作,这些工作与你使用的 Linux 发行版有关。来,我们先试着播放一个视频:
|
||||
|
||||
```
|
||||
$ mplayer [视频文件名]
|
||||
```
|
||||
|
||||
如果上面的命令正常执行了,那么很好,接下来你可以把时间放在了解 MPlayer 的常用选项上了,譬如设定视频大小等。但是,有些 Linux 发行版在对帧缓冲(framebuffer)的处理方式上与早期的不同,那么你就需要进行一些额外的设置才能让其正常工作了。下面是在最近的 Ubuntu 发行版上需要做的一些操作。
|
||||
如果上面的命令正常执行了,那么很好,接下来你可以把时间放在了解 MPlayer 的常用选项上了,譬如设定视频大小等。但是,有些 Linux 发行版在对<ruby>帧缓冲<rt>framebuffer</rt></ruby>的处理方式上与早期的不同,那么你就需要进行一些额外的设置才能让其正常工作了。下面是在最近的 Ubuntu 发行版上需要做的一些操作。
|
||||
|
||||
首先,将你自己添加到 video 用户组。
|
||||
首先,将你自己添加到 `video` 用户组。
|
||||
|
||||
其次,确认 `/etc/modprobe.d/blacklist-framebuffer.conf` 文件中包含这样一行:`#blacklist vesafb`。这一行应该默认被注释掉了,如果不是的话,那就手动把它注释掉。此外的其他模块行需要确认没有被注释,这样设置才能保证其他那些模块不会被载入。注:如果你想要对控制帧缓冲(framebuffer)有更深入的了解,可以从针对你的显卡的这些模块里获取更深入的认识。
|
||||
其次,确认 `/etc/modprobe.d/blacklist-framebuffer.conf` 文件中包含这样一行:`#blacklist vesafb`。这一行应该默认被注释掉了,如果不是的话,那就手动把它注释掉。此外的其他模块行需要确认没有被注释,这样设置才能保证其他那些模块不会被载入。注:如果你想要更深入的利用<ruby>帧缓冲<rt>framebuffer</rt></ruby>,这些针对你的显卡的模块可以使你获得更好的性能。
|
||||
|
||||
然后,在 `/etc/initramfs-tools/modules` 的结尾增加两个模块:`vesafb` 和 `fbcon`,并且更新 iniramfs 镜像:
|
||||
|
||||
@ -35,7 +35,7 @@ $ sudo nano /etc/initramfs-tools/modules
|
||||
$ sudo update-initramfs -u
|
||||
```
|
||||
|
||||
[fbcon][1] 是 Linux 帧缓冲(framebuffer)终端,它运行在帧缓冲(framebuffer)之上并为其增加图形功能。而它需要一个帧缓冲(framebuffer)设备,这则是由 `vesafb` 模块来提供的。
|
||||
[fbcon][1] 是 Linux <ruby>帧缓冲<rt>framebuffer</rt></ruby>终端,它运行在<ruby>帧缓冲<rt>framebuffer</rt></ruby>之上并为其增加图形功能。而它需要一个<ruby>帧缓冲<rt>framebuffer</rt></ruby>设备,这则是由 `vesafb` 模块来提供的。
|
||||
|
||||
接下来,你需要修改你的 GRUB2 配置。在 `/etc/default/grub` 中你将会看到类似下面的一行:
|
||||
|
||||
@ -49,7 +49,7 @@ GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
|
||||
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vga=789"
|
||||
```
|
||||
|
||||
重启之后进入你的终端(Ctrl+Alt+F1)(LCTT 译注:在某些发行版中 Ctrl+Alt+F1 默认为图形界面,可以尝试 Ctrl+Alt+F2),然后就可以尝试播放一个视频了。下面的命令指定了 `fbdev2` 为视频输出设备,虽然我还没弄明白如何去选择用哪个输入设备,但是我用它成功过。默认的视频大小是 320x240,在此我给缩放到了 960:
|
||||
重启之后进入你的终端(`Ctrl+Alt+F1`)(LCTT 译注:在某些发行版中 `Ctrl+Alt+F1` 默认为图形界面,可以尝试 `Ctrl+Alt+F2`),然后就可以尝试播放一个视频了。下面的命令指定了 `fbdev2` 为视频输出设备,虽然我还没弄明白如何去选择用哪个输入设备,但是我用它成功过。默认的视频大小是 320x240,在此我给缩放到了 960:
|
||||
|
||||
```
|
||||
$ mplayer -vo fbdev2 -vf scale -zoom -xy 960 AlienSong_mp4.mov
|
||||
@ -69,19 +69,19 @@ MPlayer 可以播放 CD、DVD 以及网络视频流,并且还有一系列的
|
||||
$ fbi 文件名
|
||||
```
|
||||
|
||||
你可以使用方向键来在大图片中移动视野,使用 + 和 - 来缩放,或者使用 r 或 l 来向右或向左旋转 90 度。Escape 键则可以关闭查看的图片。此外,你还可以给 `fbi` 一个文件列表来实现幻灯播放:
|
||||
你可以使用方向键来在大图片中移动视野,使用 `+` 和 `-` 来缩放,或者使用 `r` 或 `l` 来向右或向左旋转 90 度。`Escape` 键则可以关闭查看的图片。此外,你还可以给 `fbi` 一个文件列表来实现幻灯播放:
|
||||
|
||||
```
|
||||
$ fbi --list 文件列表.txt
|
||||
```
|
||||
|
||||
`fbi` 还支持自动缩放。还可以使用 `-a` 选项来控制缩放比例。`--autoup` 和 `--autodown` 则是用于告知 `fbi` 只进行放大或者缩小。要调整图片切换时淡入淡出的时间则可以使用 `--blend [时间]` 来指定一个以毫秒为单位的时间长度。使用 k 和 j 键则可以切换文件列表中的上一张或下一张图片。
|
||||
`fbi` 还支持自动缩放。还可以使用 `-a` 选项来控制缩放比例。`--autoup` 和 `--autodown` 则是用于告知 `fbi` 只进行放大或者缩小。要调整图片切换时淡入淡出的时间则可以使用 `--blend [时间]` 来指定一个以毫秒为单位的时间长度。使用 `k` 和 `j` 键则可以切换文件列表中的上一张或下一张图片。
|
||||
|
||||
`fbi` 还提供了命令来为你浏览过的文件创建文件列表,或者将你的命令导出到文件中,以及一系列其它很棒的选项。你可以通过 `man fbi` 来查阅完整的选项列表。
|
||||
|
||||
### CMatrix 终端屏保
|
||||
|
||||
黑客帝国(The Matrix)屏保仍然是我非常喜欢的屏保之一(如图 2),仅次于弹跳牛(bouncing cow)。[CMatrix][3] 可以在终端运行。要运行它只需输入 `cmatrix`,然后可以用 Ctrl+C 来停止运行。执行 `cmatrix -s` 则会启动屏保模式,这样的话,按任意键都会直接退出。`-C` 参数可以设定颜色,譬如绿色(green)、红色(red)、蓝色(blue)、黄色(yellow)、白色(white)、紫色(magenta)、青色(cyan)或者黑色(black)。
|
||||
<ruby>黑客帝国<rt>The Matrix</rt></ruby>屏保仍然是我非常喜欢的屏保之一(如图 2),仅次于<ruby>弹跳牛<rt>bouncing cow</rt></ruby>。[CMatrix][3] 可以在终端运行。要运行它只需输入 `cmatrix`,然后可以用 `Ctrl+C` 来停止运行。执行 `cmatrix -s` 则会启动屏保模式,这样的话,按任意键都会直接退出。`-C` 参数可以设定颜色,譬如绿色(`green`)、红色(`red`)、蓝色(`blue`)、黄色(`yellow`)、白色(`white`)、紫色(`magenta`)、青色(`cyan`)或者黑色(`black`)。
|
||||
|
||||
![图 2 黑客帝国屏保](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_0.jpg?itok=E3f26R7w)
|
||||
|
||||
@ -91,7 +91,7 @@ CMatrix 还支持异步按键,这意味着你可以在它运行的时候改变
|
||||
|
||||
### fbgs PDF 阅读器
|
||||
|
||||
看起来,PDF 文档的流行是普遍且无法阻止的,而且 PDF 比它之前好了很多,譬如超链接、复制粘贴以及更好的文本搜索功能等。`fbgs` 是 `fbida` 包中提供的一个 PDF 阅读器。它可以设置页面大小、分辨率、指定页码以及绝大部分 `fbi` 所提供的选项,当然除了一些在 `man fbgs` 中列举出来的不可用选项。我主要用到的选项是页面大小,你可以选择 `-l`、`xl` 或者 `xxl`:
|
||||
看起来,PDF 文档是普遍流行且无法避免的,而且 PDF 比它之前的功能好了很多,譬如超链接、复制粘贴以及更好的文本搜索功能等。`fbgs` 是 `fbida` 包中提供的一个 PDF 阅读器。它可以设置页面大小、分辨率、指定页码以及绝大部分 `fbi` 所提供的选项,当然除了一些在 `man fbgs` 中列举出来的不可用选项。我主要用到的选项是页面大小,你可以选择 `-l`、`xl` 或者 `xxl`:
|
||||
|
||||
```
|
||||
$ fbgs -xl annoyingpdf.pdf
|
||||
@ -105,7 +105,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/1/multimedia-apps-linux-con
|
||||
|
||||
作者:[Carla Schroder][a]
|
||||
译者:[Yinr](https://github.com/Yinr)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,42 +1,42 @@
|
||||
如何启动进入 Linux 命令行
|
||||
======
|
||||
|
||||
![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/how-to-boot-into-linux-command-line_orig.jpg)
|
||||
|
||||
可能有时候你需要或者不想使用 GUI,也就是没有 X,而是选择命令行启动 [Linux][1]。不管是什么原因,幸运的是,直接启动进入 Linux **命令行** 非常简单。在其他内核选项之后,它需要对引导参数进行简单的更改。此更改将系统引导到指定的运行级别。
|
||||
可能有时候你启动 Linux 时需要或者希望不使用 GUI(图形用户界面),也就是没有 X,而是选择命令行。不管是什么原因,幸运的是,直接启动进入 Linux 命令行非常简单。它只需要在其他内核选项之后对引导参数进行简单的更改。此更改将系统引导到指定的运行级别。
|
||||
|
||||
### 为什么要这样做?
|
||||
|
||||
如果你的系统由于无效配置或者显示管理器损坏或任何可能导致 GUI 无法正常启动的情况而无法运行 Xorg,那么启动到命令行将允许你通过登录到终端进行故障排除(假设你知道要怎么开始),并能做任何你需要做的东西。引导到命令行也是一个很好的熟悉终端的方式,不然,你也可以为了好玩这么做。
|
||||
如果你的系统由于无效配置或者显示管理器损坏或任何可能导致 GUI 无法正常启动的情况而无法运行 Xorg,那么启动到命令行将允许你通过登录到终端进行故障排除(假设你知道要怎么做),并能做任何你需要做的东西。引导到命令行也是一个很好的熟悉终端的方式,不然,你也可以为了好玩这么做。
|
||||
|
||||
### 访问 GRUB 菜单
|
||||
|
||||
在启动时,你需要访问 GRUB 启动菜单。如果在每次启动计算机时菜单未设置为显示,那么可能需要在系统启动之前按住 SHIFT 键。在菜单中,需要选择 [Linux 发行版][2]条目。高亮显示后,按下 “e” 编辑引导参数。
|
||||
在启动时,你需要访问 GRUB 启动菜单。如果在每次启动计算机时菜单未设置为显示,那么可能需要在系统启动之前按住 `SHIFT` 键。在菜单中,需要选择 Linux 发行版条目。高亮显示该条目后,按下 `e` 编辑引导参数。
|
||||
|
||||
[![zorin os grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/gnu-grub_orig.png)][3]
|
||||
|
||||
较老的 GRUB 版本遵循类似的机制。启动管理器应提供有关如何编辑启动参数的说明。
|
||||
较老的 GRUB 版本遵循类似的机制。启动管理器应提供有关如何编辑启动参数的说明。
|
||||
|
||||
### 指定运行级别
|
||||
|
||||
编辑器将出现,你将看到 GRUB 解析到内核的选项。移动到以 “linux” 开头的行(旧的 GRUB 版本可能是 “kernel”,选择它并按照说明操作)。这指定了解析到内核的参数。在该行的末尾(可能会出现跨越多行,具体取决于分辨率),只需指定要引导的运行级别,即 3(多用户模式,纯文本)。
|
||||
会出现一个编辑器,你将看到 GRUB 传递给内核的选项。移动到以 `linux` 开头的行(旧的 GRUB 版本可能是 `kernel`,选择它并按照说明操作)。这指定了要传递给内核的参数。在该行的末尾(可能会跨越多行显示,具体取决于你的终端分辨率),只需指定要引导的运行级别,即 `3`(多用户模式,纯文本)。
|
||||
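作为参考,编辑之后的 `linux` 行大致是下面这个样子(这里的内核路径和 UUID 都是占位示例,实际内容以你机器上显示的为准):

```
linux /boot/vmlinuz-4.x.x-generic root=UUID=xxxx-xxxx ro quiet splash 3
```

只需在行尾追加那个 `3`,该行的其余部分保持不变。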
|
||||
[![customize grub menu](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_orig.png)][4]
|
||||
|
||||
按下 Ctrl-X 或 F10 将使用这些参数启动系统。开机和以前一样。唯一改变的是启动的运行级别。
|
||||
|
||||
|
||||
按下 `Ctrl-X` 或 `F10` 将使用这些参数启动系统。开机和以前一样。唯一改变的是启动的运行级别。
|
||||
|
||||
这是启动后的页面:
|
||||
|
||||
[![boot linux in command line](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png)][5]
|
||||
[![boot linux in command line](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/runlevel_1_orig.png)][5]
|
||||
|
||||
### 运行级别
|
||||
|
||||
你可以指定不同的运行级别,默认运行级别是 5。1 启动到“单用户”模式,它会启动进入 root shell。3 提供了一个多用户命令行系统。
|
||||
你可以指定不同的运行级别,默认运行级别是 `5`(多用户图形界面)。`1` 启动到“单用户”模式,它会启动进入 root shell。`3` 提供了一个多用户命令行系统。
|
||||
|
||||
### 从命令行切换
|
||||
|
||||
在某个时候,你可能想要再次运行显示管理器来使用 GUI,最快的方法是运行这个:
|
||||
在某个时候,你可能想要运行显示管理器来再次使用 GUI,最快的方法是运行这个:
|
||||
|
||||
```
|
||||
$ sudo init 5
|
||||
```
|
||||
@ -49,7 +49,7 @@ via: http://www.linuxandubuntu.com/home/how-to-boot-into-linux-command-line
|
||||
|
||||
作者:[LinuxAndUbuntu][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,62 +1,63 @@
|
||||
在终端显示世界地图
|
||||
MapSCII:在终端显示世界地图
|
||||
======
|
||||
我偶然发现了一个有趣的工具。在终端的世界地图!是的,这太酷了。向 **MapSCII** 问好,这是可在 xterm 兼容终端渲染的盲文和 ASCII 世界地图。它支持 GNU/Linux、Mac OS 和 Windows。我以为这是另一个在 GitHub 上托管的项目。但是我错了!他们做了令人印象深刻的事。我们可以使用我们的鼠标指针在世界地图的任何地方拖拽放大和缩小。其他显著的特性是:
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-8-720x340.png)
|
||||
|
||||
我偶然发现了一个有趣的工具。在终端里的世界地图!是的,这太酷了。给 `MapSCII` 打 call,这是可在 xterm 兼容终端上渲染的布莱叶盲文和 ASCII 世界地图。它支持 GNU/Linux、Mac OS 和 Windows。我原以为它只不过是一个在 GitHub 上托管的项目而已,但是我错了!他们做的事令人印象深刻。我们可以使用我们的鼠标指针在世界地图的任何地方拖拽放大和缩小。其他显著的特性是:
|
||||
|
||||
* 发现任何特定地点周围的兴趣点
|
||||
* 高度可定制的图层样式,带有[ Mapbox 样式][1]支持
|
||||
* 连接到任何公共或私有矢量贴片服务器
|
||||
* 高度可定制的图层样式,支持 [Mapbox 样式][1]
|
||||
* 可连接到任何公共或私有的矢量贴片服务器
|
||||
* 或者使用已经提供并已优化的基于 [OSM2VectorTiles][2] 服务器
|
||||
* 离线工作,发现本地 [VectorTile][3]/[MBTiles][4]
|
||||
* 可以离线工作并发现本地的 [VectorTile][3]/[MBTiles][4]
|
||||
* 兼容大多数 Linux 和 OSX 终端
|
||||
* 高度优化算法的流畅体验
|
||||
|
||||
|
||||
|
||||
### 使用 MapSCII 在终端中显示世界地图
|
||||
|
||||
要打开地图,只需从终端运行以下命令:
|
||||
|
||||
```
|
||||
telnet mapscii.me
|
||||
```
|
||||
|
||||
这是我终端上的世界地图。
|
||||
|
||||
[![][5]][6]
|
||||
![][6]
|
||||
|
||||
很酷,是吗?
|
||||
|
||||
要切换到盲文视图,请按 **c**。
|
||||
要切换到布莱叶盲文视图,请按 `c`。
|
||||
|
||||
[![][5]][7]
|
||||
![][7]
|
||||
|
||||
Type **c** again to switch back to the previous format **.**
|
||||
再次输入 **c** 切回以前的格式。
|
||||
再次输入 `c` 切回以前的格式。
|
||||
|
||||
要滚动地图,请使用**向上**、向下**、**向左**、**向右**箭头键。要放大/缩小位置,请使用 **a** 和 **a** 键。另外,你可以使用鼠标的滚轮进行放大或缩小。要退出地图,请按 **q**。
|
||||
要滚动地图,请使用“向上”、“向下”、“向左”、“向右”箭头键。要放大/缩小位置,请使用 `a` 和 `z` 键。另外,你可以使用鼠标的滚轮进行放大或缩小。要退出地图,请按 `q`。
|
||||
|
||||
就像我已经说过的,不要认为这是一个简单的项目。点击地图上的任何位置,然后按 **“a”** 放大。
|
||||
就像我已经说过的,不要认为这是一个简单的项目。点击地图上的任何位置,然后按 `a` 放大。
|
||||
|
||||
放大后,下面是一些示例截图。
|
||||
|
||||
[![][5]][8]
|
||||
![][8]
|
||||
|
||||
我可以放大查看我的国家(印度)的州。
|
||||
|
||||
[![][5]][9]
|
||||
![][9]
|
||||
|
||||
和州内的地区(Tamilnadu):
|
||||
|
||||
[![][5]][10]
|
||||
![][10]
|
||||
|
||||
甚至是地区内的镇 [Taluks][11]:
|
||||
|
||||
[![][5]][12]
|
||||
![][12]
|
||||
|
||||
还有,我完成学业的地方:
|
||||
|
||||
[![][5]][13]
|
||||
![][13]
|
||||
|
||||
即使它只是一个最小的城镇,MapSCII 也能准确地显示出来。 MapSCII 使用 [**OpenStreetMap**][14] 来收集数据。
|
||||
即使它只是一个很小的城镇,MapSCII 也能准确地显示出来。MapSCII 使用 [OpenStreetMap][14] 来收集数据。
|
||||
|
||||
### 在本地安装 MapSCII
|
||||
|
||||
@ -64,15 +65,16 @@ Type **c** again to switch back to the previous format **.**
|
||||
|
||||
确保你的系统上已经安装了 Node.js。如果还没有,请参阅以下链接。
|
||||
|
||||
[Install NodeJS on Linux][15]
|
||||
- [在 Linux 上安装 NodeJS][15]
|
||||
|
||||
然后,运行以下命令来安装它。
|
||||
|
||||
```
|
||||
sudo npm install -g mapscii
|
||||
|
||||
```
|
||||
|
||||
要启动 MapSCII,请运行:
|
||||
|
||||
```
|
||||
mapscii
|
||||
```
|
||||
@ -81,15 +83,13 @@ mapscii
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/mapscii-world-map-terminal/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -99,13 +99,13 @@ via: https://www.ostechnix.com/mapscii-world-map-terminal/
|
||||
[3]:https://github.com/mapbox/vector-tile-spec
|
||||
[4]:https://github.com/mapbox/mbtiles-spec
|
||||
[5]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-1-2.png ()
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-2.png ()
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-3.png ()
|
||||
[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-4.png ()
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-5.png ()
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-1-2.png
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-2.png
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-3.png
|
||||
[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-4.png
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-5.png
|
||||
[11]:https://en.wikipedia.org/wiki/Tehsils_of_India
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-6.png ()
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-7.png ()
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-6.png
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-7.png
|
||||
[14]:https://www.openstreetmap.org/
|
||||
[15]:https://www.ostechnix.com/install-node-js-linux/
|
@ -0,0 +1,170 @@
|
||||
8 个你不一定全都了解的 rm 命令示例
|
||||
======
|
||||
|
||||
删除文件和复制/移动文件一样,都是很基础的操作。在 Linux 中,有一个专门的命令 `rm`,可用于完成所有删除相关的操作。在本文中,我们将用些容易理解的例子来讨论这个命令的基本使用。
|
||||
|
||||
但在我们开始前,值得指出的是本文所有示例都在 Ubuntu 16.04 LTS 中测试过。
|
||||
|
||||
### Linux rm 命令概述
|
||||
|
||||
通俗的讲,我们可以认为 `rm` 命令是用于删除文件和目录的。下面是此命令的语法:
|
||||
|
||||
```
|
||||
rm [选项]... [要删除的文件/目录]...
|
||||
```
|
||||
|
||||
下面是命令使用说明:
|
||||
|
||||
> GNU 版本的 `rm` 命令手册文档中写道:`rm` 删除每个指定的文件,默认情况下不删除目录。
|
||||
|
||||
> 当删除的文件超过三个或者提供了选项 `-r`、`-R` 或 `--recursive`(LCTT 译注:表示递归删除目录中的文件)时,如果给出 `-I`(LCTT 译注:大写的 I)或 `--interactive=once` 选项(LCTT 译注:表示开启交互一次),则 `rm` 命令会提示用户是否继续整个删除操作,如果用户回应不是确认(LCTT 译注:即没有回复 `y`),则整个命令立刻终止。
|
||||
|
||||
> 另外,如果被删除文件是不可写的,标准输入是终端,这时如果没有提供 `-f` 或 `--force` 选项,或者提供了 `-i`(LCTT 译注:小写的 i) 或 `--interactive=always` 选项,`rm` 会提示用户是否要删除此文件,如果用户回应不是确认(LCTT 译注:即没有回复 `y`),则跳过此文件。
|
||||
|
||||
|
||||
下面这些问答式例子会让你更好的理解这个命令的使用。
|
||||
|
||||
### Q1. 如何用 rm 命令删除文件?
|
||||
|
||||
这是非常简单和直观的。你只需要把文件名(如果文件不是在当前目录中,则还需要添加文件路径)传入给 `rm` 命令即可。
|
||||
|
||||
(LCTT 译注:可以用空格隔开传入多个文件名称。)
|
||||
|
||||
```
|
||||
rm 文件1 文件2 ...
|
||||
```
|
||||
如:
|
||||
|
||||
```
|
||||
rm testfile.txt
|
||||
```
|
||||
|
||||
[![How to remove files using rm command][1]][2]
|
||||
|
||||
### Q2. 如何用 `rm` 命令删除目录?
|
||||
|
||||
如果你试图删除一个目录,你需要提供 `-r` 选项。否则 `rm` 会抛出一个错误告诉你正试图删除一个目录。
|
||||
|
||||
(LCTT 译注:`-r` 表示递归地删除目录下的所有文件和目录。)
|
||||
|
||||
```
|
||||
rm -r [目录名称]
|
||||
```
|
||||
|
||||
如:
|
||||
|
||||
```
|
||||
rm -r testdir
|
||||
```
|
||||
|
||||
[![How to remove directories using rm command][3]][4]
|
||||
|
||||
### Q3. 如何让删除操作前有确认提示?
|
||||
|
||||
如果你希望在每个删除操作完成前都有确认提示,可以使用 `-i` 选项。
|
||||
|
||||
```
|
||||
rm -i [文件/目录]
|
||||
```
|
||||
|
||||
比如,你想要删除一个目录“testdir”,但需要每个删除操作都有确认提示,你可以这么做:
|
||||
|
||||
```
|
||||
rm -r -i testdir
|
||||
```
|
||||
|
||||
[![How to make rm prompt before every removal][5]][6]
|
||||
|
||||
### Q4. 如何让 rm 忽略不存在的文件或目录?
|
||||
|
||||
如果你删除一个不存在的文件或目录时,`rm` 命令会抛出一个错误,如:
|
||||
|
||||
[![Linux rm command example][7]][8]
|
||||
|
||||
然而,如果你愿意,你可以使用 `-f` 选项(LCTT 译注:即 “force”)让此次操作强制执行,忽略错误提示。
|
||||
|
||||
```
|
||||
rm -f [文件...]
|
||||
```
|
||||
|
||||
[![How to force rm to ignore nonexistent files][9]][10]
|
||||
|
||||
### Q5. 如何让 rm 仅在某些场景下确认删除?
|
||||
|
||||
选项 `-I`,可保证在删除超过 3 个文件时或递归删除时(LCTT 译注: 如删除目录)仅提示一次确认。
|
||||
|
||||
比如,下面的截图展示了 `-I` 选项的作用——当两个文件被删除时没有提示,当超过 3 个文件时会有提示。
|
||||
|
||||
[![How to make rm prompt only in some scenarios][11]][12]
|
||||
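下面是一个可以在临时目录里复制运行的小演示(这里用 `yes` 自动回答那一次提示,仅为演示脚本化用法,日常使用时请手动确认):

```
cd "$(mktemp -d)"
touch f1 f2 f3 f4

# 删除 4 个文件,-I 只提示一次;用 yes 自动回答 y
yes | rm -I f1 f2 f3 f4
ls    # 已全部删除,无输出
```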
|
||||
### Q6. 当删除根目录时 rm 是如何工作的?
|
||||
|
||||
当然,删除根目录(`/`)是 Linux 用户最不想要的操作。这也就是为什么默认 `rm` 命令不支持在根目录上执行递归删除操作。(LCTT 译注:早期的 `rm` 命令并无此预防行为。)
|
||||
|
||||
[![How rm works when dealing with root directory][13]][14]
|
||||
|
||||
然而,如果你非得完成这个操作,你需要使用 `--no-preserve-root` 选项。当提供此选项,`rm` 就不会特殊处理根目录(`/`)了。
|
||||
|
||||
假如你想知道在哪些场景下 Linux 用户会删除他们的根目录,点击[这里][15]。
|
||||
|
||||
### Q7. 如何让 rm 仅删除空目录?
|
||||
|
||||
假如你需要 `rm` 在删除目录时仅删除空目录,你可以使用 `-d` 选项。
|
||||
|
||||
```
|
||||
rm -d [目录]
|
||||
```
|
||||
|
||||
下面的截图展示 `-d` 选项的用途——仅空目录被删除了。
|
||||
|
||||
[![How to make rm only remove empty directories][16]][17]
|
||||
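可以用下面这个小演示自己验证 `-d` 的行为(在临时目录中创建一个空目录和一个非空目录,目录名仅为示例):

```
cd "$(mktemp -d)"
mkdir emptydir fulldir
touch fulldir/somefile

rm -d emptydir    # 成功:目录为空,被删除
rm -d fulldir     # 失败:报 “Directory not empty”,目录被保留
```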
|
||||
### Q8. 如何让 rm 显示当前删除操作的详情?
|
||||
|
||||
如果你想让 `rm` 显示其当前执行操作的详细情况,使用 `-v` 选项可以做到。
|
||||
|
||||
```
|
||||
rm -v [文件/目录]
|
||||
```
|
||||
|
||||
如:
|
||||
|
||||
[![How to force rm to emit details of operation it is performing][18]][19]
|
||||
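下面是一个可以复制运行的小演示(不同版本和语言环境下提示信息的引号样式可能略有差别):

```
cd "$(mktemp -d)"
touch a.txt b.txt

rm -v a.txt b.txt
# 输出类似:
# removed 'a.txt'
# removed 'b.txt'
```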
|
||||
### 结论
|
||||
|
||||
考虑到 `rm` 命令提供的功能,可以说其是 Linux 中使用频率最高的命令之一了(就像 [cp][20] 和 `mv` 一样)。在本文中,我们介绍了其提供的几乎所有主要选项。`rm` 命令有一定的学习曲线,因此在日常工作中开始使用此命令之前,你需要花费些时间练习它的选项。
|
||||
更多的信息,请查阅此命令的 [man 手册页][21]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-rm-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[yizhuoyan](https://github.com/yizhuoyan)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/command-tutorial/rm-basic-usage.png
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/big/rm-basic-usage.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/rm-r.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/big/rm-r.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/rm-i-option.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-i-option.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/rm-non-ext-error.png
|
||||
[8]:https://www.howtoforge.com/images/command-tutorial/big/rm-non-ext-error.png
|
||||
[9]:https://www.howtoforge.com/images/command-tutorial/rm-f-option.png
|
||||
[10]:https://www.howtoforge.com/images/command-tutorial/big/rm-f-option.png
|
||||
[11]:https://www.howtoforge.com/images/command-tutorial/rm-I-option.png
|
||||
[12]:https://www.howtoforge.com/images/command-tutorial/big/rm-I-option.png
|
||||
[13]:https://www.howtoforge.com/images/command-tutorial/rm-root-default.png
|
||||
[14]:https://www.howtoforge.com/images/command-tutorial/big/rm-root-default.png
|
||||
[15]:https://superuser.com/questions/742334/is-there-a-scenario-where-rm-rf-no-preserve-root-is-needed
|
||||
[16]:https://www.howtoforge.com/images/command-tutorial/rm-d-option.png
|
||||
[17]:https://www.howtoforge.com/images/command-tutorial/big/rm-d-option.png
|
||||
[18]:https://www.howtoforge.com/images/command-tutorial/rm-v-option.png
|
||||
[19]:https://www.howtoforge.com/images/command-tutorial/big/rm-v-option.png
|
||||
[20]:https://www.howtoforge.com/linux-cp-command/
|
||||
[21]:https://linux.die.net/man/1/rm
|
@ -1,20 +1,20 @@
|
||||
八种在 Linux 上生成随机密码的方法
|
||||
======
|
||||
学习使用 8 种 Linux 原生命令或第三方组件来生成随机密码。
|
||||
|
||||
> 学习使用 8 种 Linux 原生命令或第三方实用程序来生成随机密码。
|
||||
|
||||
![][1]
|
||||
|
||||
在这篇文章中,我们将引导你通过几种不同的方式在 Linux 中生成随机密码。其中几种利用原生 Linux 命令,另外几种则利用极易在 Linux 机器上安装的第三方工具或组件实现。在这里我们利用像 `openssl`, [dd][2], `md5sum`, `tr`, `urandom` 这样的原生命令和 mkpasswd,randpw,pwgen,spw,gpg,xkcdpass,diceware,revelation,keepaasx,passwordmaker 这样的第三方工具。
|
||||
在这篇文章中,我们将引导你通过几种不同的方式在 Linux 终端中生成随机密码。其中几种利用原生 Linux 命令,另外几种则利用极易在 Linux 机器上安装的第三方工具或实用程序实现。在这里我们利用像 `openssl`、[dd][2]、`md5sum`、`tr`、`urandom` 这样的原生命令,和 `mkpasswd`、`randpw`、`pwgen`、`spw`、`gpg`、`xkcdpass`、`diceware`、`revelation`、`keepassx`、`passwordmaker` 这样的第三方工具。
|
||||
|
||||
其实这些方法就是生成一些能被用作密码的随机字母字符串。随机密码可以用于新用户的密码,不管用户基数有多大,这些密码都是独一无二的。话不多说,让我们来看看 8 种不同的在 Linux 上生成随机密码的方法吧。
|
||||
|
||||
##### 使用 mkpasswd 组件生成密码
|
||||
### 使用 mkpasswd 实用程序生成密码
|
||||
|
||||
`mkpasswd` 在基于 RHEL 的系统上随 `expect` 软件包一起安装。在基于 Debian 的系统上 `mkpasswd` 则在软件包 `whois` 中。直接安装 `mkpasswd` 软件包将会导致错误 -
|
||||
`mkpasswd` 在基于 RHEL 的系统上随 `expect` 软件包一起安装。在基于 Debian 的系统上 `mkpasswd` 则在软件包 `whois` 中。直接安装 `mkpasswd` 软件包将会导致错误:
|
||||
|
||||
RHEL 系统:软件包 mkpasswd 不可用。
|
||||
|
||||
Debian 系统:错误:无法定位软件包 mkpasswd。
|
||||
- RHEL 系统:软件包 mkpasswd 不可用。
|
||||
- Debian 系统:错误:无法定位软件包 mkpasswd。
|
||||
|
||||
所以按照上面所述安装他们的父软件包,就没问题了。
|
||||
|
||||
@ -28,9 +28,9 @@ root@kerneltalks# mkpasswd teststring << on Ubuntu
|
||||
XnlrKxYOJ3vik
|
||||
```
|
||||
|
||||
这个命令在不同的系统上表现得不一样,所以要对应工作。你也可以通过参数来控制长度等选项。你可以查阅 man 手册来探索。
|
||||
这个命令在不同的系统上表现不尽相同,请按所在系统相应使用。你也可以通过参数来控制长度等选项,详情可查阅 man 手册。
|
||||
|
||||
##### 使用 openssl 生成密码
|
||||
### 使用 openssl 生成密码
|
||||
|
||||
几乎所有 Linux 发行版都包含 openssl。我们可以利用它的随机功能来生成可以用作密码的随机字母字符串。
|
||||
|
||||
@ -41,18 +41,18 @@ nU9LlHO5nsuUvw==
|
||||
|
||||
这里我们使用 `base64` 编码随机函数,最后一个数字参数表示长度。
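如果想把输出进一步限制为只含字母和数字,可以再接一个 `tr` 过滤。下面的组合命令是笔者补充的示意(假设系统已安装 `openssl`),并非原文命令:

```shell
# 生成 32 字节随机数并 base64 编码,过滤掉 + / = 等符号,截取前 14 个字符
openssl rand -base64 32 | tr -dc 'A-Za-z0-9' | head -c 14; echo
```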
|
||||
|
||||
##### 使用 urandom 生成密码
|
||||
### 使用 urandom 生成密码
|
||||
|
||||
设备文件 `/dev/urandom` 是另一个获得随机字符串的方法。我们使用 `tr` 功能裁剪输出来获得随机字符串,并把它作为密码。
|
||||
设备文件 `/dev/urandom` 是另一个获得随机字符串的方法。我们使用 `tr` 功能并裁剪输出来获得随机字符串,并把它作为密码。
|
||||
|
||||
```bash
|
||||
root@kerneltalks # strings /dev/urandom |tr -dc A-Za-z0-9 | head -c20; echo
|
||||
UiXtr0NAOSIkqtjK4c0X
|
||||
```
|
||||
|
||||
##### 使用 dd 命令生成密码
|
||||
### 使用 dd 命令生成密码
|
||||
|
||||
我们甚至可以使用 /dev/urandom 设备配合 [dd 命令][2] 来获取随机字符串。
|
||||
我们甚至可以使用 `/dev/urandom` 设备配合 [dd 命令][2] 来获取随机字符串。
|
||||
|
||||
```bash
|
||||
root@kerneltalks# dd if=/dev/urandom bs=1 count=15|base64 -w 0
|
||||
@ -62,16 +62,16 @@ root@kerneltalks# dd if=/dev/urandom bs=1 count=15|base64 -w 0
|
||||
QMsbe2XbrqAc2NmXp8D0
|
||||
```
|
||||
|
||||
我们需要将结果通过 `base64` 编码使它能被人类读懂。你可以使用计数值来获取想要的长度。想要获得更简洁的输出的话,可以将 std2 重定向到 `/dev/null`。简洁输出的命令是 -
|
||||
我们需要将结果通过 `base64` 编码,使它成为人类可读的形式。你可以通过 `count` 的值来获取想要的长度。想要获得更简洁的输出的话,可以将“标准错误输出”重定向到 `/dev/null`。简洁输出的命令是:
|
||||
|
||||
```bash
|
||||
root@kerneltalks # dd if=/dev/urandom bs=1 count=15 2>/dev/null|base64 -w 0
|
||||
F8c3a4joS+a3BdPN9C++
|
||||
```
|
||||
|
||||
##### 使用 md5sum 生成密码
|
||||
### 使用 md5sum 生成密码
|
||||
|
||||
另一种获取可用作密码的随机字符串的方法是计算 MD5 校验值!校验值看起来确实像是随机字符串组合在一起,我们可以用作为密码。确保你的计算源是个变量,这样的话每次运行命令时生成的校验值都不一样。比如 `date`![date 命令][3] 总会生成不同的输出。
|
||||
另一种获取可用作密码的随机字符串的方法是计算 MD5 校验值!校验值看起来确实像是随机字符串组合在一起,我们可以用作密码。确保你的计算源是个变量,这样的话每次运行命令时生成的校验值都不一样。比如 [date 命令][3] 总会生成不同的输出。
|
||||
|
||||
```bash
|
||||
root@kerneltalks # date |md5sum
|
||||
@ -80,9 +80,9 @@ root@kerneltalks # date |md5sum
|
||||
|
||||
在这里我们将 `date` 命令的输出通过 `md5sum` 得到了校验和!你也可以用 [cut 命令][4] 裁剪你需要的长度。
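例如,下面这条组合命令(笔者补充的示意)取校验和的前 12 个字符作为密码:

```shell
# date 每次的输出不同,因而 MD5 校验和也不同;cut 截取前 12 个字符
date | md5sum | cut -c 1-12
```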
|
||||
|
||||
##### 使用 pwgen 生成密码
|
||||
### 使用 pwgen 生成密码
|
||||
|
||||
`pwgen` 软件包在[类 EPEL 仓库][5](译者注:企业版 Linux 附加软件包)中。`pwgen` 更专注于生成可发音的密码,但它们不在英语词典中,也不是纯英文的。标准发行版仓库中可能并不包含这个工具。安装这个软件包然后运行 `pwgen` 命令行。Boom !
|
||||
`pwgen` 软件包在类似 [EPEL 软件仓库][5](LCTT 译注:企业版 Linux 附加软件包)中。`pwgen` 更专注于生成可发音的密码,但它们不在英语词典中,也不是纯英文的。标准发行版仓库中可能并不包含这个工具。安装这个软件包然后运行 `pwgen` 命令行。Boom !
|
||||
|
||||
```bash
|
||||
root@kerneltalks # pwgen
|
||||
@ -92,9 +92,10 @@ aic2OaDa iexieQu8 Aesoh4Ie Eixou9ph ShiKoh0i uThohth7 taaN3fuu Iege0aeZ
|
||||
cah3zaiW Eephei0m AhTh8guo xah1Shoo uh8Iengo aifeev4E zoo4ohHa fieDei6c
|
||||
aorieP7k ahna9AKe uveeX7Hi Ohji5pho AigheV7u Akee9fae aeWeiW4a tiex8Oht
|
||||
```
|
||||
|
||||
你的终端会呈现出一个密码列表!你还想要什么呢?好吧,如果你还想再仔细探索的话,`pwgen` 还有很多自定义选项,这些都可以在 man 手册里查阅到。
|
||||
|
||||
##### 使用 gpg 工具生成密码
|
||||
### 使用 gpg 工具生成密码
|
||||
|
||||
GPG 是一个遵循 OpenPGP 标准的加密及签名工具。大部分 gpg 工具都预先被安装好了(至少在我的 RHEL7 上是这样)。但如果没有的话你可以寻找 `gpg` 或 `gpg2` 软件包并[安装][6]它。
|
||||
|
||||
@ -107,10 +108,12 @@ mL8i+PKZ3IuN6a7a
|
||||
|
||||
这里我们传了生成随机字节序列选项(`--gen-random`),质量为 1(第一个参数),次数 12 (第二个参数)。选项 `--armor` 保证以 `base64` 编码输出。
|
||||
|
||||
##### 使用 xkcdpass 生成密码
|
||||
### 使用 xkcdpass 生成密码
|
||||
|
||||
著名的极客幽默网站 [xkcd][7],发表了一篇非常有趣的文章,是关于好记但又复杂的密码的。你可以在[这里][8]阅读。所以 `xkcdpass` 工具就受这篇文章启发,做了这样的工作!这是一个 Python 软件包,可以在[这里][9]的 Python 的官网上找到它。
|
||||
|
||||
![](https://imgs.xkcd.com/comics/password_strength.png)
|
||||
|
||||
所有的安装使用说明都在上面那个页面提及了。这里是安装步骤和我的测试 RHEL 服务器的输出,以供参考。
|
||||
|
||||
```bash
|
||||
@ -229,7 +232,7 @@ Processing dependencies for xkcdpass==1.14.3
|
||||
Finished processing dependencies for xkcdpass==1.14.3
|
||||
```
|
||||
|
||||
现在运行 xkcdpass 命令,将会随机给出你几个像下面这样的字典单词 -
|
||||
现在运行 `xkcdpass` 命令,将会随机给出你几个像下面这样的字典单词:
|
||||
|
||||
```bash
|
||||
root@kerneltalks # xkcdpass
|
||||
@ -245,9 +248,10 @@ root@kerneltalks # xkcdpass |md5sum
|
||||
root@kerneltalks # xkcdpass |md5sum
|
||||
ad79546e8350744845c001d8836f2ff2 -
|
||||
```
|
||||
|
||||
或者你甚至可以把所有单词串在一起作为一个超长的密码,不仅非常好记,也不容易被电脑程序攻破。
|
||||
|
||||
Linux 上还有像 [Diceware][10], [KeePassX][11], [Revelation][12], [PasswordMaker][13] 这样的工具,也可以考虑用来生成强随机密码。
|
||||
Linux 上还有像 [Diceware][10]、 [KeePassX][11]、 [Revelation][12]、 [PasswordMaker][13] 这样的工具,也可以考虑用来生成强随机密码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -255,7 +259,7 @@ via: https://kerneltalks.com/tips-tricks/8-ways-to-generate-random-password-in-l
|
||||
|
||||
作者:[kerneltalks][a]
|
||||
译者:[heart4lor](https://github.com/heart4lor)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[Locez](https://github.com/locez)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -3,13 +3,13 @@
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb-idea-think-yearbook-lead.png?itok=5ZpCm0Jh)
|
||||
|
||||
如果您从未使用过 [Git][1],甚至可能从未听说过它。莫慌张,只需要一步步地跟着入门教程,很快您就会在 [GitHub][2] 上拥有一个全新的 Git 仓库。
|
||||
如果您从未使用过 [Git][1],甚至可能从未听说过它。莫慌张,只需要一步步地跟着这篇入门教程,很快您就会在 [GitHub][2] 上拥有一个全新的 Git 仓库。
|
||||
|
||||
在开始之前,让我们先理清一个常见的误解:Git 并不是 GitHub。Git 是一套版本控制系统(或者说是一款软件),能够协助您跟踪计算机程序和文件在任何时间的更改。它同样允许您在程序、代码和文件操作上与同事协作。GitHub 以及类似服务(包括 GitLab 和 BitBucket)都属于部署了 Git 程序的网站,能够托管您的代码。
|
||||
|
||||
### 步骤 1:申请一个 GitHub 账户
|
||||
|
||||
在 [GitHub.com][3] (免费)网站上创建一个账户是最简单的方式。
|
||||
在 [GitHub.com][3] 网站上(免费)创建一个账户是最简单的方式。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide1.png)
|
||||
|
||||
@ -17,13 +17,13 @@
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide2.png)
|
||||
|
||||
### 步骤 2:创建一个新的 repository
|
||||
### 步骤 2:创建一个新的仓库
|
||||
|
||||
一个 repository(仓库),类似于能储存物品的场所或是容器;在这里,我们创建仓库存储代码。在 `+` 符号内(在插图的右上角,我已经选中它了) 的下拉菜单中选择 **New Pepositiry**。
|
||||
一个仓库(repository),类似于能储存物品的场所或是容器;在这里,我们创建仓库来存储代码。在 `+` 符号(在插图的右上角,我已经选中它了)的下拉菜单中选择 **New Repository**。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide3.png)
|
||||
|
||||
给您的仓库命名(比如说,123)然后点击 **Create Repository**。无需考虑本页面的其他选项。
|
||||
给您的仓库命名(比如说,Demo)然后点击 **Create Repository**。无需考虑本页面的其他选项。
|
||||
|
||||
恭喜!您已经在 GitHub.com 中建立了您的第一个仓库。
|
||||
|
||||
@ -33,73 +33,83 @@
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide4.png)
|
||||
|
||||
不必惊慌,它比看上去简单。跟紧步骤。忽略其他内容,注意截图上的“...or create a new repository on the command line,”。
|
||||
不必惊慌,它比看上去简单。跟紧步骤。忽略其他内容,注意截图上的 “...or create a new repository on the command line,”。
|
||||
|
||||
在您的计算机中打开终端。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide5.png)
|
||||
|
||||
键入 `git` 然后回车。如果命令行显示 `bash: git: command not found`,在您的操作系统或发行版使用 [安装 Git][4] 命令。键入 `git` 并回车检查是否成功安装;如果安装成功,您将看见大量关于使用说明的信息。
|
||||
键入 `git` 然后回车。如果命令行显示 `bash: git: command not found`,请在您的操作系统或发行版上[安装 Git][4]。键入 `git` 并回车检查是否成功安装;如果安装成功,您将看见大量关于使用该命令的说明信息。
|
||||
|
||||
在终端内输入:
|
||||
|
||||
```
|
||||
mkdir Demo
|
||||
```
|
||||
|
||||
这个命令将会创建一个名为 Demo 的目录(文件夹)。
|
||||
|
||||
如下命令将会切换终端目录,跳转到 Demo 目录:
|
||||
|
||||
```
|
||||
cd Demo
|
||||
```
|
||||
|
||||
然后输入:
|
||||
|
||||
```
|
||||
echo "#Demo" >> README.md
|
||||
```
|
||||
|
||||
创建一个名为 `README.md` 的文件,并写入 `#Demo`。检查文件是否创建成功,请输入:
|
||||
|
||||
```
|
||||
cat README.md
|
||||
```
|
||||
|
||||
这将会为您显示 `README.md` 文件的内容,如果文件创建成功,您的终端会有如下显示:
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide7.png)
|
||||
|
||||
使用 Git 程序告诉您的电脑,Demo 是一个被 Git 托管的目录,请输入:
|
||||
使用 Git 程序告诉您的电脑,Demo 是一个被 Git 管理的目录,请输入:
|
||||
|
||||
```
|
||||
git init
|
||||
```
|
||||
|
||||
然后,告诉 Git 程序您关心的文件并且想在此刻起跟踪它的任何改变,请输入:
|
||||
|
||||
```
|
||||
git add README.md
|
||||
```
|
||||
|
||||
### 步骤 4:创建一次提交
|
||||
|
||||
目前为止,您已经创建了一个文件,并且已经通知了 Git,现在,是时候创建一次提交了。提交被看作为一个里程碑。每当完成一些工作之时,您都可以创建一次提交,保存文件当前版本,这样一来,您可以返回之前的版本,并且查看那时候的文件内容。无论那一次,您对修改过后的文件创建的新的存档,都和上一次的不一样。
|
||||
目前为止,您已经创建了一个文件,并且已经通知了 Git,现在,是时候创建一次<ruby>提交<rt>commit</rt></ruby>了。提交可以看作是一个里程碑。每当完成一些工作之时,您都可以创建一次提交,保存文件当前版本,这样一来,您可以返回之前的版本,并且查看那时候的文件内容。无论何时您修改了文件,都可以为它保存一个与上一次不同的新版本。
|
||||
|
||||
创建一次提交,请输入:
|
||||
|
||||
```
|
||||
git commit -m "first commit"
|
||||
```
|
||||
|
||||
就是这样!刚才您创建了包含一条注释为“first commit”的 Git 提交。每次提交,您都必须编辑注释信息;它不仅能协助您识别提交,而且能让您理解此时您对文件做了什么修改。这样到了明天,如果您在文件中添加新的代码,您可以写一句提交信息:添加了新的代码,然后当您一个月后回来查看提交记录或者 Git 日志(提交列表),您还能知道当时的您在文件夹里做了什么。
|
||||
就是这样!刚才您创建了包含一条注释为 “first commit” 的 Git 提交。每次提交,您都必须编辑注释信息;它不仅能协助您识别提交,而且能让您理解此时您对文件做了什么修改。这样到了明天,如果您在文件中添加新的代码,您可以写一句提交信息:“添加了新的代码”,然后当您一个月后回来查看提交记录或者 Git 日志(即提交列表),您还能知道当时的您在文件夹里做了什么。
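作为补充(以下脚本是在一个临时目录中复现上述步骤的示意,目录与用户信息均为示例),提交之后可以用 `git log` 查看提交记录:

```shell
cd "$(mktemp -d)"                      # 在一个临时目录中演示
git init -q
echo "#Demo" >> README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first commit"
git log --oneline                      # 应能看到一条 "first commit" 记录
```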
|
||||
|
||||
### 步骤 5: 将您的计算机连接到 GitHub 仓库
|
||||
### 步骤 5: 将您的计算机与 GitHub 仓库相连接
|
||||
|
||||
现在,是时候用如下命令将您的计算机连接到 GitHub 仓库了:
|
||||
|
||||
```
|
||||
git remote add origin https://github.com/<your_username>/Demo.git
|
||||
```
|
||||
|
||||
让我们一步步的分析这行命令。我们通知 Git 去添加一个叫做 `origin` 的,拥有地址为 `https://github.com/<your_username>/Demo.git`(它也是您的 GitHub 地址仓库) 的 `remote`。当您递送代码时,允许您在 GitHub.com 和 Git 仓库交互时使用 `origin` 而不是完整的 Git 地址。为什么叫做 `origin`?当然,您可以叫点别的,只要您喜欢。
|
||||
让我们一步步地分析这行命令。我们通知 Git 去添加一个叫做 `origin`(起源)的<ruby>远程仓库<rt>remote</rt></ruby>,其地址为 `https://github.com/<your_username>/Demo.git`(它也是您的仓库的 GitHub 地址)。当您推送代码时,这允许您在与 GitHub.com 上的仓库交互时使用 `origin` 这个名称,而不必输入完整的 Git 地址。为什么叫做 `origin`?当然,您也可以取别的名字,只要您喜欢(这只是惯例而已)。
|
||||
|
||||
现在,在 GitHub.com 我们已经连接并复制本地 Demo 仓库副本到远程仓库。您的设备会有如下显示:
|
||||
现在,我们已经将本地的 Demo 仓库副本连接到了它在 GitHub.com 上的远程副本。您的终端看起来如下:
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide8.png)
|
||||
|
||||
此刻我们已经连接到远程仓库,可以推送我们的代码(上传 `README.md` 文件) 到 GitHub.com。
|
||||
此刻我们已经连接到远程仓库,可以将我们的代码推送到 GitHub.com(例如上传 `README.md` 文件)。
|
||||
|
||||
执行完毕后,您的终端会显示如下信息:
|
||||
|
||||
@ -109,7 +119,7 @@ git remote add origin https://github.com/<your_username>/Demo.git
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/git_guide10.png)
|
||||
|
||||
就是这么回事!您已经创建了您的第一个 GitHub 仓库,连接到了您的电脑,并且在 GitHub.com 推送(或者称:上传)名叫 Demo 的文件到您的远程仓库。下一次,我将编写关于 Git 复制、添加新文件、修改现存文件、推送(上传)文件到 GitHub。
|
||||
就是这么回事!您已经创建了您的第一个 GitHub 仓库,将它连接到了您的电脑,并且从您的计算机推送(或者称:上传)了一个文件到 GitHub.com 上名叫 Demo 的远程仓库。下一次,我将编写关于 Git 复制(从 GitHub 上下载文件到你的计算机上)、添加新文件、修改现存文件、推送(上传)文件到 GitHub 的内容。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -117,7 +127,7 @@ via: https://opensource.com/article/18/1/step-step-guide-git
|
||||
|
||||
作者:[Kedar Vijay Kulkarni][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,61 @@
|
||||
Linux 内核 4.15:“一个不同寻常的发布周期”
|
||||
============================================================
|
||||
|
||||
|
||||
![Linux](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/background-penguin.png?itok=g8NBQs24 "Linux")
|
||||
|
||||
> Linus Torvalds 在周日发布了 Linux 的 4.15 版内核,比原计划发布时间晚了一周。了解这次发行版的关键更新。
|
||||
|
||||
Linus Torvalds 在周日(1 月 28 日)[发布了 Linux 内核的 4.15 版][7],再一次比原计划晚了一周。延迟发布的罪魁祸首是 “Meltdown” 和 “Spectre” bug,由于这两个漏洞,使开发者不得不在这最后的周期中提交重大补丁。Torvalds 不愿意“赶工”,因此,他又给了一周时间去制作这个发行版本。
|
||||
|
||||
不出意外的话,第一批补丁的重头戏就是修补前面提及的 [Meltdown 和 Spectre][8] 漏洞。为防范影响 Intel 芯片的 Meltdown 问题,[开发者在 x86 架构上实现了页表隔离(PTI)][9]。如果你出于某种理由想去关闭这个特性,可以使用内核引导选项 `pti=off`。
|
||||
|
||||
Spectre v2 漏洞对 Intel 和 AMD 芯片都有影响,为防范它,[内核现在带来了 retpoline 机制][10]。Retpoline 要求 GCC 的版本支持 `-mindirect-branch=thunk-extern` 功能。由于使用了 PTI,Spectre 抑制机制可以被关闭,如果需要去关闭它,在引导时使用 `spectre_v2=off` 选项。尽管开发者努力去解决 Spectre v1,但是,到目前为止还没有一个解决方案,因此,在 4.15 的内核版本中并没有这个 bug 的修补程序。
|
||||
|
||||
对于在 ARM 上的 Meltdown 解决方案也将在下一个开发周期中推送。但是,[对于 PowerPC 上的 bug,在这个发行版中包含了一个补救措施,那就是使用 L1-D 缓存的 RFI 冲刷特性][11]。
|
||||
|
||||
一个有趣的事情是,上面提及的所有受影响的新内核中,都带有一个 `/sys/devices/system/cpu/vulnerabilities/` 虚拟目录。这个目录显示了影响你的 CPU 的漏洞以及当前应用的补救措施。
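例如,可以用下面的命令查看这个目录(命令为笔者补充,具体输出因 CPU 和内核版本而异):

```shell
# 列出每个漏洞条目及其当前的缓解状态
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null
```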
|
||||
|
||||
芯片带 bug(以及保守秘密的制造商)的问题重新唤起了开发可行的开源替代品的呼声。这使得已经合并到主线版本的内核提供了对 [RISC-V][12] 芯片的部分支持。RISC-V 是一个开源的指令集架构,它允许制造商去设计他们自己的基于 RISC-V 的芯片实现,并且因此也有了几个开源的芯片。虽然 RISC-V 芯片目前主要用于嵌入式设备,可以驱动智能硬盘或者像 Arduino 这样的开发板,但 RISC-V 的支持者认为这个架构也可以用于个人电脑,甚至是多节点的超级计算机。
|
||||
|
||||
正如在上面提到的,[对 RISC-V 的支持][13],仍然没有全部完成,它虽然包含了架构代码,但是没有设备驱动。这意味着,虽然 Linux 内核可以在 RISC-V 芯片上运行,但是没有可行的方式与底层的硬件进行实质的交互。也就是说,RISC-V 不会受到其它闭源架构上的任何 bug 的影响,并且对它的支持的开发工作也在加速进行,因为,[RISC-V 基金会已经得到了一些行业巨头的支持][14]。
|
||||
|
||||
### 4.15 版新内核中的其它新特性
|
||||
|
||||
Torvalds 经常说他喜欢的事情是很无聊的。对他来说,幸运的是,除了 Spectre 和 Meltdown 引发的混乱之外,在 4.15 内核中的大部分其它东西都很普通,比如,对驱动的进一步改进、对新设备的支持等等。但是,还有几点需要重点指出,它们是:
|
||||
|
||||
* [AMD 对虚拟化安全加密的支持][3]。它允许内核通过加密来实现对虚拟机内存的保护。加密的内存仅能够被使用它的虚拟机所解密。就算是 hypervisor 也不能看到它内部的数据。这意味着在云中虚拟机正在处理的数据,在虚拟机外的任何进程都看不到。
|
||||
* 由于 [包含了_显示代码_][4], AMD GPU 得到了极大的提升,这使得 Radeon RX Vega 和 Raven Ridge 显卡得到了内核主线版本的支持,并且也在 AMD 显卡中实现了 HDMI/DP 音频。
|
||||
* 树莓派的爱好者应该很高兴,因为在新内核中, [7" 触摸屏现在已经得到原生支持][5],这将产生成百上千的有趣的项目。
|
||||
|
||||
要发现更多的特性,你可以去查看在 [Kernel Newbies][15] 和 [Phoronix][16] 上的内容。
|
||||
|
||||
_想学习更多的 Linux 的知识,可以去学习来自 Linux 基金会和 edX 的免费课程 —— ["了解 Linux" ][6]。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
|
||||
|
||||
作者:[PAUL BROWN][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/bro66
|
||||
[1]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[2]:https://www.linux.com/files/images/background-penguinpng
|
||||
[3]:https://git.kernel.org/linus/33e63acc119d15c2fac3e3775f32d1ce7a01021b
|
||||
[4]:https://git.kernel.org/torvalds/c/f6705bf959efac87bca76d40050d342f1d212587
|
||||
[5]:https://git.kernel.org/linus/2f733d6194bd58b26b705698f96b0f0bd9225369
|
||||
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[7]:https://lkml.org/lkml/2018/1/28/173
|
||||
[8]:https://meltdownattack.com/
|
||||
[9]:https://git.kernel.org/linus/5aa90a84589282b87666f92b6c3c917c8080a9bf
|
||||
[10]:https://git.kernel.org/linus/76b043848fd22dbf7f8bf3a1452f8c70d557b860
|
||||
[11]:https://git.kernel.org/linus/aa8a5e0062ac940f7659394f4817c948dc8c0667
|
||||
[12]:https://riscv.org/
|
||||
[13]:https://git.kernel.org/torvalds/c/b293fca43be544483b6488d33ad4b3ed55881064
|
||||
[14]:https://riscv.org/membership/
|
||||
[15]:https://kernelnewbies.org/Linux_4.15
|
||||
[16]:https://www.phoronix.com/scan.php?page=search&q=Linux+4.15
|
103
published/20180131 Why you should use named pipes on Linux.md
Normal file
103
published/20180131 Why you should use named pipes on Linux.md
Normal file
@ -0,0 +1,103 @@
|
||||
为什么应该在 Linux 上使用命名管道
|
||||
======
|
||||
|
||||
> 命名管道并不常用,但是它们为进程间通讯提供了一些有趣的特性。
|
||||
|
||||
![](https://images.techhive.com/images/article/2017/05/blue-1845806_1280-100722976-large.jpg)
|
||||
|
||||
估计每一位 Linux 使用者都熟悉使用 “|” 符号将数据从一个进程传输到另一个进程的操作。它使用户能简便地从一个命令输出数据到另一个命令,并筛选出想要的数据而无须写脚本进行选择、重新格式化等操作。
|
||||
|
||||
还有另一种管道,虽然也叫“管道”,却有着非常不同的性质,即您可能尚未使用甚至尚未听说过的——命名管道。
|
||||
|
||||
普通管道与命名管道的一个主要区别就是命名管道是以文件形式实实在在地存在于文件系统中的,没错,它们表现出来就是文件。但是与其它文件不同的是,命名管道文件似乎从来没有文件内容。即使用户往命名管道中写入大量数据,该文件看起来还是空的。
|
||||
|
||||
### 如何在 Linux 上创建命名管道
|
||||
|
||||
在我们研究这些空空如也的命名管道之前,先追根溯源来看看命名管道是如何被创建的。您应该使用名为 `mkfifo` 的命令来创建它们。为什么提及“FIFO”?是因为命名管道也被认为是一种 FIFO 特殊文件。术语 “FIFO” 指的是它的<ruby>先进先出<rt>first-in, first-out</rt></ruby>特性。如果你将冰淇淋盛放到碟子中,然后从顶上开始品尝它,那么你执行的就是一个 LIFO(<ruby>后进先出<rt>last-in, first-out</rt></ruby>)操作。如果你通过吸管喝奶昔,那你就在执行一个 FIFO 操作。好,接下来是一个创建命名管道的例子。
|
||||
|
||||
```
|
||||
$ mkfifo mypipe
|
||||
$ ls -l mypipe
|
||||
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
|
||||
```
|
||||
|
||||
注意一下特殊的文件类型标记 “p” 以及该文件大小为 0。您可以将重定向数据写入命名管道文件,而文件大小依然为 0。
|
||||
|
||||
```
|
||||
$ echo "Can you read this?" > mypipe
|
||||
```
|
||||
|
||||
正如上面所说,敲击回车后似乎什么都没有发生(LCTT 译注:没有返回命令行提示符)。
|
||||
|
||||
另外再开一个终端,查看该命名管道的大小,依旧是 0:
|
||||
|
||||
```
|
||||
$ ls -l mypipe
|
||||
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
|
||||
```
|
||||
|
||||
也许这有违直觉,用户输入的文本已经进入该命名管道,而你仍然卡在输入端。你或者其他人应该等在输出端,并准备读取放入管道的数据。现在让我们读取看看。
|
||||
|
||||
```
|
||||
$ cat mypipe
|
||||
Can you read this?
|
||||
```
|
||||
|
||||
一旦被读取之后,管道中的内容就没有了。
|
||||
|
||||
另一种研究命名管道如何工作的方式,是同时执行两个操作:将放入数据的操作置于后台,而在另外一端读取它。
|
||||
|
||||
```
|
||||
$ echo "Can you read this?" > mypipe &
|
||||
[1] 79302
|
||||
$ cat mypipe
|
||||
Can you read this?
|
||||
[1]+ Done echo "Can you read this?" > mypipe
|
||||
```
|
||||
|
||||
一旦管道被读取或“耗干”,该管道就清空了,尽管我们还能看见它并再次使用。可为什么要费此周折呢?
|
||||
|
||||
### 为何要使用命名管道?
|
||||
|
||||
命名管道很少被使用的理由似乎很充分。毕竟在 Unix 系统上,总有多种不同的方式完成同样的操作。有多种方式写文件、读文件、清空文件,尽管命名管道比它们来得更高效。
|
||||
|
||||
值得注意的是,命名管道的内容驻留在内存中而不是被写到硬盘上。数据内容只有在输入输出端都打开时才会传送。用户可以在管道的输出端打开之前向管道多次写入。通过使用命名管道,用户可以创建一个进程写入管道并且另外一个进程读取管道的流程,而不用关心协调二者时间上的同步。
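下面的小脚本(管道文件名为笔者补充的示意)演示了这一点:读取端先在后台等待,写入端随后放入数据,二者经由内存完成传递:

```shell
pipe=$(mktemp -u)                 # 生成一个尚不存在的临时文件名
mkfifo "$pipe"
cat "$pipe" &                     # 读取端在后台等待数据
echo "hello via fifo" > "$pipe"   # 写入端:数据经内存传给读取端
wait                              # 等待后台的 cat 输出完毕
rm "$pipe"
```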
|
||||
|
||||
用户可以创建一个单纯等待数据出现在管道输出端的进程,并在拿到输出数据后对其进行操作。下列命令我们采用 `tail` 来等待数据出现。
|
||||
|
||||
```
|
||||
$ tail -f mypipe
|
||||
```
|
||||
|
||||
一旦供给管道数据的进程结束了,我们就可以看到一些输出。
|
||||
|
||||
```
|
||||
$ tail -f mypipe
|
||||
Uranus replicated to WCDC7
|
||||
Saturn replicated to WCDC8
|
||||
Pluto replicated to WCDC9
|
||||
Server replication operation completed
|
||||
```
|
||||
|
||||
如果研究一下向命名管道写入的进程,用户也许会惊讶于它的资源消耗之少。在下面的 `ps` 命令输出中,唯一显著的资源消耗是虚拟内存(VSZ 那一列)。
|
||||
|
||||
```
|
||||
ps u -P 80038
|
||||
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
|
||||
shs 80038 0.0 0.0 108488 764 pts/4 S 15:25 0:00 -bash
|
||||
```
|
||||
|
||||
命名管道与 Unix/Linux 系统上更常用的管道相比足以不同到拥有另一个名号,但是“管道”确实能反映出它们如何在进程间传送数据的形象,故将称其为“命名管道”还真是恰如其分。也许您在执行操作时就能从这个聪明的 Unix/Linux 特性中获益匪浅呢。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3251853/linux/why-use-named-pipes-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
译者:[YPBlib](https://github.com/YPBlib)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb
|
127
published/20180201 Custom Embedded Linux Distributions.md
Normal file
127
published/20180201 Custom Embedded Linux Distributions.md
Normal file
@ -0,0 +1,127 @@
|
||||
定制嵌入式 Linux 发行版
|
||||
======
|
||||
|
||||
便宜的物联网板的普及,意味着它不仅会控制应用程序,还会控制整个软件平台。那么,如何构建一个可以交叉编译应用程序的、针对特定用途的自定义发行版呢?正如 Michael J. Hammel 在这里解释的那样,它并不像你想象的那么难。
|
||||
|
||||
### 为什么要定制?
|
||||
|
||||
以前,许多嵌入式项目都使用现成的发行版,然后出于种种原因,再将它们剥离到只剩下基本的必需的东西。首先,移除不需要的包可以减少占用的存储空间。嵌入式系统的存储空间通常有限,而且在启动时可能要从非易失性内存中把大量的操作系统文件拷贝到内存中。第二,移除用不到的包可以降低可能的攻击面。如果你不需要它们,就没有必要把这些可能有漏洞的包留在上面。最后,移除用不到的包可以降低发行版管理的开销。如果包之间有依赖关系,意味着只要其中任何一个包需要从上游更新,它们就都必须保持同步,那样可能就会出现验证的噩梦。
|
||||
|
||||
然而,从一个现有的发行版中去移除包并不像说的那样容易。移除一个包可能会打破与其它包保持的各种依赖关系,以及可能在上游的发行版管理中改变依赖。另外,由于一些包原生集成在引导或者运行时进程中,它们并不能轻易地简单地移除。所有这些都是项目之外的平台的管理,并且有可能会导致意外的开发延迟。
|
||||
|
||||
一个流行的选择是使用上游发行版供应商提供的构建工具去构建一个定制的发行版。无论是 Gentoo 还是 Debian 都提供这种自下而上的构建方式。这些构建工具中最为流行的可能是 Debian 的 debootstrap 实用程序。它取出预构建的核心组件并允许用户去精选出它们感兴趣的包来构建用户自己的平台。但是,debootstrap 最初仅在 x86 平台上可用,虽然,现在有了 ARM(也有可能会有其它的平台)选项。debootstrap 和 Gentoo 的 catalyst 仍然需要从本地项目中将依赖管理移除。
|
||||
|
||||
一些人认为让别人去管理平台软件(像 Android 一样)要比自己亲自管理容易的多。但是,那些发行版都是多用途的,当你在一个轻量级的、资源有限的物联网设备上使用它时,你可能会再三考虑从你手中被拿走的任何资源。
|
||||
|
||||
### 系统引导的基石
|
||||
|
||||
一个定制的 Linux 发行版要求许多软件组件。其中第一个就是<ruby>工具链<rt>toolchain</rt></ruby>。工具链是用于编译软件的一套工具集。包括(但不限于)一个编译器、链接器、二进制操作工具以及标准的 C 库。工具链是为一个特定的目标硬件设备专门构建的。如果一个构建在 x86 系统上的工具链想要用于树莓派,那么这个工具链就被称为交叉编译工具链。当在内存和存储都十分有限的小型嵌入式设备上工作时,最好是使用一个交叉编译工具链。需要注意的是,即便是使用像 JavaScript 这样的需要运行在特定平台的脚本语言为特定用途编写的应用程序,也需要使用交叉编译工具链编译。
|
||||
|
||||
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f1.png)
|
||||
|
||||
*图 1. 编译依赖和引导顺序*
|
||||
|
||||
交叉编译工具链用于为目标硬件构建软件组件。需要的第一个组件是<ruby>引导加载程序<rt>bootloader</rt></ruby>。当计算机主板加电之后,处理器(可能有差异,取决于设计)尝试去跳转到一个特定的内存位置去开始运行软件。那个内存位置就是保存引导加载程序的地方。硬件可能有内置的引导加载程序,它可能直接从它的存储位置或者可能在它运行前首先拷贝到内存中。也可能会有多个引导加载程序。例如,第一阶段的引导加载程序可能位于硬件的 NAND 或者 NOR 闪存中。它唯一的功能是设置硬件以便于执行第二阶段的引导加载程序——比如,存储在 SD 卡中的可以被加载并运行的引导加载程序。
|
||||
|
||||
引导加载程序能够从硬件中取得足够的信息,将 Linux 加载到内存中并跳转到正确的位置,将控制权有效地移交到 Linux。Linux 是一个操作系统。这意味着,在这种设计中,它除了监控硬件和向上层软件(也就是应用程序)提供服务外,它实际上什么都不做。[Linux 内核][1] 中通常是各种各样的固件块。那些预编译的软件对象,通常包含硬件平台使用的设备的专用 IP(知识资产)。当构建一个定制发行版时,在开始编译内核之前,它可能会要求获得一些 Linux 内核源代码树没有提供的必需的固件块。
|
||||
|
||||
应用程序保存在根文件系统中,这个根文件系统是通过编译构建的,它集合了各种软件库、工具、脚本以及配置文件。总的来说,它们都提供各种服务,比如,网络配置和 USB 设备挂载,这些都是将要运行的项目应用程序所需要的。
|
||||
|
||||
总的来说,一个完整的系统构建要求下列的组件:
|
||||
|
||||
1. 一个交叉编译工具链
|
||||
2. 一个或多个引导加载程序
|
||||
3. Linux 内核和相关的固件块
|
||||
4. 一个包含库、工具以及实用程序的根文件系统
|
||||
5. 定制的应用程序
|
||||
|
||||
### 使用适当的工具开始构建
|
||||
|
||||
交叉编译工具链的组件可以手工构建,但这是一个很复杂的过程。幸运的是,现有的工具可以很容易地完成这一过程。构建交叉编译工具链的最好工具可能是 [Crosstool-NG][2],这个工具使用了与 Linux 内核相同的 kconfig 菜单系统来构建工具链的每个细节和方面。使用这个工具的关键是,为目标平台找到正确的配置项。配置项通常包含下列内容:
|
||||
|
||||
1. 目标架构,比如,是 ARM 还是 x86。
|
||||
2. 字节顺序:小端字节顺序(一般情况下,Intel 采用这种顺序)还是大端字节顺序(一般情况下,ARM 或者其它的平台采用这种顺序)。
|
||||
3. 编译器已知的 CPU 类型,比如,GCC 可以使用 `-mcpu` 或 `--with-cpu`。
|
||||
4. 支持的浮点类型,如果有的话,比如,GCC 可以使用 `-mfpu` 或 `--with-fpu`。
|
||||
5. <ruby>二进制实用工具<rt>binutils</rt></ruby>、C 库以及 C 编译器的特定版本信息。
|
||||
|
||||
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f2.png)
|
||||
|
||||
*图 2. Crosstool-NG 配置菜单*
|
||||
|
||||
前四个一般情况下可以从处理器制造商的文档中获得。对于较新的处理器,它们可能不容易找到,但是,像树莓派或者 BeagleBoards(以及它们的后代和分支),你可以在像 [嵌入式 Linux Wiki][3] 这样的地方找到相关信息。
|
||||
|
||||
二进制实用工具、C 库、以及 C 编译器的版本,将与任何第三方提供的其它工具链分开。首先,它们中的每一个都有多个提供者。Linaro 为最新的处理器类型提供了最先进的版本,同时致力于将该支持合并到像 GNU C 库这样的上游项目中。尽管你可以使用各种提供者的工具,你可能依然想去使用现成的 GNU 工具链或者相同的 Linaro 版本。
|
||||
|
||||
在 Crosstool-NG 中的另外的重要选择是 Linux 内核的版本。这个选择将得到用于各种工具链组件的<ruby>头文件<rt>headers</rt></ruby>,但是它没有必要一定与你在目标硬件上将要引导的 Linux 内核相同。选择一个不比目标硬件的内核更新的 Linux 内核是很重要的。如果可能的话,尽量选择一个比目标硬件使用的内核更老的长周期支持(LTS)的内核。
|
||||
|
||||
对于大多数不熟悉构建定制发行版的开发者来说,工具链的构建是最为复杂的过程。幸运的是,大多数硬件平台的二进制工具链都可以想办法得到。如果构建一个定制的工具链有问题,可以在线搜索像 [嵌入式 Linux Wiki][4] 这样的地方去查找预构建工具链。
|
||||
|
||||
### 引导选项
|
||||
|
||||
在构建完工具链之后,接下来的工作是引导加载程序。引导加载程序用于设置硬件,以便于越来越复杂的软件能够使用这些硬件。第一阶段的引导加载程序通常由目标平台制造商提供,它通常被烧录到类似于 EEPROM 或者 NOR 闪存这类的在硬件上的存储中。第一阶段的引导加载程序将使设备从这里(比如,一个 SD 存储卡)开始引导。树莓派的引导加载程序就是这样的,这样做也就没有必要再去创建一个定制的引导加载程序。
|
||||
|
||||
尽管如此,许多项目还是增加了第二阶段的引导加载程序,以便于去执行一个多样化的任务。在无需使用 Linux 内核或者像 plymouth 这样的用户空间工具的情况下提供一个启动动画,就是其中一个这样的任务。一个更常见的第二阶段引导加载程序的任务是去提供基于网络的引导或者使连接到 PCI 上的磁盘可用。在那种情况下,一个第三阶段的引导加载程序,比如 GRUB,可能才是让系统运行起来所必需的。
|
||||
|
||||
最重要的是,引导加载程序加载 Linux 内核并使它开始运行。如果第一阶段引导加载程序没有提供一个在启动时传递内核参数的机制,那么,在第二阶段的引导加载程序中就必须要提供。
|
||||
|
||||
有许多的开源引导加载程序可以使用。[U-Boot 项目][5] 通常用于像树莓派这样的 ARM 平台。CoreBoot 一般是用于像 Chromebook 这样的 x86 平台。引导加载程序是特定于目标硬件专用的。引导加载程序的选择总体上取决于项目的需求以及目标硬件(可以去网络上在线搜索开源引导加载程序的列表)。
|
||||
|
||||
### 现在到了 Linux 登场的时候
|
||||
|
||||
引导加载程序将加载 Linux 内核到内存中,然后去运行它。Linux 就像一个扩展的引导加载程序:它进行硬件设置以及准备加载高级软件。内核的核心将设置和提供在应用程序和硬件之间共享使用的内存;提供任务管理器以允许多个应用程序同时运行;初始化没有被引导加载程序配置的或者是已经配置了但是没有完成的硬件组件;以及开启人机交互界面。内核也许不会配置为在自身完成这些工作,但是,它可以包含一个嵌入的、轻量级的文件系统,这类文件系统大家熟知的有 initramfs 或者 initrd,它们可以独立于内核而创建,用于去辅助设置硬件。
|
||||
|
||||
内核操作的另外的事情是去下载二进制块(通常称为固件)到硬件设备。固件是用特定格式预编译的对象文件,用于在引导加载程序或者内核不能访问的地方去初始化特定硬件。许多这种固件对象可以从 Linux 内核源仓库中获取,但是,还有很多其它的固件只能从特定的硬件供应商处获得。例如,经常由它们自己提供固件的设备有数字电视调谐器或者 WiFi 网卡等。
|
||||
|
||||
固件可以从 initramfs 中加载,也或者是在内核从根文件系统中启动 init 进程之后加载。但是,当你去创建一个定制的 Linux 发行版时,创建内核的过程常常就是获取各种固件的过程。
|
||||
|
||||
### 轻量级核心平台
|
||||
|
||||
Linux 内核做的最后一件事情是尝试去运行一个被称为 init 进程的专用程序。这个专用程序的名字可能是 init 或者 linuxrc 或者是由加载程序传递给内核的名字。init 进程保存在一个能够被内核访问的文件系统中。在 initramfs 这种情况下,这个文件系统保存在内存中(它可能是被内核自己放置到那里,也可能是被引导加载程序放置在那里)。但是,对于运行更复杂的应用程序,initramfs 通常并不够完整。因此需要另外一个文件系统,这就是众所周知的根文件系统。
|
||||
|
||||
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f3.png)
|
||||
|
||||
*图 3. 构建 root 配置菜单*
|
||||
|
||||
initramfs 文件系统可以使用 Linux 内核自身构建,但是更常用的作法是,使用一个被称为 [BusyBox][6] 的项目去创建。BusyBox 组合许多 GNU 实用程序(比如,grep 或者 awk)到一个单个的二进制文件中,以便于减小文件系统自身的大小。BusyBox 通常用于去启动根文件系统的创建过程。
|
||||
|
||||
但是,BusyBox 是特意轻量化设计的。它并不打算提供目标平台所需要的所有工具,甚至提供的工具也是经过功能简化的。BusyBox 有一个“姊妹”项目叫做 [Buildroot][7],它可以用于去得到一个完整的根文件系统,提供了各种库、实用程序,以及脚本语言。像 Crosstool-NG 和 Linux 内核一样,BusyBox 和 Buildroot 也都允许使用 kconfig 菜单系统去定制配置。更重要的是,Buildroot 系统自动处理依赖关系,因此,选定的实用程序将会保证该程序所需要的软件也会被构建并安装到 root 文件系统。
|
||||
|
||||
Buildroot 可以用多种格式去生成一个根文件系统包。但是,需要重点注意的是,这个文件系统是被归档的。单个的实用程序和库并不是以 Debian 或者 RPM 格式打包进去的。使用 Buildroot 将生成一个根文件系统镜像,但是它的内容不是单独的包。即使如此,Buildroot 还是提供了对 opkg 和 rpm 包管理器的支持的。这意味着,虽然根文件系统自身并不支持包管理,但是,安装在根文件系统上的定制应用程序能够进行包管理。
|
||||
|
||||
### 交叉编译和脚本化
|
||||
|
||||
Buildroot 的其中一个特性是能够生成一个临时树。这个目录包含库和实用程序,它可以被用于去交叉编译其它应用程序。使用临时树和交叉编译工具链,使得在主机系统上而不是目标平台上对 Buildroot 之外的其它应用程序编译成为可能。使用 rpm 或者 opkg 包管理软件之后,这些应用程序可以在运行时使用包管理软件安装在目标平台的根文件系统上。
|
||||
|
||||
大多数定制系统的构建都是围绕着用脚本语言构建应用程序的想法去构建的。如果需要在目标平台上运行脚本,在 Buildroot 上有多种可用的选择,包括 Python、PHP、Lua 以及基于 Node.js 的 JavaScript。对于需要使用 OpenSSL 加密的应用程序也提供支持。
|
||||
|
||||
### 接下来做什么
|
||||
|
||||
Linux 内核和引导加载程序的编译过程与大多数应用程序是一样的。它们的构建系统被设计为去构建一个专用的软件位。Crosstool-NG 和 Buildroot 是<ruby>元构建<rt>metabuild</rt></ruby>。元构建是将一系列有自己构建系统的软件集合封装为一个构建系统。可靠的元构建包括 [Yocto][8] 和 [OpenEmbedded][9]。Buildroot 的好处是可以将更高级别的元构建进行轻松的封装,以便于将定制 Linux 发行版的构建过程自动化。这样做之后,将会打开 Buildroot 指向到项目专用的缓存仓库的选项。使用缓存仓库可以加速开发过程,并且可以在无需担心上游仓库变化的情况下提供构建快照。
|
||||
|
||||
一个实现高级构建系统的示例是 [PiBox][10]。PiBox 就是封装了在本文中讨论的各种工具的一个元构建。它的目的是围绕所有工具去增加一个通用的 GNU Make 目标架构,以生成一个核心平台,这个平台可以构建或分发其它软件。PiBox 媒体中心和 kiosk 项目是安装在核心平台之上的应用层软件的实现,目的是用于去产生一个构建平台。[Iron Man 项目][11] 是为了家庭自动化的目的而扩展了这种应用程序,它集成了语音管理和物联网设备的管理。
|
||||
|
||||
但是,PiBox 如果没有这些核心的软件工具,它什么也做不了。并且,如果不去深入了解一个完整的定制发行版的构建过程,那么你将无法正确运行 PiBox。而且,如果没有 PiBox 开发团队对这个项目的长期奉献,也就没有 PiBox 项目,它完成了定制发行版构建中的大量任务。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/custom-embedded-linux-distributions
|
||||
|
||||
作者:[Michael J.Hammel][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/user/1000879
|
||||
[1]:https://www.kernel.org
|
||||
[2]:http://crosstool-ng.github.io
|
||||
[3]:https://elinux.org/Main_Page
|
||||
[4]:https://elinux.org/Main_Page
|
||||
[5]:https://www.denx.de/wiki/U-Boot
|
||||
[6]:https://busybox.net
|
||||
[7]:https://buildroot.org
|
||||
[8]:https://www.yoctoproject.org
|
||||
[9]:https://www.openembedded.org/wiki/Main_Page
|
||||
[10]:https://www.piboxproject.com
|
||||
[11]:http://redmine.graphics-muse.org/projects/ironman/wiki/Getting_Started
|
@ -1,26 +1,23 @@
|
||||
优化 MySQL: 3 个简单的小调整
|
||||
============================================================
|
||||
|
||||
如果你不改变 MySQL 的缺省配置,你的服务器的性能就像下图的挂着一档的法拉利一样 “虎落平阳被犬欺” …
|
||||
|
||||
如果你不改变 MySQL 的缺省配置,你的服务器的性能就像下图这辆卡在一档的法拉利一样“虎落平阳被犬欺”……
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1000/1*b7M28XbrOc4FF3tJP-vvyg.png)
|
||||
|
||||
我并不期望成为一个专家级的 DBA,但是,在我优化 MySQL 时,我推崇 80/20 原则,明确说就是通过简单的调整一些配置,你可以压榨出高达 80% 的性能提升。尤其是在服务器资源越来越便宜的当下。
|
||||
|
||||
#### 警告:
|
||||
|
||||
1. 没有两个数据库或者应用程序是完全相同的。这里假设我们要调整的数据库是为一个“典型”的 web 网站服务的,你优先考虑的是快速查询、良好的用户体验以及处理大量的流量。
|
||||
### 警告
|
||||
|
||||
1. 没有两个数据库或者应用程序是完全相同的。这里假设我们要调整的数据库是为一个“典型”的 Web 网站服务的,优先考虑的是快速查询、良好的用户体验以及处理大量的流量。
|
||||
2. 在你对服务器进行优化之前,请做好数据库备份!
|
||||
|
||||
### 1\. 使用 InnoDB 存储引擎
|
||||
### 1、 使用 InnoDB 存储引擎
|
||||
|
||||
如果你还在使用 MyISAM 存储引擎,那么是时候转换到 InnoDB 了。有很多的理由都表明 InnoDB 比 MyISAM 更有优势,如果你关注性能,那么,我们来看一下它们是如何利用物理内存的:
|
||||
|
||||
* MyISAM:仅在内存中保存索引。
|
||||
|
||||
* InnoDB:在内存中保存索引_和_ 数据。
|
||||
* InnoDB:在内存中保存索引**和**数据。
|
||||
|
||||
结论:保存在内存的内容访问速度要比磁盘上的更快。
|
||||
|
||||
@ -30,13 +27,13 @@
|
||||
ALTER TABLE table_name ENGINE=InnoDB;
|
||||
```
|
||||
|
||||
_注意:_ _你已经创建了所有合适的索引,对吗?为了更好的性能,创建索引永远是第一优先考虑的事情。_
|
||||
*注意:你已经创建了所有合适的索引,对吗?为了更好的性能,创建索引永远是第一优先考虑的事情。*
|
||||
|
||||
### 2\. 让 InnoDB 使用所有的内存
|
||||
### 2、 让 InnoDB 使用所有的内存
|
||||
|
||||
你可以在 _my.cnf_ 文件中编辑你的 MySQL 配置。使用 `innodb_buffer_pool_size` 参数去配置在你的服务器上允许 InnoDB 使用物理内存数量。
|
||||
你可以在 `my.cnf` 文件中编辑你的 MySQL 配置。使用 `innodb_buffer_pool_size` 参数去配置在你的服务器上允许 InnoDB 使用物理内存数量。
|
||||
|
||||
对此(假设你的服务器_仅仅_运行 MySQL),公认的“经验法则”是设置为你的服务器物理内存的 80%。在保证操作系统不使用 swap 而正常运行所需要的足够内存之后 ,尽可能多地为 MySQL 分配物理内存。
|
||||
对此(假设你的服务器_仅仅_运行 MySQL),公认的“经验法则”是设置为你的服务器物理内存的 80%。在保证操作系统不使用交换分区而正常运行所需要的足够内存之后 ,尽可能多地为 MySQL 分配物理内存。
|
||||
|
||||
因此,如果你的服务器物理内存是 32 GB,可以将那个参数设置为多达 25 GB。
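作为参考,在 Linux 上可以用下面这行命令(笔者补充,非原文内容)根据 `/proc/meminfo` 粗略算出 80% 物理内存对应的配置值:

```shell
# MemTotal 以 KB 为单位,乘以 0.8 再换算为 MB
awk '/MemTotal/ { printf "innodb_buffer_pool_size = %dM\n", $2 * 0.8 / 1024 }' /proc/meminfo
```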
|
||||
|
||||
@ -44,9 +41,9 @@ ALTER TABLE table_name ENGINE=InnoDB;
|
||||
innodb_buffer_pool_size = 25600M
|
||||
```
|
||||
|
||||
_注意:_ _ (1) 如果你的服务器内存较小并且小于 1 GB。为了适用本文的方法,你应该去升级你的服务器。 (2) 如果你的服务器内存特别大,比如,它有 200 GB,那么,根据一般常识,你也没有必要为操作系统保留多达 40 GB 的内存。
|
||||
*注意:(1)如果你的服务器内存较小,不足 1 GB,为了适用本文的方法,你应该去升级你的服务器。(2)如果你的服务器内存特别大,比如有 200 GB,那么,根据一般常识,你也没有必要为操作系统保留多达 40 GB 的内存。*
|
||||
|
||||
### 3\. 让 InnoDB 多任务运行
|
||||
### 3、 让 InnoDB 多任务运行
|
||||
|
||||
如果服务器上的参数 `innodb_buffer_pool_size` 的配置是大于 1 GB,将根据参数 `innodb_buffer_pool_instances` 的设置, 将 InnoDB 的缓冲池划分为多个。
|
||||
|
||||
@ -66,7 +63,7 @@ innodb_buffer_pool_instances = 24
|
||||
|
||||
### 注意!
|
||||
|
||||
在修改了 _my.cnf_ 文件后需要重启 MySQL 才能生效:
|
||||
在修改了 `my.cnf` 文件后需要重启 MySQL 才能生效:
|
||||
|
||||
```
|
||||
sudo service mysql restart
|
||||
@ -74,7 +71,7 @@ sudo service mysql restart
|
||||
|
||||
* * *
|
||||
|
||||
还有更多更科学的方法来优化这些参数,这几点作为一个通用准则来应用,将使你的 MySQL 服务器性能更好。
|
||||
还有更多更科学的方法来优化这些参数,但是这几点可以作为一个通用准则来应用,将使你的 MySQL 服务器性能更好。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -88,7 +85,7 @@ via: https://medium.com/@richb_/tuning-mysql-3-simple-tweaks-6356768f9b90
|
||||
|
||||
作者:[Rich Barrett](https://medium.com/@richb_)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,117 +0,0 @@
XLCYun 翻译中

Manjaro Gaming: Gaming on Linux Meets Manjaro's Awesomeness
======

[![Meet Manjaro Gaming, a Linux distro designed for gamers with the power of Manjaro][1]][1]

[Gaming on Linux][2]? Yes, that's very much possible, and we have a new Linux distribution dedicated to gamers.

Manjaro Gaming is a Linux distro designed for gamers with the power of Manjaro. Those who have used Manjaro Linux before know exactly why this is such good news for gamers.

[Manjaro][3] is a Linux distro based on one of the most popular distros, [Arch Linux][4]. Arch Linux is widely known for its bleeding-edge nature, offering a lightweight, powerful, extensively customizable, and up-to-date experience. While all of that is great, the main drawback is that Arch Linux embraces the DIY (do it yourself) approach, where users need a certain level of technical expertise to get along with it.

Manjaro strips away that requirement and makes Arch accessible to newcomers, while at the same time providing all of Arch's advanced and powerful features for experienced users. In short, Manjaro is a user-friendly Linux distro that works straight out of the box.

The reasons Manjaro makes a great and extremely suitable distro for gaming are:

* Manjaro automatically detects the computer's hardware (e.g. graphics cards)
* It automatically installs the necessary drivers and software (e.g. graphics drivers)
* Various codecs for media file playback come pre-installed
* It has dedicated repositories that deliver fully tested and stable packages

Manjaro Gaming packs all of Manjaro's awesomeness with the addition of various tweaks and software packages dedicated to making gaming on Linux smooth and enjoyable.

![Inside Manjaro Gaming][5]

#### Tweaks

Some of the tweaks made on Manjaro Gaming are:

* Manjaro Gaming uses the highly customizable XFCE desktop environment with an overall dark theme.
* Sleep mode is disabled, preventing the computer from sleeping while playing games with a gamepad or watching long cutscenes.

#### Software

Maintaining Manjaro's tradition of working straight out of the box, Manjaro Gaming comes bundled with various open source software to provide the functionality gamers often need. Some of the software included:

* [**KdenLIVE**][6]: video editing software for editing gaming videos
* [**Mumble**][7]: voice chat software for gamers
* [**OBS Studio**][8]: software for video recording and live streaming game videos on [Twitch][9]
* [**OpenShot**][10]: powerful video editor for Linux
* [**PlayOnLinux**][11]: for running Windows games on Linux with the [Wine][12] backend
* [**Shutter**][13]: feature-rich screenshot tool

#### Emulators

Manjaro Gaming comes with a long list of gaming emulators:

* [**DeSmuME**][14]: Nintendo DS emulator
* [**Dolphin Emulator**][15]: GameCube and Wii emulator
* [**DOSBox**][16]: DOS games emulator
* [**FCEUX**][17]: Nintendo Entertainment System (NES), Famicom, and Famicom Disk System (FDS) emulator
* **Gens/GS**: Sega Mega Drive emulator
* [**PCSXR**][18]: PlayStation emulator
* [**PCSX2**][19]: PlayStation 2 emulator
* [**PPSSPP**][20]: PSP emulator
* [**Stella**][21]: Atari 2600 VCS emulator
* [**VBA-M**][22]: Game Boy and Game Boy Advance emulator
* [**Yabause**][23]: Sega Saturn emulator
* [**ZSNES**][24]: Super Nintendo emulator

#### Others

There are some terminal add-ons: Color, ILoveCandy, and Screenfetch. [Conky Manager][25] with the Retro Conky theme is also included.

**Point to be noted: not all the features mentioned are included in the current release of Manjaro Gaming (16.03). Some of them are scheduled for the next release, Manjaro Gaming 16.06.**

### Downloads

Manjaro Gaming 16.06 is going to be the first proper release of Manjaro Gaming. But if you are interested enough to try it now, Manjaro Gaming 16.03 is available for download on the SourceForge [project page][26]. Go there and grab the ISO.

How do you feel about this new gaming Linux distro? Are you thinking of giving it a try? Let us know!

--------------------------------------------------------------------------------

via: https://itsfoss.com/manjaro-gaming-linux/

作者:[Munif Tanjim][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/munif/
[1]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming.jpg
[2]:https://itsfoss.com/linux-gaming-guide/
[3]:https://manjaro.github.io/
[4]:https://www.archlinux.org/
[5]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming-Inside-1024x576.png
[6]:https://kdenlive.org/
[7]:https://www.mumble.info
[8]:https://obsproject.com/
[9]:https://www.twitch.tv/
[10]:http://www.openshot.org/
[11]:https://www.playonlinux.com
[12]:https://www.winehq.org/
[13]:http://shutter-project.org/
[14]:http://desmume.org/
[15]:https://dolphin-emu.org
[16]:https://www.dosbox.com/
[17]:http://www.fceux.com/
[18]:https://pcsxr.codeplex.com
[19]:http://pcsx2.net/
[20]:http://www.ppsspp.org/
[21]:http://stella.sourceforge.net/
[22]:http://vba-m.com/
[23]:https://yabause.org/
[24]:http://www.zsnes.com/
[25]:https://itsfoss.com/conky-gui-ubuntu-1304/
[26]:https://sourceforge.net/projects/mgame/
56
sources/talk/20171128 Your API is missing Swagger.md
Normal file
@ -0,0 +1,56 @@
Your API is missing Swagger
======

![](https://ryanmccue.ca/content/images/2017/11/top-20mobileapps--3-.png)

We have all struggled through thrown-together, convoluted API documentation. It is frustrating and, in the worst case, can lead to bad requests. Understanding an API is something most developers do on a regular basis, so is it any wonder that the majority of APIs have horrific documentation?

[Swagger][1] is the solution to this problem. Swagger came out in 2011 and is an open source software framework with many tools that help developers design, build, document, and consume RESTful APIs. Designing an API with Swagger, or documenting it with Swagger afterwards, helps everyone consume your API seamlessly. One amazing feature many people do not know about is that you can actually **generate** a client from Swagger! That's right: if a service you're consuming has Swagger documentation, you can generate a client to consume it!
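To make this concrete, here is a minimal Swagger 2.0 document (the pet endpoint is invented for this sketch) and a few lines of Python that pull the endpoint list out of it. This machine-readable structure is exactly what documentation renderers and client generators consume.

```python
import json

# A minimal, hypothetical Swagger 2.0 document describing one endpoint.
SWAGGER_DOC = """
{
  "swagger": "2.0",
  "info": {"title": "Pet API", "version": "1.0.0"},
  "paths": {
    "/pets/{id}": {
      "get": {
        "summary": "Fetch a pet by id",
        "parameters": [
          {"name": "id", "in": "path", "required": true, "type": "integer"}
        ],
        "responses": {"200": {"description": "A single pet"}}
      }
    }
  }
}
"""

def list_endpoints(doc: str):
    """Return (method, path, summary) tuples from a Swagger document."""
    spec = json.loads(doc)
    return [
        (method.upper(), path, op.get("summary", ""))
        for path, ops in spec["paths"].items()
        for method, op in ops.items()
    ]

print(list_endpoints(SWAGGER_DOC))
```

A client generator walks this same `paths` mapping, emitting one function per method/path pair instead of a tuple.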
All major languages support Swagger and can connect it to your API. Depending on the language your API is written in, the Swagger documentation can be generated from the actual code. Here are some of the standout Swagger libraries I've seen recently.

### Golang

Golang has a couple of great tools for integrating Swagger into your API. The first is [go-swagger][2], a tool that lets you generate the scaffolding for an API from a Swagger file. This is a fundamentally different way of thinking about APIs. Instead of building endpoints and thinking up new ones on the fly, go-swagger gets you to think through your API before you write a single line of code. This can help you visualize what you want the API to do first. Another Golang tool is [Goa][3]. A quote from their website sums up what Goa is:

> goa provides a novel approach for developing microservices that saves time when working on independent services and helps with keeping the overall system consistent. goa uses code generation to handle both the boilerplate and ancillary artifacts such as documentation, client modules, and client tools.

Goa takes designing the API before implementing it to a new level. It has a DSL to help you programmatically describe your entire API, from endpoints to payloads to responses. From this DSL, Goa generates a Swagger file for anyone who consumes your API, and it enforces that your endpoints output the correct data, keeping your API and documentation in sync. This is counter-intuitive when you start, but after actually implementing an API with Goa, you will wonder how you ever did it before.

### Python

[Flask][4] has a great extension for building an API with Swagger called [Flask-RESTPlus][5].

> If you are familiar with Flask, Flask-RESTPlus should be easy to pick up. It provides a coherent collection of decorators and tools to describe your API and expose its documentation properly using Swagger.

It uses Python decorators to generate Swagger documentation and can be used to enforce endpoint output, similar to Goa. It is very powerful and makes generating Swagger from an API remarkably easy.
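The decorator-driven approach can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not the real Flask-RESTPlus API: decorators record metadata about each handler, and the Swagger-style document is generated from that registry.

```python
# Registry of endpoint metadata collected by the decorator.
REGISTRY = []

def doc(method, path, description):
    """Record an endpoint's metadata so documentation can be generated later."""
    def wrap(fn):
        REGISTRY.append({"method": method, "path": path,
                         "description": description, "handler": fn.__name__})
        return fn
    return wrap

@doc("GET", "/users", "List all users")
def list_users():
    return []

def generate_docs():
    """Emit a Swagger-style paths mapping from the registry."""
    paths = {}
    for entry in REGISTRY:
        paths.setdefault(entry["path"], {})[entry["method"].lower()] = {
            "summary": entry["description"]}
    return {"swagger": "2.0", "paths": paths}

print(generate_docs())
```

Because the docs are derived from the same decorators that define the handlers, they cannot drift out of sync with the code, which is the core appeal of Flask-RESTPlus and Goa alike.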
### NodeJS

Finally, NodeJS has a powerful tool for working with Swagger called [swagger-js-codegen][6]. It can generate both servers and clients from a Swagger file.

> This package generates a nodejs, reactjs or angularjs class from a swagger specification file. The code is generated using mustache templates and is quality checked by jshint and beautified by js-beautify.

It is not quite as easy to use as Goa or Flask-RESTPlus, but if Node is your thing, it will do the job. It shines when it comes to generating frontend code to interface with your API, which is perfect if you're developing a web app to go along with the API.

### Conclusion

Swagger is a simple yet powerful representation of your RESTful API. Used properly, it can help flesh out your API design and make the API easier to consume. Harnessing its full power can save you time by letting you form and visualize your API before you write a line of code, and then generate the boilerplate surrounding the core logic. With tools like [Goa][3], [Flask-RESTPlus][5], and [swagger-js-codegen][6] making the whole experience of architecting and implementing an API painless, there is no excuse not to have Swagger.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/your-api-is-missing-swagger/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://swagger.io
[2]:https://github.com/go-swagger/go-swagger
[3]:https://goa.design/
[4]:http://flask.pocoo.org/
[5]:https://github.com/noirbizarre/flask-restplus
[6]:https://github.com/wcandillon/swagger-js-codegen
@ -0,0 +1,54 @@
5 Podcasts Every Dev Should Listen to
======

![](https://ryanmccue.ca/content/images/2017/11/Electric-Love.png)

Being a developer is a tough job: the landscape is constantly changing, and new frameworks and best practices come out every month. Having a great go-to list of podcasts keeping you up to date on the industry can make a huge difference. I've done some of the hard work and created a list of the top 5 podcasts I personally listen to.

### This Developer's Life

Unlike many developer-focused podcasts, there is no talk of code or explanations of software architecture in [This Developer's Life][1]; there are just relatable stories from other developers. This Developer's Life dives into the issues developers face in their daily lives, from a developer's point of view. [Rob Conery][2] and [Scott Hanselman][3] host the show, and it focuses on all aspects of a developer's life: for example, what it feels like to get fired, to hit a home run, or to be competitive. It is a very well-made podcast, and it isn't just for developers; it can also be enjoyed by those who love and live with them.

### Developer Tea

Don't have a lot of time? [Developer Tea][4] is "a podcast for developers designed to fit inside your tea break." The podcast exists to help driven developers connect with their purpose and excel at their work so that they can make an impact. Hosted by [Jonathan Cutrell][5], the director of technology at Whiteboard, Developer Tea breaks down the news and gives useful insights into all aspects of a developer's life in and out of work. Cutrell answers listener questions mixed in with news, interviews, and career advice on his show, which releases multiple episodes every week.

### Software Engineering Daily

[Software Engineering Daily][6] is a daily podcast which focuses on heavily technical topics like software development and system architecture. It covers a range of topics, from load balancing at scale and serverless event-driven architecture to augmented reality. Hosted by [Jeff Meyerson][7], this podcast is great for developers with a passion for learning about complicated software topics and expanding their knowledge base.

### Talking Code

The [Talking Code][8] podcast is from 2015 and contains 24 episodes of "short expert interviews that help you decode what developers are saying." The hosts, [Josh Smith][9] and [Venkat Dinavahi][10], cover diverse web development topics, from how to become an effective junior developer and how to go from junior to senior developer, to building modern web applications and making the most out of your analytics. This podcast is perfect for those getting into web development and those looking to level up their web development skills.

### The Laracasts Snippet

[The Laracasts Snippet][11] is a bite-size podcast where each episode offers a single thought on some aspect of web development. The host, [Jeffrey Way][12], is a prominent figure in the Laravel community and runs the site [Laracasts][12]. His insights are broad and useful for developers of all backgrounds.

### Conclusion

Podcasts are on the rise, and more and more developers are listening to them. With such a rapidly expanding list of new podcasts, it can be tough to pick the top 5, but if you listen to these, you will have a competitive edge as a developer.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/podcasts-every-developer-should-listen-too/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://thisdeveloperslife.com/
[2]:https://rob.conery.io/
[3]:https://www.hanselman.com/
[4]:https://developertea.com/
[5]:http://jonathancutrell.com/
[6]:https://softwareengineeringdaily.com/
[7]:http://jeffmeyerson.com/
[8]:http://talkingcode.com/
[9]:https://twitter.com/joshsmith
[10]:https://twitter.com/venkatdinavahi
[11]:https://laracasts.simplecast.fm/
[12]:https://laracasts.com
@ -0,0 +1,48 @@
Blueprint for Simple Scalable Microservices
======

![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Electric-Love--1-.png)

When you're building a microservice, what do you value? A fully managed and scalable system? It's hard to know where to start with AWS; there are so many options for hosting code: EC2, ECS, Elastic Beanstalk, Lambda. Everyone has patterns for deploying microservices. The pattern below provides a great structure for a scalable microservice architecture.

### Elastic Beanstalk

The first and most important piece is [Elastic Beanstalk][1]. It is a great, simple way to deploy auto-scaling microservices. All you need to do is upload your code to Elastic Beanstalk via its command line tool or management console. Once it's in Elastic Beanstalk, the deployment, capacity provisioning, load balancing, and auto-scaling are handled by AWS.

### S3

Another important service is [S3][2], an object store built to store and retrieve data. S3 has many uses, from storing images to backups. Particular use cases include storing sensitive files, such as private keys and environment variable files, which will be accessed and used by multiple instances or services. S3 also works for less sensitive, publicly accessible files like configuration files, Dockerfiles, and images.

### Kinesis

[Kinesis][3] is a tool which allows microservices to communicate with each other and with other services like Lambda, which we will discuss further down. Kinesis does this through real-time, persistent data streaming, which enables microservices to emit events. Data can be persisted for up to 7 days for persistent and batch processing.
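The pattern Kinesis enables can be sketched with an in-memory stand-in. This toy class is not the Kinesis API (the real service is accessed through an SDK such as boto3); it just illustrates the key idea: producers append records to a persistent stream, and each consumer keeps its own read position, so independent services all see the full event history.

```python
from collections import defaultdict

class ToyStream:
    """In-memory stand-in for a Kinesis-style stream: records persist
    (Kinesis keeps them up to 7 days) and each consumer tracks its own
    position, so every service can read the whole stream independently."""
    def __init__(self):
        self.records = []
        self.positions = defaultdict(int)  # consumer name -> next index

    def put(self, record):
        """Producer side: append an event to the stream."""
        self.records.append(record)

    def read(self, consumer):
        """Consumer side: return all records this consumer hasn't seen yet."""
        start = self.positions[consumer]
        batch = self.records[start:]
        self.positions[consumer] = len(self.records)
        return batch

stream = ToyStream()
stream.put({"event": "user_signed_up", "user_id": 1})
stream.put({"event": "order_placed", "order_id": 99})

# Two independent services each receive the full stream of events.
print(stream.read("email-service"))
print(stream.read("analytics-service"))
```

This decoupling is what lets you add a new microservice (or a Lambda subscriber) later without touching the producers.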
### RDS

[Amazon RDS][4] is a great, fully managed relational database service hosted by AWS. Using RDS over your own database server is beneficial because AWS manages everything. It makes it easy to set up, operate, and scale a relational database.

### Lambda

Finally, [AWS Lambda][5] lets you run code without provisioning or managing servers. Lambda has many uses; you can even create whole APIs with it. Some great uses for it in a microservice architecture are cron jobs and image manipulation. Crons can be scheduled with [CloudWatch][6].

### Conclusion

With these AWS products you can create fully scalable, stateless microservices that communicate with each other: Elastic Beanstalk to run the microservices, S3 to store files, Kinesis to emit events, Lambda to subscribe to those events and run other tasks, and finally RDS to easily manage and scale relational databases.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:https://aws.amazon.com/elasticbeanstalk/?nc2=h_m1
[2]:https://aws.amazon.com/s3/?nc2=h_m1
[3]:https://aws.amazon.com/kinesis/?nc2=h_m1
[4]:https://aws.amazon.com/rds/?nc2=h_m1
[5]:https://aws.amazon.com/lambda/?nc2=h_m1
[6]:https://aws.amazon.com/cloudwatch/?nc2=h_m1
@ -0,0 +1,60 @@
5 Things to Look for When You Contract Out the Backend of Your App
======

![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)

For many app developers, it can be hard to know what to do about the backend of your app. There are a few options: Firebase, throwing together a quick Node API, or contracting it out. I am going to write a blog post soon weighing the pros and cons of each of these options, but for now, let's assume you want the API done professionally.

You are going to want to look for a few specific things before you give the contract to a freelancer or agency.

### 1. Documentation

Documentation is one of the most important pieces here. The API could be amazing, but if it is impossible to understand which endpoints are available, what parameters they accept, and what they respond with, you won't have much luck integrating the API into your app. Surprisingly, this is one of the pieces that most contractors get wrong.

So what are you looking for? First, make sure they understand the importance of documentation; this alone makes a huge difference. Second, they should preferably be using an open standard like [Swagger][1] for documentation. If they do both of these things, you should have documentation covered.

### 2. Communication

You know the saying "communication is key"; well, it applies to API development too. This is harder to gauge, but sometimes a developer will get the contract and then disappear. That doesn't mean they aren't working on it, but it means there isn't a good feedback loop to sort out problems before they get too large.

A good way to get around this is to have a weekly meeting (or however often you want) to go over progress and make sure the API is shaping up the way you want, even if the meeting just goes over the endpoints and confirms they return the data you need.

### 3. Error Handling

Error handling is crucial. It basically means that if there is an error on the backend, whether it's an invalid request or an unexpected internal server error, it will be handled properly and a useful response given to the client. It's important that errors are handled gracefully; this often gets overlooked in the API development process.

This is a tricky thing to look out for, but by letting contractors know you expect useful error messages, and perhaps putting it into the contract, you should get the error messages you need. This may seem like a small thing, but being able to show the user of your app the actual thing they've done wrong, like "Passwords must be between 6-64 characters", improves the UX immensely.
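The kind of structured, useful error response worth writing into the contract can be sketched like this. The field names, limits, and response shape here are illustrative, not a standard:

```python
def validate_signup(payload):
    """Return (status_code, body), with human-readable error messages
    the client app can show directly to the user."""
    errors = []
    if "@" not in payload.get("email", ""):
        errors.append({"field": "email",
                       "message": "A valid email address is required"})
    password = payload.get("password", "")
    if not 6 <= len(password) <= 64:
        errors.append({"field": "password",
                       "message": "Passwords must be between 6-64 characters"})
    if errors:
        # One 400 response that names every failing field at once.
        return 400, {"error": "validation_failed", "details": errors}
    return 201, {"status": "created"}

print(validate_signup({"email": "bad", "password": "123"}))
```

Note that the response reports every invalid field in one pass, so the user can fix the whole form at once instead of resubmitting repeatedly.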
### 4. Database

This section may be a bit controversial, but I think 90% of apps really just need a SQL database. I know NoSQL is sexy, but SQL gives you so many extra benefits that I feel it's what you should use for the backend of your app. Of course, there are cases where NoSQL is the better option, but broadly speaking, you should probably just use a SQL database.

SQL adds a lot of flexibility by letting you add, modify, and remove columns. The ability to aggregate data with a simple query is also immensely useful. And finally, the ability to use transactions and be sure all your data is valid will help you sleep better at night.
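The three benefits above (schema changes, aggregation, transactions) fit in a few lines using Python's built-in sqlite3 module with an in-memory database. The `orders` table is invented for this sketch; amounts are stored as integer cents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount_cents INTEGER)")

# Transactions: inside the `with` block, both inserts commit together
# or, if an exception is raised, neither does.
with conn:
    conn.execute("INSERT INTO orders VALUES (1, 1999)")
    conn.execute("INSERT INTO orders VALUES (1, 501)")

# Modifying the schema later is a one-liner.
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'paid'")

# Aggregating data with a simple query.
total = conn.execute(
    "SELECT SUM(amount_cents) FROM orders WHERE user_id = 1").fetchone()[0]
print(total)
```

Production backends would use a server database like the RDS offerings mentioned elsewhere, but the SQL itself carries over largely unchanged.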
The reason I say all of the above is that I recommend looking for someone who is willing to build your API with a SQL database.

### 5. Infrastructure

The last major thing to look for when contracting out your backend is infrastructure. It is essential because you want your app to scale. If 10,000 users join your app in one day for some reason, you want your backend to handle that. Using services like [AWS Elastic Beanstalk][2] or [Heroku][3], you can create APIs which scale up automatically with load. That means if your app takes off overnight, your API will scale with the load and not buckle under it.

Making sure your contractor builds with scalability in mind is key. I wrote a [post on scalable APIs][4] if you're interested in learning more about a good AWS stack.

### Conclusion

It is important to get a quality backend when you contract it out. You're paying a professional to design and build the backend of your app, so if they're lacking in any of the above points, it reduces the chance of success not only for the backend, but for your app. If you make a checklist of these points and go over them with contractors, you should be able to weed out the under-qualified applicants and focus your attention on the contractors who know what they're doing.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/things-to-look-for-when-you-contract-out-the-backend-your-app/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:https://swagger.io/
[2]:https://aws.amazon.com/elasticbeanstalk/
[3]:https://www.heroku.com/
[4]:https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/
88
sources/talk/20171225 Where to Get Your App Backend Built.md
Normal file
@ -0,0 +1,88 @@
Where to Get Your App Backend Built
======

![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)

Building a great app takes lots of work, from designing the views to adding the right transitions and images. One thing often overlooked is the backend, which connects your app to the outside world. A backend that is not up to the same quality as your app can wreck even the most perfect user interface. That is why choosing the right option for your backend budget and needs is essential.

There are three main choices when you're getting a backend built. First, there are agencies: companies with salespeople, project managers, and developers. Second, there are market-rate freelancers: developers who charge market rate for their work and are often in North America or western Europe. Finally, there are budget freelancers: they are inexpensive and usually in parts of Asia and South America.

I am going to break down the pros and cons of each of these options.

### Agency

Agencies are often a safe bet. If you're looking for a more hands-off approach, agencies are usually the way to go: they have project managers who will manage your project and communicate your requirements to developers. This takes some of the work off your plate and frees you up to work on your app. Agencies also usually have a team of developers at their disposal, so if the developer working on your project takes a vacation, they can swap in another developer without much hassle.

With all these upsides there is a downside: price. Having a sales team, a project management team, and a developer team isn't cheap. Agencies often cost quite a bit of money compared to freelancers.

So in summary:

#### Pros

* Hands off
* No single point of failure

#### Cons

* Very expensive

### Market-Rate Freelancer

Another option is market-rate freelancers: highly skilled developers who often have worked in agencies but decided to strike out on their own and find clients themselves. They generally produce high-quality work at a lower cost than agencies.

The downside to freelancers is that, since they're only one person, they might not be available to start your work right away. For high-demand freelancers especially, you may have to wait a few weeks or months before they start development. They are also hard to replace; if they get sick or go on vacation, it can be hard to find someone to continue the work, unless you get a good recommendation from the freelancer.

#### Pros

* Cost-effective
* Similar quality to an agency
* Great for short-term work

#### Cons

* May not be available
* Hard to replace

### Budget Freelancer

The last option is budget freelancers, who are often found on job boards such as Fiverr and Upwork. They work for very little, but that often comes at the cost of quality and communication. Often you will not get what you're looking for, or you'll get very brittle code which buckles under strain.

If you're on a very tight budget, it may be worth rolling the dice on a highly rated budget freelancer, but you must be okay with the risk of potentially throwing the code away.

#### Pros

* Very cheap

#### Cons

* Often low quality
* May not be what you asked for

### Conclusion

Getting the right backend for your app is important. It is usually a good idea to stick with agencies or market-rate freelancers because of the predictability and higher-quality code, but if you're on a very tight budget, rolling the dice with budget freelancers could pay off. At the end of the day, it doesn't matter where the code comes from, as long as it works and does what it's supposed to do.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/where-to-get-your-app-backend-built/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
@ -1,84 +0,0 @@
|
||||
Why isn't open source hot among computer science students?
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_OpenClass_520x292_FINAL_JD.png?itok=ly78pMqu)
|
||||
|
||||
Image by : opensource.com
|
||||
|
||||
The technical savvy and inventive energy of young programmers is alive and well.
|
||||
|
||||
This was clear from the diligent work that I witnessed while participating in this year's [PennApps][1], the nation's largest college hackathon. Over the course of 48 hours, my high school- and college-age peers created projects ranging from a [blink-based communication device for shut-in patients][2] to a [burrito maker with IoT connectivity][3]. The spirit of open source was tangible throughout the event, as diverse groups bonded over a mutual desire to build, the free flow of ideas and tech know-how, fearless experimentation and rapid prototyping, and an overwhelming eagerness to participate.
|
||||
|
||||
Why then, I wondered, wasn't open source a hot topic among my tech geek peers?
|
||||
|
||||
To learn more about what college students think when they hear "open source," I surveyed several college students who are members of the same professional computer science organization I belong to. All members of this community must apply during high school or college and are selected based on their computer science-specific achievements and leadership--whether that means leading a school robotics team, founding a nonprofit to bring coding into insufficiently funded classrooms, or some other worthy endeavor. Given these individuals' accomplishments in computer science, I thought that their perspectives would help in understanding what young programmers find appealing (or unappealing) about open source projects.
|
||||
|
||||
The online survey I prepared and disseminated included the following questions:
|
||||
|
||||
* Do you like to code personal projects? Have you ever contributed to an open source project?
|
||||
* Do you feel like it's more beneficial to you to start your own programming projects, or to contribute to existing open source efforts?
|
||||
* How would you compare the prestige associated with coding for an organization that produces open source software versus proprietary software?
|
||||
|
||||
|
||||
|
||||
Though the overwhelming majority said that they at least occasionally enjoyed coding personal projects in their spare time, most had never contributed to an open source project. When I further explored this trend, a few common preconceptions about open source projects and organizations came to light. To persuade my peers that open source projects are worth their time, and to provide educators and open source organizations insight on their students, I'll address the three top preconceptions.
|
||||
|
||||
### Preconception #1: Creating personal projects from scratch is better experience than contributing to an existing open source project.
|
||||
|
||||
Of the college-age programmers I surveyed, 24 out of 26 asserted that starting their own personal projects felt potentially more beneficial than building on open source ones.
|
||||
|
||||
As a bright-eyed freshman in computer science, I believed this too. I had often heard from older peers that personal projects would make me more appealing to intern recruiters. No one ever mentioned the possibility of contributing to open source projects--so in my mind, it wasn't relevant.
|
||||
|
||||
I now realize that open source projects offer powerful preparation for the real world. Contributing to open source projects cultivates [an awareness of how tools and languages piece together][4] in a way that even individual projects cannot. Moreover, open source is an exercise in coordination and collaboration, building students' [professional skills in communication, teamwork, and problem-solving. ][5]
|
||||
|
||||
### Preconception #2: My coding skills just won't cut it.
|
||||
|
||||
A few respondents said they were intimidated by open source projects, unsure of where to contribute, or fearful of stunting project progress. Unfortunately, feelings of inferiority, which all too often affect female programmers in particular, do not stop at the open source community. In fact, "Imposter Syndrome" may even be magnified, as [open source advocates typically reject bureaucracy][6]--and as difficult as bureaucracy makes internal mobility, it helps newcomers know their place in an organization.
|
||||
|
||||
I remember how intimidated I felt by contribution guidelines while looking through open source projects on GitHub for the first time. However, guidelines are not intended to encourage exclusivity, but to provide a [guiding hand][7]. To that end, I think of guidelines as a way of establishing expectations without relying on a hierarchical structure.
|
||||
|
||||
Several open source projects actively carve a place for new project contributors. [TEAMMATES][8], an educational feedback management tool, is one of the many open source projects that marks issues "up for grabs" for first-timers. In the comments, programmers of all skill levels iron out implementation details, demonstrating that open source is a place for eager new programmers and seasoned software veterans alike. For young programmers who are still hesitant, [a few open source projects][9] have been thoughtful enough to adopt an [Imposter Syndrome disclaimer][10].
|
||||
|
||||
### Preconception #3: Proprietary software firms do better work than open source software organizations.
|
||||
|
||||
Only five of the 26 respondents I surveyed thought that open and proprietary software organizations were considered equal in prestige. This is likely due to the misperception that "open" means "profitless," and thus low-quality (see [Doesn't 'open source' just mean something is free of charge?][11]).
|
||||
|
||||
However, open source software and profitable software are not mutually exclusive. In fact, small and large businesses alike often pay for free open source software to receive technical support services. As [Red Hat CEO Jim Whitehurst explains][12], "We have engineering teams that track every single change--a bug fix, security enhancement, or whatever--made to Linux, and ensure our customers' mission-critical systems remain up-to-date and stable."
|
||||
|
||||
Moreover, the nature of openness facilitates rather than hinders quality by enabling more people to examine source code. [Igor Faletski, CEO of Mobify][13], writes that Mobify's team of "25 software developers and quality assurance professionals" is "no match for all the software developers in the world who might make use of [Mobify's open source] platform. Each of them is a potential tester of, or contributor to, the project."
|
||||
|
||||
Another problem may be that young programmers are not aware of the open source software they interact with every day. I used many tools--including MySQL, Eclipse, Atom, Audacity, and WordPress--for months or even years without realizing they were open source. College students, who often rush to download syllabus-specified software to complete class assignments, may be unaware of which software is open source. This makes open source seem more foreign than it is.
|
||||
|
||||
So students, don't knock open source before you try it. Check out this [list of beginner-friendly projects][14] and [these six starting points][15] to begin your open source journey.
|
||||
|
||||
Educators, remind your students of the open source community's history of successful innovation, and lead them toward open source projects outside the classroom. You will help develop sharper, better-prepared, and more confident students.
|
||||
|
||||
### About the author
|
||||
Susie Choi - Susie is an undergraduate student studying computer science at Duke University. She is interested in the implications of technological innovation and open source principles for issues relating to education and socioeconomic inequality.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/12/students-and-open-source-3-common-preconceptions
|
||||
|
||||
作者:[Susie Choi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/susiechoi
|
||||
[1]:http://pennapps.com/
|
||||
[2]:https://devpost.com/software/blink-9o2iln
|
||||
[3]:https://devpost.com/software/daburrito
|
||||
[4]:https://hackernoon.com/benefits-of-contributing-to-open-source-2c97b6f529e9
|
||||
[5]:https://opensource.com/education/16/8/5-reasons-student-involvement-open-source
|
||||
[6]:https://opensource.com/open-organization/17/7/open-thinking-curb-bureaucracy
|
||||
[7]:https://opensource.com/life/16/3/contributor-guidelines-template-and-tips
|
||||
[8]:https://github.com/TEAMMATES/teammates/issues?q=is%3Aissue+is%3Aopen+label%3Ad.FirstTimers
|
||||
[9]:https://github.com/adriennefriend/imposter-syndrome-disclaimer/blob/master/examples.md
|
||||
[10]:https://github.com/adriennefriend/imposter-syndrome-disclaimer
|
||||
[11]:https://opensource.com/resources/what-open-source
|
||||
[12]:https://hbr.org/2013/01/yes-you-can-make-money-with-op
|
||||
[13]:https://hbr.org/2012/10/open-sourcing-may-be-worth
|
||||
[14]:https://github.com/MunGell/awesome-for-beginners
|
||||
[15]:https://opensource.com/life/16/1/6-beginner-open-source
|
@ -1,3 +1,4 @@
|
||||
XLCYun 翻译中
|
||||
How to get into DevOps
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E)
|
||||
|
@ -1,3 +1,4 @@
|
||||
translating by wyxplus
|
||||
How to price cryptocurrencies
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
translating by leowang
|
||||
Moving to Linux from dated Windows machines
|
||||
======
|
||||
|
||||
|
@ -1,75 +0,0 @@
|
||||
Open source software: 20 years and counting
|
||||
============================================================
|
||||
|
||||
### On the 20th anniversary of the coining of the term "open source software," how did it rise to dominance and what's next?
|
||||
|
||||
![Open source software: 20 years and counting](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/2cents.png?itok=XlT7kFNY "Open source software: 20 years and counting")
|
||||
Image by : opensource.com
|
||||
|
||||
Twenty years ago, in February 1998, the term "open source" was first applied to software. Soon afterwards, the Open Source Definition was created and the seeds that became the Open Source Initiative (OSI) were sown. As the OSD’s author [Bruce Perens relates][9],
|
||||
|
||||
> “Open source” is the proper name of a campaign to promote the pre-existing concept of free software to business, and to certify licenses to a rule set.
|
||||
|
||||
Twenty years later, that campaign has proven wildly successful, beyond the imagination of anyone involved at the time. Today open source software is literally everywhere. It is the foundation for the internet and the web. It powers the computers and mobile devices we all use, as well as the networks they connect to. Without it, cloud computing and the nascent Internet of Things would be impossible to scale and perhaps to create. It has enabled new ways of doing business to be tested and proven, allowing giant corporations like Google and Facebook to start from the top of a mountain others already climbed.
|
||||
|
||||
Like any human creation, it has a dark side as well. It has also unlocked dystopian possibilities for surveillance and the inevitably consequent authoritarian control. It has provided criminals with new ways to cheat their victims and unleashed the darkness of bullying delivered anonymously and at scale. It allows destructive fanatics to organize in secret without the inconvenience of meeting. All of these are shadows cast by useful capabilities, just as every human tool throughout history has been used both to feed and care and to harm and control. We need to help the upcoming generation strive for irreproachable innovation. As [Richard Feynman said][10],
|
||||
|
||||
> To every man is given the key to the gates of heaven. The same key opens the gates of hell.
|
||||
|
||||
As open source has matured, the way it is discussed and understood has also matured. The first decade was one of advocacy and controversy, while the second was marked by adoption and adaptation.
|
||||
|
||||
1. In the first decade, the key question concerned business models—“how can I contribute freely yet still be paid?”—while during the second, more people asked about governance—“how can I participate yet keep control/not be controlled?”
|
||||
|
||||
2. Open source projects of the first decade were predominantly replacements for off-the-shelf products; in the second decade, they were increasingly components of larger solutions.
|
||||
|
||||
3. Projects of the first decade were often run by informal groups of individuals; in the second decade, they were frequently run by charities created on a project-by-project basis.
|
||||
|
||||
4. Open source developers of the first decade were frequently devoted to a single project and often worked in their spare time. In the second decade, they were increasingly employed to work on a specific technology—professional specialists.
|
||||
|
||||
5. While open source was always intended as a way to promote software freedom, during the first decade, conflict arose with those preferring the term “free software.” In the second decade, this conflict was largely ignored as open source adoption accelerated.
|
||||
|
||||
So what will the third decade bring?
|
||||
|
||||
1. _The complexity business model_ —The predominant business model will involve monetizing the solution of the complexity arising from the integration of many open source parts, especially from deployment and scaling. Governance needs will reflect this.
|
||||
|
||||
2. _Open source mosaics_ —Open source projects will be predominantly families of component parts, together with being built into stacks of components. The resultant larger solutions will be a mosaic of open source parts.
|
||||
|
||||
3. _Families of projects_ —More and more projects will be hosted by consortia/trade associations like the Linux Foundation and OpenStack, and by general-purpose charities like Apache and the Software Freedom Conservancy.
|
||||
|
||||
4. _Professional generalists_ —Open source developers will increasingly be employed to integrate many technologies into complex solutions and will contribute to a range of projects.
|
||||
|
||||
5. _Software freedom redux_ —As new problems arise, software freedom (the application of the Four Freedoms to user and developer flexibility) will increasingly be applied to identify solutions that work for collaborative communities and independent deployers.
|
||||
|
||||
I’ll be expounding on all this in conference keynotes around the world during 2018. Watch for [OSI’s 20th Anniversary World Tour][11]!
|
||||
|
||||
_This article was originally published on [Meshed Insights Ltd.][2] and is reprinted with permission. This article, as well as my work at OSI, is supported by [Patreon patrons][3]._
|
||||
|
||||
### About the author
|
||||
|
||||
[![Simon Phipps (smiling)](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-2305.jpg?itok=CefW_OYh)][12] Simon Phipps - Computer industry and open source veteran Simon Phipps started [Public Software][4], a European host for open source projects, and volunteers as President at OSI and a director at The Document Foundation. His posts are sponsored by [Patreon patrons][5] - become one if you'd like to see more! Over a 30+ year career he has been involved at a strategic level in some of the world’s leading... [more about Simon Phipps][6][More about me][7]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/open-source-20-years-and-counting
|
||||
|
||||
作者:[Simon Phipps][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/simonphipps
|
||||
[1]:https://opensource.com/article/18/2/open-source-20-years-and-counting?rate=TZxa8jxR6VBcYukor0FDsTH38HxUrr7Mt8QRcn0sC2I
|
||||
[2]:https://meshedinsights.com/2017/12/21/20-years-and-counting/
|
||||
[3]:https://patreon.com/webmink
|
||||
[4]:https://publicsoftware.eu/
|
||||
[5]:https://patreon.com/webmink
|
||||
[6]:https://opensource.com/users/simonphipps
|
||||
[7]:https://opensource.com/users/simonphipps
|
||||
[8]:https://opensource.com/user/12532/feed
|
||||
[9]:https://perens.com/2017/09/26/on-usage-of-the-phrase-open-source/
|
||||
[10]:https://www.brainpickings.org/2013/07/19/richard-feynman-science-morality-poem/
|
||||
[11]:https://opensource.org/node/905
|
||||
[12]:https://opensource.com/users/simonphipps
|
||||
[13]:https://opensource.com/users/simonphipps
|
||||
[14]:https://opensource.com/users/simonphipps
|
@ -0,0 +1,62 @@
|
||||
Security Is Not an Absolute
|
||||
======
|
||||
|
||||
If there’s one thing I wish people from outside the security industry knew when dealing with information security, it’s that **Security is not an absolute**. Most of the time, it’s not even quantifiable. Even in the case of particular threat models, it’s often impossible to make statements about the security of a system with certainty.
|
||||
|
||||
At work, I deal with a lot of very smart people who are not “security people”, but are well-meaning and trying to do the right thing. Online, I sometimes find myself in conversations on [/r/netsec][1], [/r/netsecstudents][2], [/r/asknetsec][3], or [security.stackexchange][4] where someone wants to know something about information security. Either way, it’s quite common that someone asks the fateful question: “Is this secure?”. There are actually only two answers to this question, and neither one is “Yes.”
|
||||
|
||||
The first answer is, fairly obviously, “No.” There are some ideas that are not secure under any reasonable definition of security. Imagine an employer that makes the PIN for your payroll system the day and month on which you started your new job. Clearly, all it takes is someone posting “started my new job today!” to social media, and their PIN has been outed. Consider transporting an encrypted hard drive with the password on a sticky note attached to the outside of the drive. Both of these systems have employed some form of “security control” (even if I use the term loosely), and both are clearly insecure to even the most rudimentary of attackers. Consequently, answering “Is this secure?” with a firm “No” seems appropriate.
|
||||
|
||||
The second answer is more nuanced: “It depends.” What it depends on, and whether those conditions exist in the system in use, are what many security professionals get paid to evaluate. For example, consider the employer in the previous paragraph. Instead of using a fixed scheme for PINs, they now generate a random 4-digit PIN and mail it to each new employee. Is this secure? That all depends on the threat model being applied to the scenario. If we allow an attacker unlimited attempts to log in as that user, then no 4-digit PIN (random or deterministic) is reasonably secure. On average, an attacker will need about 5,000 requests to find the valid PIN, which a very basic script can manage in tens of minutes. If, on the other hand, we lock the account after 10 failed attempts, then we’ve reduced the attacker to a 0.1% chance of success for a given account. Is this secure? For a single account, this is probably reasonably secure (although most users might be uncomfortable at even a 1 in 1000 chance of an attacker succeeding against their personal account), but what if the attacker has a list of 1000 usernames? The attacker now has a **63%** chance of successfully accessing at least one account. I think most businesses would find those odds very much against their favor.
|
||||
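The odds described above can be checked directly. Here is a quick sketch assuming the scenario as stated: a 4-digit PIN space of 10,000 values and a lockout after 10 failed attempts, with the attacker trying distinct PINs per account.

```python
def single_account_success(pin_space=10_000, attempts=10):
    """Probability of guessing one account's PIN before lockout."""
    return attempts / pin_space

def at_least_one_success(accounts=1000, pin_space=10_000, attempts=10):
    """Probability of compromising at least one of `accounts` accounts:
    the complement of failing against every single one."""
    p = single_account_success(pin_space, attempts)
    return 1 - (1 - p) ** accounts

print(f"per-account: {single_account_success():.3%}")          # -> 0.100%
print(f"across 1000 accounts: {at_least_one_success():.1%}")   # -> 63.2%
```

Note how quickly a per-account probability that sounds negligible compounds once the attacker can spread attempts across many accounts.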
|
||||
So why can’t we ever come up with an answer of “Yes, this is a secure system”? Well, there’s several factors at play here. The first is that very little in life in general is an absolute:
|
||||
|
||||
* Your doctor cannot tell you with certainty that you will be alive tomorrow.
|
||||
* A seismologist can’t say that there absolutely won’t be a 9.0 earthquake that levels a big chunk of the West Coast.
|
||||
* Your car manufacturer cannot guarantee that the 4 wheels on your car will not fall off on your way to work tomorrow.
|
||||
|
||||
|
||||
|
||||
However, all of these possibilities are very remote events. Most people are comfortable with these probabilities, largely because they do not think much about them, but even if they did, they would believe that it would not happen to them. (And almost always, they would be correct in that assumption.)
|
||||
|
||||
Unfortunately, in information security, we have three things working against us:
|
||||
|
||||
* The risks are much less understood by those seeking to understand them.
|
||||
* The reality is that there are enough security threats that are **much** more common than the events above.
|
||||
* The threats against which security must guard are **adaptive**.
|
||||
|
||||
|
||||
|
||||
Because most people have a hard time reasoning about the likelihood of attacks and threats against them, they seek absolute reassurance. They don’t want to be told “it depends”, they just want to hear “yes, you’re fine.” Many of these individuals are the hypochondriacs of the information security world – they think every possible attack will get them, and they want absolute reassurance they’re safe from those attacks. Alternatively, they don’t understand that there are degrees of security and threat models, and just want to be reassured that they are perfectly secure. Either way, the effect is the same – they don’t understand, but are afraid, and so want the reassurance of complete security.
|
||||
|
||||
We’re in an era where security breaches are unfortunately common, and developers and users alike are hearing about these vulnerabilities and breaches all the time. This causes them to pay far more attention to security than they otherwise would. By itself, this isn’t bad – all of us in the industry have been trying to get everyone’s attention about security issues for decades. Getting it now is better late than never. But because we’re so far behind the curve, with breaches being so common, everyone is rushing to find out their risk and get reassurance now. Rather than consider the nuances of the situation, they just want a simple answer to “Am I secure?”
|
||||
|
||||
The last of these issues, however, is also the most unique to information security. For decades, we’ve looked for the formula to make a system perfectly secure. However, each countermeasure or security system is quickly defeated by attackers. We’re in a cat-and-mouse game, rather than an engineering discipline.
|
||||
|
||||
This isn’t to say that security is not an engineering practice – it certainly is in many ways (and my official title claims that I am an engineer) – but it differs from other engineering areas. The forces faced by a building do not change in the face of design changes by the structural engineer. Gravity remains a constant, wind forces are predictable for a given design, and the seismic nature of an area is approximately known. Making a building's doors stronger does not suddenly increase the wind forces on its windows. In security, however, when we “strengthen the doors”, the attackers do turn to the “windows” of our system. Our threats are **adaptive** – for each control we implement, they adapt to attempt to circumvent that control. For this reason, a system that was believed secure against the known threats one year is completely broken the next.
|
||||
|
||||
Another form of the security absolutism is those that realize there are degrees of security, but want to take it to an almost ridiculous level of paranoia. Nearly always, these seem to be interested in forms of cryptography – perhaps because cryptography offers numbers that can be tweaked, giving an impression of differing levels of security.
|
||||
|
||||
* Generating RSA encryption keys of over 4k bits in length, even though all cryptographers agree this is pointless.
|
||||
* Asking why AES-512 doesn’t exist, even though SHA-512 does. (Because the length of a hash and the length of a key do not equal in effective strength against attacks.)
|
||||
* Setting up bizarre browser settings and then complaining about websites being broken. (Disabling all JavaScript, all cookies, and all ciphers that use keys shorter than 256 bits or lack perfect forward secrecy, etc.)
|
||||
|
||||
|
||||
|
||||
So the next time you want to know “Is this secure?”, consider the threat model: what are you trying to defend against? Recognize that there are no security absolutes and guarantees, and that good security engineering practice often involves compromise. Sometimes the compromise is one of usability or utility, sometimes the compromise involves working in a less-than-perfect world.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://systemoverlord.com/2018/02/05/security-is-not-an-absolute.html
|
||||
|
||||
作者:[David][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://systemoverlord.com/about
|
||||
[1]:https://reddit.com/r/netsec
|
||||
[2]:https://reddit.com/r/netsecstudents
|
||||
[3]:https://reddit.com/r/asknetsec
|
||||
[4]:https://security.stackexchange.com
|
@ -0,0 +1,75 @@
|
||||
Building Slack for the Linux community and adopting snaps
|
||||
======
|
||||
![][1]
|
||||
|
||||
Used by millions around the world, [Slack][2] is an enterprise software platform that allows teams and businesses of all sizes to communicate effectively. Slack works seamlessly with other software tools within a single integrated environment, providing an accessible archive of an organisation’s communications, information and projects. Although Slack has grown at a rapid rate in the four years since its inception, its desktop engineering team, which works across Windows, macOS, and Linux, currently consists of just four people. We spoke to Felix Rieseberg, Staff Software Engineer on this team, following the release of Slack’s first [snap last month][3] to discover more about the company’s attitude to the Linux community and why they decided to build a snap.
|
||||
|
||||
[Install Slack snap][4]
|
||||
|
||||
### Can you tell us about the Slack snap which has been published?
|
||||
|
||||
We launched our first snap last month as a new way to distribute to our Linux community. In the enterprise space, we find that people tend to adopt new technology at a slower pace than consumers, so we will continue to offer a .deb package.
|
||||
|
||||
### What level of interest do you see for Slack from the Linux community?
|
||||
|
||||
I’m excited that interest for Slack is growing across all platforms, so it is hard for us to say whether the interest coming out of the Linux community is different from the one we’re generally seeing. However, it is important for us to meet users wherever they do their work. We have a dedicated QA engineer focusing entirely on Linux and we really do try hard to deliver the best possible experience.
|
||||
|
||||
We generally find it is a little harder to build for Linux, than say Windows, as there is a less predictable base to work from – and this is an area where the Linux community truly shines. We have a fairly large number of users that are quite helpful when it comes to reporting bugs and hunting root causes down.
|
||||
|
||||
### How did you find out about snaps?
|
||||
|
||||
Martin Wimpress at Canonical reached out to me and explained the concept of snaps. Honestly, I was initially hesitant – even though I use Ubuntu – because it seemed like yet another standard to build and maintain. However, once I understood the benefits, I was convinced it was a worthwhile investment.
|
||||
|
||||
### What was the appeal of snaps that made you decide to invest in them?
|
||||
|
||||
Without doubt, the biggest reason we decided to build the snap is the updating feature. We at Slack make heavy use of web technologies, which in turn allows us to offer a wide variety of features – like the integration of YouTube videos or Spotify playlists. Much like a browser, that means that we frequently need to update the application.
|
||||
|
||||
On macOS and Windows, we already had a dedicated auto-updater that doesn’t require the user to even think about updates. We have found that any sort of interruption, even for an update, is an annoyance that we’d like to avoid. Therefore, the automatic updates via snaps seemed far more seamless and easy.
|
||||
|
||||
### How does building snaps compare to other forms of packaging you produce? How easy was it to integrate with your existing infrastructure and process?
|
||||
|
||||
As far as Linux is concerned, we have not tried other “new” packaging formats, but we’ll never say never. Snaps were an easy choice given that the majority of our Linux customers do use Ubuntu. The fact that snaps also run on other distributions was a decent bonus. I think it is really neat how Canonical is making snaps cross-distro rather than focusing on just Ubuntu.
|
||||
|
||||
Building it was surprisingly easy: We have one unified build process that creates installers and packages – and our snap creation simply takes the .deb package and churns out a snap. For other technologies, we sometimes had to build in-house tools to support our buildchain, but the `snapcraft` tool turned out to be just the right thing. The team at Canonical were incredibly helpful to push it through as we did experience a few problems along the way.
|
||||
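The interview doesn't include Slack's actual recipe, but the deb-to-snap approach Rieseberg describes can be sketched as a minimal, hypothetical `snapcraft.yaml` using the `dump` plugin with a `.deb` source. All names and paths here are invented for illustration:

```yaml
name: myapp                 # hypothetical application name
version: '1.0.0'
summary: Example app repackaged from an existing .deb
description: |
  Illustrative sketch only -- not Slack's actual build recipe.
confinement: strict
grade: stable

apps:
  myapp:
    command: usr/bin/myapp  # path inside the unpacked .deb contents

parts:
  myapp:
    plugin: dump            # unpack the existing package as-is
    source: ./myapp_1.0.0_amd64.deb
    source-type: deb
```

Running `snapcraft` against a recipe like this reuses the output of an existing Debian packaging pipeline, which matches the "one unified build process" workflow described above.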
|
||||
### How do you see the store changing the way users find and install your software?
|
||||
|
||||
What is really unique about Slack is that people don’t just stumble upon it – they know about it from elsewhere and actively try to find it. Therefore, our levels of awareness are already high but having the snap available in the store, I hope, will make installation a lot easier for our users.
|
||||
|
||||
We always try to do the best for our users. The more convinced we become that it is better than other installation options, the more we will recommend the snap to our users.
|
||||
|
||||
### What are your expectations or already seen savings by using snaps instead of having to package for other distros?
|
||||
|
||||
We expect the snap to offer more convenience for our users and ensure they enjoy using Slack more. From our side, the snap will save time on customer support as users won’t be stuck on previous versions which will naturally resolve a lot of issues. Having the snap is an additional bonus for us and something to build on, rather than displacing anything we already have.
|
||||
|
||||
### What release channels (edge/beta/candidate/stable) in the store are you using or plan to use, if any?
|
||||
|
||||
We used the edge channel exclusively in the development to share with the team at Canonical. Slack for Linux as a whole is still in beta, but long-term, having the options for channels is interesting and being able to release versions to interested customers a little earlier will certainly be beneficial.
|
||||
|
||||
### How do you think packaging your software as a snap helps your users? Did you get any feedback from them?
|
||||
|
||||
Installation and updating generally being easier will be the big benefit to our users. Long-term, the question is “Will users that installed the snap experience less problems than other customers?” I have a decent amount of hope that the built-in dependencies in snaps make it likely.
|
||||
|
||||
### What advice or knowledge would you share with developers who are new to snaps?
|
||||
|
||||
I would recommend starting with the Debian package to build your snap – that was shockingly easy. It also keeps the initial scope small, so you avoid being overwhelmed. It is a fairly small time investment and probably worth it. Also, if you can, try to find someone at Canonical to work with – they have amazing engineers.
|
||||
|
||||
### Where do you see the biggest opportunity for development?
|
||||
|
||||
We are taking it step by step currently – first get people on the snap, and build from there. People using it will already be more secure as they will benefit from the latest updates.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://insights.ubuntu.com/2018/02/06/building-slack-for-the-linux-community-and-adopting-snaps/
|
||||
|
||||
作者:[Sarah][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://insights.ubuntu.com/author/sarahfd/
|
||||
[1]:https://insights.ubuntu.com/wp-content/uploads/a115/Slack_linux_screenshot@2x-2.png
|
||||
[2]:https://slack.com/
|
||||
[3]:https://insights.ubuntu.com/2018/01/18/canonical-brings-slack-to-the-snap-ecosystem/
|
||||
[4]:https://snapcraft.io/slack/
|
@ -0,0 +1,49 @@
|
||||
How to start an open source program in your company
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_openisopen.png?itok=FjmDxIaL)
|
||||
|
||||
Many internet-scale companies, including Google, Facebook, and Twitter, have established formal open source programs (sometimes referred to as open source program offices, or OSPOs for short), a designated place where open source consumption and production is supported inside a company. With such an office in place, any business can execute its open source strategies in clear terms, giving the company tools needed to make open source a success. An open source program office's responsibilities may include establishing policies for code use, distribution, selection, and auditing; engaging with open source communities; training developers; and ensuring legal compliance.
|
||||
|
||||
Internet-scale companies aren't the only ones establishing open source programs; studies show that [65% of companies][1] across industries are using and contributing to open source. In the last couple of years we’ve seen [VMware][2], [Amazon][3], [Microsoft][4], and even the [UK government][5] hire open source leaders and/or create open source programs. Having an open source strategy has become critical for businesses and even governments, and all organizations should be following in their footsteps.
|
||||
|
||||
### How to start an open source program
|
||||
|
||||
Although each open source office will be customized to a specific organization’s needs, there are standard steps that every company goes through. These include:
|
||||
|
||||
* **Finding a leader:** Identifying the right person to lead the open source program is the first step. The [TODO Group][6] maintains a list of [sample job descriptions][7] that may be helpful in finding candidates.
|
||||
* **Deciding on the program structure:** There are a variety of ways to fit an open source program office into an organization's existing structure, depending on its focus. Companies with large intellectual property portfolios may be most comfortable placing the office within the legal department. Engineering-driven organizations may choose to place the office in an engineering department, especially if the focus of the office is to improve developer productivity. Others may want the office to be within the marketing department to support sales of open source products. For inspiration, the TODO Group offers [open source program case studies][8] that can be useful.
* **Setting policies and processes:** There needs to be a standardized method for implementing the organization’s open source strategy. The policies, which should require as little oversight as possible, lay out the requirements and rules for working with open source across the organization. They should be clearly defined, easily accessible, and even automated with tooling. Ideally, employees should be able to question policies and provide recommendations for improving or revising them. Numerous organizations active in open source, such as Google, [publish their policies publicly][9], which can be a good place to start. The TODO Group offers examples of other [open source policies][10] organizations can use as resources.
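
Policies that are "automated with tooling" often start with something as simple as a dependency-license gate in CI. Below is a minimal sketch; the allow-list and the package data are hypothetical, and a real program office would maintain its own list (likely keyed on SPDX identifiers):

```python
# Hypothetical license allow-list; a real OSPO would define and maintain this.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def find_license_violations(dependencies):
    """Return dependencies whose declared license is not on the allow-list.

    dependencies: mapping of package name -> SPDX license identifier.
    """
    return {name: lic for name, lic in dependencies.items()
            if lic not in ALLOWED_LICENSES}

# Example: one compliant dependency, one that would fail the policy gate.
deps = {"requests": "Apache-2.0", "left-pad": "WTFPL"}
violations = find_license_violations(deps)
```

A CI job could fail the build whenever `violations` is non-empty, keeping the policy enforced with minimal human oversight.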
### A worthy step
Opening an open source program office is a big step for most organizations, especially if they are (or are transitioning into) a software company. The benefits to the organization are tremendous and will more than make up for the investment in the long run—not only in employee satisfaction but also in developer efficiency. There are many resources to help on the journey. The TODO Group guides [How to Create an Open Source Program][11], [Measuring Your Open Source Program's Success][12], and [Tools for Managing Open Source Programs][13] are great starting points.
Open source will truly be sustainable as more companies formalize programs to contribute back to these projects. I hope these resources are useful to you, and I wish you luck on your open source program journey.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/how-start-open-source-program-your-company
作者:[Chris Aniszczyk][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/caniszczyk
[1]:https://www.blackducksoftware.com/2016-future-of-open-source
[2]:http://www.cio.com/article/3095843/open-source-tools/vmware-today-has-a-strong-investment-in-open-source-dirk-hohndel.html
[3]:http://fortune.com/2016/12/01/amazon-open-source-guru/
[4]:https://opensource.microsoft.com/
[5]:https://www.linkedin.com/jobs/view/169669924
[6]:http://todogroup.org
[7]:https://github.com/todogroup/job-descriptions
[8]:https://github.com/todogroup/guides/tree/master/casestudies
[9]:https://opensource.google.com/docs/why/
[10]:https://github.com/todogroup/policies
[11]:https://github.com/todogroup/guides/blob/master/creating-an-open-source-program.md
[12]:https://github.com/todogroup/guides/blob/master/measuring-your-open-source-program.md
[13]:https://github.com/todogroup/guides/blob/master/tools-for-managing-open-source-programs.md
@ -0,0 +1,99 @@
UQDS: A software-development process that puts quality first
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag)
The Ultimate Quality Development System (UQDS) is a software development process that provides clear guidelines for how to use branches, tickets, and code reviews. It was invented more than a decade ago by Divmod and adopted by [Twisted][1], an event-driven framework for Python that underlies popular commercial platforms like HipChat as well as open source projects like Scrapy (a web scraper).
Divmod, sadly, is no longer around—it has gone the way of many startups. Luckily, since many of its products were open source, its legacy lives on.
When Twisted was a young project, there was no clear process for when code was "good enough" to go in. As a result, while some parts were highly polished and reliable, others were alpha quality software—with no way to tell which was which. UQDS was designed as a process to help an existing project with definite quality challenges ramp up its quality while continuing to add features and become more useful.
UQDS has helped the Twisted project evolve from having frequent regressions and needing multiple release candidates to get a working version, to achieving its current reputation of stability and reliability.
### UQDS's building blocks
UQDS was invented by Divmod back in 2006. At that time, Continuous Integration (CI) was in its infancy and modern version control systems, which allow easy branch merging, were barely proofs of concept. Although Divmod did not have today's modern tooling, it put together CI, some ad-hoc tooling to make [Subversion branches][2] work, and a lot of thought into a working process. Thus the UQDS methodology was born.
UQDS is based upon fundamental building blocks, each with their own carefully considered best practices:
1. Tickets
2. Branches
3. Tests
4. Reviews
5. No exceptions
Let's go into each of those in a little more detail.
#### Tickets
In a project using the UQDS methodology, no change is allowed to happen if it's not accompanied by a ticket. This creates a written record of what change is needed and—more importantly—why.
* Tickets should define clear, measurable goals.
* Work on a ticket does not begin until the ticket contains goals that are clearly defined.
#### Branches
Branches in UQDS are tightly coupled with tickets. Each branch must solve one complete ticket, no more and no less. If a branch addresses either more or less than a single ticket, it means there was a problem with the ticket definition—or with the branch. Tickets might be split or merged, or a branch split and merged, until congruence is achieved.
Enforcing that each branch addresses no more and no less than a single ticket—which corresponds to one logical, measurable change—allows a project using UQDS to have fine-grained control over the commits: a single change can be reverted, or changes may even be applied in a different order than they were committed. This helps the project maintain a stable and clean codebase.
#### Tests
UQDS relies upon automated testing of all sorts, including unit, integration, regression, and static tests. In order for this to work, all relevant tests must pass at all times. Tests that don't pass must either be fixed or, if no longer relevant, be removed entirely.
Tests are also coupled with tickets. All new work must include tests that demonstrate that the ticket goals are fully met. Without this, the work won't be merged no matter how good it may seem to be.
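
As a sketch of what coupling tests to tickets can look like in practice, suppose a hypothetical ticket #1234 asks for an iterative Fibonacci helper; under UQDS, the branch would not merge without tests demonstrating the ticket's goals:

```python
def fibonacci(n):
    """Iterative Fibonacci helper. Added for hypothetical ticket #1234."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Tests named after the ticket, proving its measurable goals are met
# (runnable with pytest, or directly as plain assertions).
def test_ticket_1234_base_cases():
    assert fibonacci(0) == 0
    assert fibonacci(1) == 1

def test_ticket_1234_known_value():
    assert fibonacci(10) == 55
```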
A side effect of the focus on tests is that the only platforms that a UQDS-using project can say it supports are those on which the tests run with a CI framework—and where passing the test on the platform is a condition for merging a branch. Without this restriction on supported platforms, the quality of the project is not Ultimate.
#### Reviews
While automated tests are important to the quality ensured by UQDS, the methodology never loses sight of the human factor. Every branch commit requires code review, and each review must follow very strict rules:
1. Each commit must be reviewed by a different person than the author.
2. Start with a comment thanking the contributor for their work.
3. Make a note of something that the contributor did especially well (e.g., "that's the perfect name for that variable!").
4. Make a note of something that could be done better (e.g., "this line could use a comment explaining the choices.").
5. Finish with directions for an explicit next step, typically either merge as-is, fix and merge, or fix and submit for re-review.
These rules respect the time and effort of the contributor while also increasing the sharing of knowledge and ideas. The explicit next step allows the contributor to have a clear idea on how to make progress.
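
As one illustration (not part of UQDS itself), the review rules can be captured in a small helper that composes a comment in the required shape; all names here are hypothetical:

```python
def compose_review(author, praise, improvement, next_step):
    """Compose a UQDS-style review comment: thanks, praise, critique, next step."""
    return "\n".join([
        f"Thanks for the patch, {author}!",
        f"Especially well done: {praise}",
        f"Could be improved: {improvement}",
        f"Next step: {next_step}",
    ])

comment = compose_review(
    author="alice",
    praise="that's the perfect name for that variable",
    improvement="this line could use a comment explaining the choices",
    next_step="fix and merge",
)
```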
#### No exceptions
In any process, it's easy to come up with reasons why you might need to flex the rules just a little bit to let this thing or that thing slide through the system. The most important fundamental building block of UQDS is that there are no exceptions. The entire community works together to make sure that the rules do not flex, not for any reason whatsoever.
Knowing that all code has been approved by a different person than the author, that the code has complete test coverage, that each branch corresponds to a single ticket, and that this ticket is well considered and complete brings peace of mind that is too valuable to risk losing, even for a single small exception. The goal is quality, and quality does not come from compromise.
### A downside to UQDS
While UQDS has helped Twisted become a highly stable and reliable project, this reliability hasn't come without cost. We quickly found that the review requirements caused a slowdown and backlog of commits to review, leading to slower development. The answer to this wasn't to compromise on quality by getting rid of UQDS; it was to refocus the community priorities such that reviewing commits became one of the most important ways to contribute to the project.
To help with this, the community developed a bot in the [Twisted IRC channel][3] that will reply to the command `review tickets` with a list of tickets that still need review. The [Twisted review queue][4] website returns a prioritized list of tickets for review. Finally, the entire community keeps close tabs on the number of tickets that need review. It's become an important metric the community uses to gauge the health of the project.
### Learn more
The best way to learn about UQDS is to [join the Twisted Community][5] and see it in action. If you'd like more information about the methodology and how it might help your project reach a high level of reliability and stability, have a look at the [UQDS documentation][6] in the Twisted wiki.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/uqds
作者:[Moshe Zadka][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/moshez
[1]:https://twistedmatrix.com/trac/
[2]:http://structure.usc.edu/svn/svn.branchmerge.html
[3]:http://webchat.freenode.net/?channels=%23twisted
[4]:https://twisted.reviews
[5]:https://twistedmatrix.com/trac/wiki/TwistedCommunity
[6]:https://twistedmatrix.com/trac/wiki/UltimateQualityDevelopmentSystem
@ -0,0 +1,102 @@
Why Linux is better than Windows or macOS for security
======
![](https://images.idgesg.net/images/article/2018/02/linux_security_vs_macos_and_windows_locks_data_thinkstock-100748607-large.jpg)
Enterprises invest a lot of time, effort and money in keeping their systems secure. The most security-conscious might have a security operations center. They of course use firewalls and antivirus tools. They probably spend a lot of time monitoring their networks, looking for telltale anomalies that could indicate a breach. What with IDS, SIEM and NGFWs, they deploy a veritable alphabet of defenses.
But how many have given much thought to one of the cornerstones of their digital operations: the operating systems deployed on the workforce’s PCs? Was security even a factor when the desktop OS was selected?
This raises a question that every IT person should be able to answer: Which operating system is the most secure for general deployment?
We asked some experts what they think of the security of these three choices: Windows, the ever-more-complex platform that’s easily the most popular desktop system; macOS, the FreeBSD Unix-based operating system that powers Apple Macintosh systems; and Linux, by which we mean all the various Linux distributions and related Unix-based systems.
### How we got here
One reason enterprises might not have evaluated the security of the OS they deployed to the workforce is that they made the choice years ago. Go back far enough and all operating systems were reasonably safe, because the business of hacking into them and stealing data or installing malware was in its infancy. And once an OS choice is made, it’s hard to consider a change. Few IT organizations would want the headache of moving a globally dispersed workforce to an entirely new OS. Heck, they get enough pushback when they move users to a new version of their OS of choice.
Still, would it be wise to reconsider? Are the three leading desktop OSes different enough in their approach to security to make a change worthwhile?
Certainly the threats confronting enterprise systems have changed in the last few years. Attacks have become far more sophisticated. The lone teen hacker that once dominated the public imagination has been supplanted by well-organized networks of criminals and shadowy, government-funded organizations with vast computing resources.
Like many of you, I have firsthand experience of the threats that are out there: I have been infected by malware and viruses on numerous Windows computers, and I even had macro viruses that infected files on my Mac. More recently, a widespread automated hack circumvented the security on my website and infected it with malware. The effects of such malware were always initially subtle, something you wouldn’t even notice, until the malware ended up so deeply embedded in the system that performance started to suffer noticeably. One striking thing about the infestations was that I was never specifically targeted by the miscreants; nowadays, it’s as easy to attack 100,000 computers with a botnet as it is to attack a dozen.
### Does the OS really matter?
The OS you deploy to your users does make a difference for your security stance, but it isn’t a sure safeguard. For one thing, a breach these days is more likely to come about because an attacker probed your users, not your systems. A [survey][1] of hackers who attended a recent DEFCON conference revealed that “84 percent use social engineering as part of their attack strategy.” Deploying a secure operating system is an important starting point, but without user education, strong firewalls and constant vigilance, even the most secure networks can be invaded. And of course there’s always the risk of user-downloaded software, extensions, utilities, plug-ins and other software that appears benign but becomes a path for malware to appear on the system.
And no matter which platform you choose, one of the best ways to keep your system secure is to ensure that you apply software updates promptly. Once a patch is in the wild, after all, the hackers can reverse engineer it and find a new exploit they can use in their next wave of attacks.
And don’t forget the basics. Don’t use root, and don’t grant guest access to even older servers on the network. Teach your users how to pick really good passwords and arm them with tools such as [1Password][2] that make it easier for them to have different passwords on every account and website they use.
Because the bottom line is that every decision you make regarding your systems will affect your security, even the operating system your users do their work on.
### Windows, the popular choice
If you’re a security manager, it is extremely likely that the questions raised by this article could be rephrased like so: Would we be more secure if we moved away from Microsoft Windows? To say that Windows dominates the enterprise market is to understate the case. [NetMarketShare][4] estimates that a staggering 88% of all computers on the internet are running a version of Windows.
If your systems fall within that 88%, you’re probably aware that Microsoft has continued to beef up security in the Windows system. Among its improvements have been rewriting and re-rewriting its operating system codebase, adding its own antivirus software system, improving firewalls and implementing a sandbox architecture, where programs can’t access the memory space of the OS or other applications.
But the popularity of Windows is a problem in itself. The security of an operating system can depend to a large degree on the size of its installed base. For malware authors, Windows provides a massive playing field. Concentrating on it gives them the most bang for their efforts.
As Troy Wilkinson, CEO of Axiom Cyber Solutions, explains, “Windows always comes in last in the security world for a number of reasons, mainly because of the adoption rate of consumers. With a large number of Windows-based personal computers on the market, hackers historically have targeted these systems the most.”
It’s certainly true that, from Melissa to WannaCry and beyond, much of the malware the world has seen has been aimed at Windows systems.
### macOS and security through obscurity
If the most popular OS is always going to be the biggest target, then can using a less popular option ensure security? That idea is a new take on the old — and entirely discredited — concept of “security through obscurity,” which held that keeping the inner workings of software proprietary and therefore secret was the best way to defend against attacks.
Wilkinson flatly states that macOS X “is more secure than Windows,” but he hastens to add that “macOS used to be considered a fully secure operating system with little chance of security flaws, but in recent years we have seen hackers crafting additional exploits against macOS.”
In other words, the attackers are branching out and not ignoring the Mac universe.
Security researcher Lee Muson of Comparitech says that “macOS is likely to be the pick of the bunch” when it comes to choosing a more secure OS, but he cautions that it is not impenetrable, as once thought. Its advantage is that “it still benefits from a touch of security through obscurity versus the still much larger target presented by Microsoft’s offering.”
Joe Moore of Wolf Solutions gives Apple a bit more credit, saying that “off the shelf, macOS X has a great track record when it comes to security, in part because it isn’t as widely targeted as Windows and in part because Apple does a pretty good job of staying on top of security issues.”
### And the winner is …
You probably knew this from the beginning: The clear consensus among experts is that Linux is the most secure operating system. But while it’s the OS of choice for servers, enterprises deploying it on the desktop are few and far between.
And if you did decide that Linux was the way to go, you would still have to decide which distribution of the Linux system to choose, and things get a bit more complicated there. Users are going to want a UI that seems familiar, and you are going to want the most secure OS.
As Moore explains, “Linux has the potential to be the most secure, but requires the user be something of a power user.” So, not for everyone.
Linux distros that target security as a primary feature include [Parrot Linux][5], a Debian-based distro that Moore says provides numerous security-related tools right out of the box.
Of course, an important differentiator is that Linux is open source. The fact that coders can read and comment upon each other’s work might seem like a security nightmare, but it actually turns out to be an important reason why Linux is so secure, says Igor Bidenko, CISO of Simplex Solutions. “Linux is the most secure OS, as its source is open. Anyone can review it and make sure there are no bugs or back doors.”
Wilkinson elaborates that “Linux and Unix-based operating systems have less exploitable security flaws known to the information security world. Linux code is reviewed by the tech community, which lends itself to security: By having that much oversight, there are fewer vulnerabilities, bugs and threats.”
That’s a subtle and perhaps counterintuitive explanation, but by having dozens — or sometimes hundreds — of people read through every line of code in the operating system, the code is actually more robust and the chance of flaws slipping into the wild is diminished. That had a lot to do with why PC World came right out and said Linux is more secure. As Katherine Noyes [explains][6], “Microsoft may tout its large team of paid developers, but it’s unlikely that team can compare with a global base of Linux user-developers around the globe. Security can only benefit through all those extra eyeballs.”
Another factor cited by PC World is Linux’s better user privileges model: Windows users “are generally given administrator access by default, which means they pretty much have access to everything on the system,” according to Noyes’ article. Linux, in contrast, greatly restricts “root.”
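
The difference in privilege models can be made concrete: well-behaved Linux tools often refuse to do routine work with root privileges at all. A minimal sketch of that pattern (the `uid` parameter exists only to make the check testable; by default the process's effective UID is used):

```python
import os

def require_non_root(uid=None):
    """Refuse to proceed when running as root (UID 0 on Linux)."""
    if uid is None:
        uid = os.geteuid()  # effective user ID of the current process (Unix-only)
    if uid == 0:
        raise PermissionError("Refusing to run as root; use an unprivileged account.")
    return uid
```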
Noyes also noted that the diversity possible within Linux environments is a better hedge against attacks than the typical Windows monoculture: There are simply a lot of different distributions of Linux available. And some of them are differentiated in ways that specifically address security concerns. Security researcher Lee Muson of Comparitech offers this suggestion for a Linux distro: “The [Qubes OS][7] is as good a starting point with Linux as you can find right now, with an [endorsement from Edward Snowden][8] massively overshadowing its own extremely humble claims.” Other security experts point to specialized secure Linux distributions such as [Tails Linux][9], designed to run securely and anonymously directly from a USB flash drive or similar external device.
### Building security momentum
Inertia is a powerful force. Although there is clear consensus that Linux is the safest choice for the desktop, there has been no stampede to dump Windows and Mac machines in favor of it. Nonetheless, a small but significant increase in Linux adoption would probably result in safer computing for everyone, because market share loss is one sure way to get Microsoft’s and Apple’s attention. In other words, if enough users switch to Linux on the desktop, Windows and Mac PCs are very likely to become more secure platforms.
--------------------------------------------------------------------------------
via: https://www.computerworld.com/article/3252823/linux/why-linux-is-better-than-windows-or-macos-for-security.html
作者:[Dave Taylor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.computerworld.com/author/Dave-Taylor/
[1]:https://www.esecurityplanet.com/hackers/fully-84-percent-of-hackers-leverage-social-engineering-in-attacks.html
[2]:http://www.1password.com
[3]:https://www.facebook.com/Computerworld/posts/10156160917029680
[4]:https://www.netmarketshare.com/operating-system-market-share.aspx?options=%7B%22filter%22%3A%7B%22%24and%22%3A%5B%7B%22deviceType%22%3A%7B%22%24in%22%3A%5B%22Desktop%2Flaptop%22%5D%7D%7D%5D%7D%2C%22dateLabel%22%3A%22Trend%22%2C%22attributes%22%3A%22share%22%2C%22group%22%3A%22platform%22%2C%22sort%22%3A%7B%22share%22%3A-1%7D%2C%22id%22%3A%22platformsDesktop%22%2C%22dateInterval%22%3A%22Monthly%22%2C%22dateStart%22%3A%222017-02%22%2C%22dateEnd%22%3A%222018-01%22%2C%22segments%22%3A%22-1000%22%7D
[5]:https://www.parrotsec.org/
[6]:https://www.pcworld.com/article/202452/why_linux_is_more_secure_than_windows.html
[7]:https://www.qubes-os.org/
[8]:https://twitter.com/snowden/status/781493632293605376?lang=en
[9]:https://tails.boum.org/about/index.en.html
@ -0,0 +1,47 @@
How DevOps helps deliver cool apps to users
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_wheels.png?itok=KRvpBttl)
A long time ago, in a galaxy far, far away, before DevOps became a mainstream practice, the software development process was excruciatingly slow, tedious, and methodical. By the time an application was ready to be deployed, a ginormous laundry list of changes and fixes to the next major release had already amassed. It took months to go back and work through the entire development cycle to prepare for each new release. Keep in mind that this process would be repeated again and again to deliver updates to users.
Today everything is done instantaneously and in real time, and this concept seems primitive. The mobile revolution has dramatically changed the way we interact with software, and companies that were early adopters of DevOps have totally changed the expectations for software development and deployment.
Consider Facebook: The Facebook mobile app is updated and refreshed every two weeks, like clockwork. This is the new standard, because users now expect software to be constantly fixed and updated. Any company that takes a month or more to deploy new features or simple bug fixes would surely fade into obscurity. If you cannot deliver what users expect, they will find someone who can.
Facebook, along with industry giants such as Amazon, Netflix, Google, and others, have forced enterprises to become faster and more efficient to meet today's customer expectations.
### Why DevOps?
Agile and DevOps are critically important to mobile app development since deployment cycles are lightning-quick. It’s a dense, fast-paced environment in which companies must outpace, out-think, and outmaneuver the competition to survive. In the App Store, the average top ten app remains in that position for only about a month.
To illustrate the old-school waterfall methodology, think back to when you first learned how to drive. Initially, you focused on every individual aspect, using a methodical process: You got in the car; fastened the seat belt; adjusted the seat, mirrors, and steering wheel; started the car; placed your hands at 10 and 2 o’clock, etc. Performing a simple task such as a lane change involved a painstaking, multi-step process executed in a particular order.
DevOps, in contrast, is how you would drive after several years of experience. Everything occurs intuitively and simultaneously, and you can move smoothly from A to B without putting much thought into the process.
The world of mobile apps is too fast-paced for old methods of app development. DevOps is designed to deliver effective, stable apps quickly and without the need for extensive resources. However, you cannot buy DevOps like an ordinary product or service. DevOps is about changing the culture and dynamics of how teams work together.
Large organizations like Amazon and Facebook are not the only ones embracing the DevOps culture; smaller mobile app companies are signing on as well. “Shortening the release cycle while keeping the number of production incidents at a low level, along with the overall cost of failure, is what our customers are looking for,” says Oleg Reshetnyak, head of engineering at mobile product agency [Reinvently][1].
### DevOps: Not _if_, but _when_
In today’s fast-paced business environment, choosing DevOps is like choosing to breathe: You either [do it or die][2].
According to the [U.S. Small Business Administration][3], only 16% of companies starting out today will last an entire generation. Mobile app companies that do not adopt DevOps risk going the way of the dinosaurs. Furthermore, the same study found that organizations that adopt DevOps are twice as likely to exceed their profitability, product, and market share goals.
Innovating more quickly and securely requires three things: cloud, automation, and DevOps. Depending on how you define DevOps, the lines that separate these three factors can be unclear. However, one thing is certain: DevOps unifies everyone within the organization around the common goal of delivering higher-quality software more quickly and with less risk.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/devops-delivers-cool-apps-users
作者:[Stanislav Ivaschenko][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ilyadudkin
[1]:https://reinvently.com/
[2]:https://squadex.com/insights/devops-or-die/
[3]:https://www.sba.gov/
@ -0,0 +1,73 @@
Why Mainframes Aren't Going Away Any Time Soon
======
![](http://www.datacenterknowledge.com/sites/datacenterknowledge.com/files/styles/article_featured_standard/public/ibm%20z13%20mainframe%202015%20getty.jpg?itok=uB8agshi)
IBM's last earnings report showed the [first uptick in revenue in more than five years.][1] Some of that growth was from an expected source, cloud revenue, which was up 24 percent year over year and now accounts for 21 percent of Big Blue's take. Another major boost, however, came from a spike in mainframe revenue. Z series mainframe sales were up 70 percent, the company said.
This may sound somewhat akin to a return to vacuum tube technology in a world where transistors are yesterday's news. In actuality, this is only a sign of the changing face of IT.
**Related:** [One Click and Voilà, Your Entire Data Center is Encrypted][2]
Modern mainframes definitely aren't your father's punch card-driven machines that filled entire rooms. These days, they most often run Linux and have found a renewed place in the data center, where they're being called upon to do a lot of heavy lifting. Want to know where the largest instance of Oracle's database runs? It's on a Linux mainframe. How about the largest implementation of SAP on the planet? Again, Linux on a mainframe.
"Before the advent of Linux on the mainframe, the people who bought mainframes primarily were people who already had them," Leonard Santalucia explained to Data Center Knowledge several months back at the All Things Open conference. "They would just wait for the new version to come out and upgrade to it, because it would run cheaper and faster.
**Related:** [IBM Designs a “Performance Beast” for AI][3]
"When Linux came out, it opened up the door to other customers that never would have paid attention to the mainframe. In fact, probably a good three to four hundred new clients that never had mainframes before got them. They don't have any old mainframes hanging around or ones that were upgraded. These are net new mainframes."
|
||||
|
||||
Although Santalucia is CTO at Vicom Infinity, primarily an IBM reseller, at the conference he was wearing his hat as chairperson of the Linux Foundation's Open Mainframe Project. He was joined in the conversation by John Mertic, the project's director of program management.
|
||||
|
||||
Santalucia knows IBM's mainframes from top to bottom, having spent 27 years at Big Blue, the last eight as CTO for the company's systems and technology group.
|
||||
|
||||
"Because of Linux getting started with it back in 1999, it opened up a lot of doors that were closed to the mainframe," he said. "Beforehand it was just z/OS, z/VM, z/VSE, z/TPF, the traditional operating systems. When Linux came along, it got the mainframe into other areas that it never was, or even thought to be in, because of how open it is, and because Linux on the mainframe is no different than Linux on any other platform."
|
||||
|
||||
The focus on Linux isn't the only motivator behind the upsurge in mainframe use in data centers. Increasingly, enterprises with heavy IT needs are finding many advantages to incorporating modern mainframes into their plans. For example, mainframes can greatly reduce power, cooling, and floor space costs. In markets like New York City, where real estate is at a premium, electricity rates are high, and electricity use is highly taxed to reduce demand, these are significant advantages.
|
||||
|
||||
"There was one customer where we were able to do a consolidation of 25 x86 cores to one core on a mainframe," Santalucia said. "They have several thousand machines that are ten and twenty cores each. So, as far as the eye could see in this data center, [x86 server workloads] could be picked up and moved onto this box that is about the size of a sub-zero refrigerator in your kitchen."
|
||||
|
||||
In addition to saving on physical data center resources, this customer by design would likely see better performance.
|
||||
|
||||
"When you look at the workload as it's running on an x86 system, the math, the application code, the I/O to manage the disk, and whatever else is attached to that system, is all run through the same chip," he explained. "On a Z, there are multiple chip architectures built into the system. There's one specifically just for the application code. If it senses the application needs an I/O or some mathematics, it sends it off to a separate processor to do math or I/O, all dynamically handled by the underlying firmware. Your Linux environment doesn't have to understand that. When it's running on a mainframe, it knows it's running on a mainframe and it will exploit that architecture."
|
||||
|
||||
The operating system knows it's running on a mainframe because when IBM was readying its mainframe for Linux it open sourced something like 75,000 lines of code for Linux distributions to use to make sure their OS's were ready for IBM Z.
|
||||
|
||||
"A lot of times people will hear there's 170 processors on the Z14," Santalucia said. "Well, there's actually another 400 other processors that nobody counts in that count of application chips, because it is taken for granted."
|
||||
|
||||
Mainframes are also resilient when it comes to disaster recovery. Santalucia told the story of an insurance company located in lower Manhattan, within sight of the East River. The company operated a large data center in a basement that among other things housed a mainframe backed up to another mainframe located in Upstate New York. When Hurricane Sandy hit in 2012, the data center flooded, electrocuting two employees and destroying all of the servers, including the mainframe. But the mainframe's workload was restored within 24 hours from the remote backup.
|
||||
|
||||
The x86 machines were all destroyed, and the data was never recovered. But why weren't they also backed up?
|
||||
|
||||
"The reason they didn't do this disaster recovery the same way they did with the mainframe was because it was too expensive to have a mirror of all those distributed servers someplace else," he explained. "With the mainframe, you can have another mainframe as an insurance policy that's lower in price, called Capacity BackUp, and it just sits there idling until something like this happens."
|
||||
|
||||
Mainframes are also evidently tough as nails. Santalucia told another story in which a data center in Japan was struck by an earthquake strong enough to destroy all of its x86 machines. The center's one mainframe fell on its side but continued to work.
|
||||
|
||||
The mainframe also comes with built-in redundancy to guard against situations that would be disastrous with x86 machines.
|
||||
|
||||
"What if a hard disk fails on a node in x86?" the Open Mainframe Project's Mertic asked. "You're taking down a chunk of that cluster potentially. With a mainframe you're not. A mainframe just keeps on kicking like nothing's ever happened."
|
||||
|
||||
Mertic added that a motherboard can be pulled from a running mainframe, and again, "the thing keeps on running like nothing's ever happened."
|
||||
|
||||
So how do you figure out if a mainframe is right for your organization? Simple, says Santalucia. Do the math.
|
||||
|
||||
"The approach should be to look at it from a business, technical, and financial perspective -- not just a financial, total-cost-of-acquisition perspective," he said, pointing out that often, costs associated with software, migration, networking, and people are not considered. The break-even point, he said, comes when at least 20 to 30 servers are being migrated to a mainframe. After that point the mainframe has a financial advantage.
|
||||
|
||||
"You can get a few people running the mainframe and managing hundreds or thousands of virtual servers," he added. "If you tried to do the same thing on other platforms, you'd find that you need significantly more resources to maintain an environment like that. Seven people at ADP handle the 8,000 virtual servers they have, and they need seven only in case somebody gets sick.
|
||||
|
||||
"If you had eight thousand servers on x86, even if they're virtualized, do you think you could get away with seven?"
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datacenterknowledge.com/hardware/why-mainframes-arent-going-away-any-time-soon
|
||||
|
||||
作者:[Christine Hall][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.datacenterknowledge.com/archives/author/christine-hall
|
||||
[1]:http://www.datacenterknowledge.com/ibm/mainframe-sales-fuel-growth-ibm
|
||||
[2]:http://www.datacenterknowledge.com/design/one-click-and-voil-your-entire-data-center-encrypted
|
||||
[3]:http://www.datacenterknowledge.com/design/ibm-designs-performance-beast-ai
|
@ -0,0 +1,41 @@
Gathering project requirements using the Open Decision Framework
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY)

It's no secret that clear, concise, and measurable requirements lead to more successful projects. A study about large scale projects by [McKinsey & Company in conjunction with the University of Oxford][1] revealed that "on average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted." The research also showed that some of the causes for this failure were "fuzzy business objectives, out-of-sync stakeholders, and excessive rework."

Business analysts often find themselves constructing these requirements through ongoing conversations. To do this, they must engage multiple stakeholders and ensure that engaged participants provide clear business objectives. This leads to less rework and more projects with a higher rate of success.

And they can do it in an open and inclusive way.

### A framework for success

One tool for increasing project success rate is the [Open Decision Framework][2]. The Open Decision Framework is a resource that can help users make more effective decisions in organizations that embrace [open principles][3]. The framework stresses three primary principles: being transparent, being inclusive, and being customer-centric.

**Transparent**. Many times, developers and product designers assume they know how stakeholders use a particular tool or piece of software. But these assumptions are often incorrect and lead to misconceptions about what stakeholders actually need. Practicing transparency when having discussions with developers and business owners is imperative. Development teams need to see not only the "sunny day" scenario but also the challenges that stakeholders face with certain tools or processes. Ask questions such as: "What steps must be done manually?" and "Is this tool performing as you expect?" This provides a shared understanding of the problem and a common baseline for discussion.

**Inclusive**. It is vitally important for business analysts to look at body language and visual cues when gathering requirements. If someone is sitting with arms crossed or rolling their eyes, then it's a clear indication that they do not feel heard. A BA must encourage open communication by reaching out to those that don't feel heard and giving them the opportunity to be heard. Prior to starting the session, lay down ground rules that make the place safe for all to speak their opinions and to share their thoughts. Listen to the feedback provided and respond politely when feedback is offered. Diverse opinions and collaborative problem solving will bring exciting ideas to the session.

**Customer-centric**. The first step to being customer-centric is to recognize the customer. Who is benefiting from this change, update, or development? Early in the project, conduct a stakeholder mapping to help determine the key stakeholders, their roles in the project, and the ways they fit into the big picture. Involving the right customers and assuring that their needs are met will lead to more successful requirements being identified, more realistic (real-life) tests being conducted, and, ultimately, a successful delivery.

When your requirement sessions are transparent, inclusive, and customer-centric, you'll gather better requirements. And when you use the [Open Decision Framework][4] for running those sessions, participants feel more involved and empowered, and they deliver more accurate and complete requirements. In other words:

**Transparent + Inclusive + Customer-Centric = Better Requirements = Successful Projects**

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/18/2/constructing-project-requirements

作者:[Tracy Buckner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/tracyb
[1]:http://calleam.com/WTPF/?page_id=1445
[2]:https://opensource.com/open-organization/resources/open-decision-framework
[3]:https://opensource.com/open-organization/resources/open-org-definition
[4]:https://opensource.com/open-organization/16/6/introducing-open-decision-framework
@ -0,0 +1,127 @@
Arch Anywhere Is Dead, Long Live Anarchy Linux
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_main.jpg?itok=fyBpTjQW)

Arch Anywhere was a distribution aimed at bringing Arch Linux to the masses. Due to a trademark infringement, Arch Anywhere has been completely rebranded to [Anarchy Linux][1]. And I’m here to say, if you’re looking for a distribution that will enable you to enjoy Arch Linux, a little Anarchy will go a very long way. This distribution is seriously impressive in what it sets out to do and what it achieves. In fact, anyone who previously feared Arch Linux can set those fears aside… because Anarchy Linux makes Arch Linux easy.

Let’s face it; Arch Linux isn’t for the faint of heart. The installation alone will turn off many a new user (and even some seasoned users). That’s where distributions like Anarchy make for an easy bridge to Arch. With a live ISO that can be tested and then installed, Arch becomes as user-friendly as any other distribution.

Anarchy Linux goes a little bit further than that, however. Let’s fire it up and see what it does.

### The installation

The installation of Anarchy Linux isn’t terribly challenging, but it’s also not quite as simple as that of, say, [Ubuntu][2], [Linux Mint][3], or [Elementary OS][4]. Although you can run the installer from within the default graphical desktop environment (Xfce4), it’s still much in the same vein as Arch Linux. In other words, you’re going to have to do a bit of work—all within a text-based installer.

To start, the very first step of the installer (Figure 1) requires you to update the mirror list, which will likely trip up new users.

![Updating the mirror][6]

Figure 1: Updating the mirror list is a necessity for the Anarchy Linux installation.

[Used with permission][7]

From the options, select Download & Rank New Mirrors. Tab down to OK and hit Enter on your keyboard. You can then select the nearest mirror (to your location) and be done with it. The next few installation screens are simple (keyboard layout, language, timezone, etc.). The next screen should surprise many an Arch fan. Anarchy Linux includes an auto partition tool. Select Auto Partition Drive (Figure 2), tab down to Ok, and hit Enter on your keyboard.

![partitioning][9]

Figure 2: Anarchy makes partitioning easy.

[Used with permission][7]

You will then have to select the drive to be used (if you only have one drive, this is only a matter of hitting Enter). Once you’ve selected the drive, choose the filesystem type to be used (ext2/3/4, btrfs, jfs, reiserfs, xfs), tab down to OK, and hit Enter. Next you must choose whether you want to create SWAP space. If you select Yes, you’ll then have to define how much SWAP to use. The next window will stop many new users in their tracks. It asks if you want to use GPT (GUID Partition Table). This is different from the traditional MBR (Master Boot Record) partitioning. GPT is a newer standard and works better with UEFI. If you’ll be working with UEFI, go with GPT; otherwise, stick with the old standby, MBR. Finally select to write the changes to the disk, and your installation can continue.

The next screen that could give new users pause requires the selection of the desired installation. There are five options:

  * Anarchy-Desktop

  * Anarchy-Desktop-LTS

  * Anarchy-Server

  * Anarchy-Server-LTS

  * Anarchy-Advanced

If you want long term support, select Anarchy-Desktop-LTS; otherwise select Anarchy-Desktop (the default), tab down to Ok, and hit Enter on your keyboard. After you select the type of installation, you will get to select your desktop. You can select from five options: Budgie, Cinnamon, GNOME, Openbox, and Xfce4.

Once you’ve selected your desktop, give the machine a hostname, set the root password, create a user, and enable sudo for the new user (if applicable). The next section that will raise the eyebrows of new users is the software selection window (Figure 3). You must go through the various sections and select which software packages to install. Don’t worry; if you miss something, you can always install it later.

![software][11]

Figure 3: Selecting the software you want on your system.

[Used with permission][7]

Once you’ve made your software selections, tab to Install (Figure 4), and hit Enter on your keyboard.

![ready to install][13]

Figure 4: Everything is ready to install.

[Used with permission][7]

Once the installation completes, reboot and enjoy Anarchy.

### Post install

I installed two versions of Anarchy—one with Budgie and one with GNOME. Both performed quite well; however, you might be surprised to see that the version of GNOME installed is decked out with a dock. In fact, comparing the desktops side-by-side, they do a good job of resembling one another (Figure 5).

![GNOME and Budgie][15]

Figure 5: GNOME is on the right, Budgie is on the left.

[Used with permission][7]

My guess is that you’ll find all desktop options for Anarchy configured in such a way to offer a similar look and feel. Of course, the second you click on the bottom left “buttons”, you’ll see those similarities immediately disappear (Figure 6).

![GNOME and Budgie][17]

Figure 6: The GNOME Dash and the Budgie menu are nothing alike.

[Used with permission][7]

Regardless of which desktop you select, you’ll find everything you need to install new applications. Open up your desktop menu of choice and select Packages to search for and install whatever is necessary for you to get your work done.

### Why use Arch Linux without the “Arch”?

This is a valid question. The answer is simple, but revealing. Some users may opt for a distribution like [Arch Linux][18] because they want the feeling of “elitism” that comes with using, say, [Gentoo][19], without having to go through that much hassle. With regard to complexity, Arch rests below Gentoo, which means it’s accessible to more users. However, along with that complexity comes a level of dependability that may not be found on other platforms. So if you’re looking for a Linux distribution with high stability that’s not quite as challenging as Gentoo or Arch to install, Anarchy might be exactly what you want. In the end, you’ll wind up with an outstanding desktop platform that’s easy to work with (and maintain), based on a very highly regarded distribution of Linux.

That’s why you might opt for Arch Linux without the Arch.

Anarchy Linux is one of the finest “user-friendly” takes on Arch Linux I’ve ever had the privilege of using. Without a doubt, if you’re looking for a friendlier version of a rather challenging desktop operating system, you cannot go wrong with Anarchy.

Learn more about Linux through the free ["Introduction to Linux"][20] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/2/arch-anywhere-dead-long-live-anarchy-linux

作者:[Jack Wallen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/jlwallen
[1]:https://anarchy-linux.org/
[2]:https://www.ubuntu.com/
[3]:https://linuxmint.com/
[4]:https://elementary.io/
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_1.jpg?itok=WgHRqFTf (Updating the mirror)
[7]:https://www.linux.com/licenses/category/used-permission
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_2.jpg?itok=D7HkR97t (partitioning)
[10]:/files/images/anarchyinstall3jpg
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_3.jpg?itok=5-9E2u0S (software)
[12]:/files/images/anarchyinstall4jpg
[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_4.jpg?itok=fuSZqtZS (ready to install)
[14]:/files/images/anarchyinstall5jpg
[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_5.jpg?itok=4y9kiC8I (GNOME and Budgie)
[16]:/files/images/anarchyinstall6jpg
[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_6.jpg?itok=fJ7Lmdci (GNOME and Budgie)
[18]:https://www.archlinux.org/
[19]:https://www.gentoo.org/
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -1,89 +0,0 @@
translating---geekpi

How To Turn On/Off Colors For ls Command In Bash On a Linux/Unix
======

How do I turn on or off file name colors (ls command colors) in the bash shell on Linux or Unix-like operating systems?

Most modern Linux distributions and Unix systems come with an alias that defines colors for your files. However, the ls command itself is responsible for displaying color on screen for files, directories, and other file system objects.

By default, color is not used to distinguish types of files. You need to pass the --color option to the ls command on Linux. If you are using OS X or a BSD-based system, pass the -G option to the ls command. The syntax for turning colors on or off is as follows.

#### How to turn off colors for ls command

Type the following command:

```
$ ls --color=none
```

Or just remove the alias with the unalias command:

```
$ unalias ls
```

Please note that the following bash shell aliases are defined to display color with the ls command. Use a combination of the [alias command][1] and [grep command][2] as follows:

```
$ alias | grep ls
```

Sample outputs:

```
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
```

#### How to turn on colors for ls command

Use any one of the following commands:

```
$ ls --color=auto
$ ls --color=tty
```

[Define bash shell aliases][3] if you want:

```
alias ls='ls --color=auto'
```

You can add or remove the ls command alias in the ~/.bash_profile or [~/.bashrc file][4]. Edit the file using a text editor such as the vi command:

```
$ vi ~/.bashrc
```

Append the following code:

```
# my ls command aliases #
alias ls='ls --color=auto'
```

Note that bash does not allow spaces around the `=` in an alias definition.

[Save and close the file in Vi/Vim text editor][5].

#### A note about *BSD/macOS/Apple OS X ls command

Pass the -G option to the ls command to enable colorized output on the {Free,Net,Open}BSD, macOS, and Apple OS X family of Unix operating systems:

```
$ ls -G
```

Sample outputs:

[![How to enable colorized output for the ls command in Mac OS X Terminal][6]][7]

How to enable colorized output for the ls command in Mac OS X Terminal

#### How do I skip colorful ls command output temporarily?

You can always [disable bash shell aliases temporarily][8] using any one of the following syntaxes:

```
\ls
/bin/ls
command ls
'ls'
```
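As a hedged sketch of the ~/.bashrc approach described above (assuming GNU coreutils ls on Linux), you can also guard the alias so it is only defined when ls actually accepts the --color option, which keeps the same rc file usable on systems whose ls lacks it:

```shell
# Sketch for ~/.bashrc: define the color alias only when this ls
# understands --color (GNU ls does; stock BSD ls uses -G instead)
if ls --color=auto -d . >/dev/null 2>&1; then
    alias ls='ls --color=auto'
fi
```

With this guard in place, sourcing the same ~/.bashrc on a system without GNU ls simply leaves ls unaliased instead of producing an "illegal option" error at every invocation.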
#### About the author

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-turn-on-or-off-colors-in-bash/

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html (See Linux/Unix alias command examples for more info)
[2]:https://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/ (See Linux/Unix grep command examples for more info)
[3]:https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html
[4]:https://bash.cyberciti.biz/guide/~/.bashrc
[5]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
[6]:https://www.cyberciti.biz/media/new/faq/2016/01/color-ls-for-Mac-OS-X.jpg
[7]:https://www.cyberciti.biz/faq/apple-mac-osx-terminal-color-ls-output-option/
[8]:https://www.cyberciti.biz/faq/bash-shell-temporarily-disable-an-alias/
[9]:https://twitter.com/nixcraft
[10]:https://facebook.com/nixcraft
[11]:https://plus.google.com/+CybercitiBiz
@ -1,3 +1,5 @@
translating---geekpi

How to use lftp to accelerate ftp/https download speed on Linux/UNIX
======

lftp is a file transfer program. It allows sophisticated FTP, HTTP/HTTPS, and other connections. If the site URL is specified, then lftp will connect to that site; otherwise a connection has to be established with the open command. It is an essential tool for all Linux/Unix command line users. I have already written about [Linux ultra fast command line download accelerators][1] such as Axel and prozilla. lftp is another tool for the same job with more features. lftp can handle seven file access methods:
@ -1,4 +1,4 @@
6 Best Open Source Alternatives to Microsoft Office for Linux
======

**Brief: Looking for Microsoft Office in Linux? Here are the best free and open source alternatives to Microsoft Office for Linux.**
@ -0,0 +1,94 @@
How To Safely Generate A Random Number
======

### Use urandom

Use [urandom][1]. Use [urandom][2]. Use [urandom][3]. Use [urandom][4]. Use [urandom][5]. Use [urandom][6].

### But what about for crypto keys?

Still [urandom][6].
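In practice this really is a one-liner. A minimal shell sketch (assumes a Unix-like system with /dev/urandom and the coreutils `od` tool) that pulls 256 bits of key material straight from the kernel CSPRNG:

```shell
# Read 32 bytes (256 bits) from the kernel CSPRNG -- e.g. raw material
# for an AES-256 key -- and print it as 64 hex characters
head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'
echo
```

Language-level wrappers (Python's `os.urandom`, Go's `crypto/rand`, etc.) read from the same kernel source, so using them amounts to the same thing.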
### Why not {SecureRandom, OpenSSL, haveged, &c}?

These are userspace CSPRNGs. You want to use the kernel’s CSPRNG, because:

  * The kernel has access to raw device entropy.

  * It can promise not to share the same state between applications.

  * A good kernel CSPRNG, like FreeBSD’s, can also promise not to feed you random data before it’s seeded.

Study the last ten years of randomness failures and you’ll read a litany of userspace randomness failures. [Debian’s OpenSSH debacle][7]? Userspace random. Android Bitcoin wallets [repeating ECDSA k’s][8]? Userspace random. Gambling sites with predictable shuffles? Userspace random.

Userspace generators that seed themselves “from uninitialized memory, magical fairy dust and unicorn horns” almost always depend on the kernel’s generator anyways. Even if they don’t, the security of your whole system sure does. **A userspace CSPRNG doesn’t add defense-in-depth; instead, it creates two single points of failure.**

### Doesn’t the man page say to use /dev/random?

You should ignore the man page (but more on this later; stay your pitchforks). Don’t use /dev/random. The distinction between /dev/random and /dev/urandom is a Unix design wart. The man page doesn’t want to admit that, so it invents a security concern that doesn’t really exist. Consider the cryptographic advice in random(4) an urban legend and get on with your life.

### But what if I need real random values, not pseudorandom values?

Both urandom and /dev/random provide the same kind of randomness. Contrary to popular belief, /dev/random doesn’t provide “true random” data. For cryptography, you don’t usually want “true random”.

Both urandom and /dev/random are based on a simple idea. Their design is closely related to that of a stream cipher: a small secret is stretched into an indefinite stream of unpredictable values. Here the secrets are “entropy”, and the stream is “output”.
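The “small secret stretched into a stream” idea can be illustrated with a toy generator built from a hash — purely an illustration of the design, not a vetted construction; real code should just read urandom:

```shell
# Toy sketch of the stream-cipher idea (illustration only, NOT for real use):
# stretch a small "seed" secret into output blocks by hashing seed+counter.
# Same seed -> same stream; each block looks unpredictable without the seed.
seed="small secret"
for counter in 0 1 2 3; do
    printf '%s%d' "$seed" "$counter" | sha256sum | cut -d' ' -f1
done
```

Each iteration emits one 32-byte (64 hex character) block; the real kernel generators work on the same principle but also fold fresh entropy back into the secret over time.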
Only on Linux are /dev/random and urandom still meaningfully different. The Linux kernel CSPRNG rekeys itself regularly (by collecting more entropy). But /dev/random also tries to keep track of how much entropy remains in its kernel pool, and will occasionally go on strike if it decides not enough remains. This design is as silly as I’ve made it sound; it’s akin to AES-CTR blocking based on how much “key” is left in the “keystream”.

If you use /dev/random instead of urandom, your program will unpredictably (or, if you’re an attacker, very predictably) hang when Linux gets confused about how its own RNG works. Using /dev/random will make your programs less stable, but it won’t make them any more cryptographically safe.

### There’s a catch here, isn’t there?

No, but there’s a Linux kernel bug you might want to know about, even though it doesn’t change which RNG you should use.

On Linux, if your software runs immediately at boot, and/or the OS has just been installed, your code might be in a race with the RNG. That’s bad, because if you win the race, there could be a window of time where you get predictable outputs from urandom. This is a bug in Linux, and you need to know about it if you’re building platform-level code for a Linux embedded device.

This is indeed a problem with urandom (and not /dev/random) on Linux. It’s also a [bug in the Linux kernel][9]. But it’s also easily fixed in userland: at boot, seed urandom explicitly. Most Linux distributions have done this for a long time. But don’t switch to a different CSPRNG.

### What about on other operating systems?

FreeBSD and OS X do away with the distinction between urandom and /dev/random; the two devices behave identically. Unfortunately, the man page does a poor job of explaining why this is, and perpetuates the myth that Linux urandom is scary.

FreeBSD’s kernel crypto RNG doesn’t block regardless of whether you use /dev/random or urandom. Unless it hasn’t been seeded, in which case both block. This behavior, unlike Linux’s, makes sense. Linux should adopt it. But if you’re an app developer, this makes little difference to you: Linux, FreeBSD, iOS, whatever: use urandom.

### tl;dr

Use urandom.

### Epilog

[ruby-trunk Feature #9569][10]

> Right now, SecureRandom.random_bytes tries to detect an OpenSSL to use before it tries to detect /dev/urandom. I think it should be the other way around. In both cases, you just need random bytes to unpack, so SecureRandom could skip the middleman (and second point of failure) and just talk to /dev/urandom directly if it’s available.

Resolution:

> /dev/urandom is not suitable to be used to generate directly session keys and other application level random data which is generated frequently.
>
> [the] random(4) [man page] on GNU/Linux [says]…

Thanks to Matthew Green, Nate Lawson, Sean Devlin, Coda Hale, and Alex Balducci for reading drafts of this. Fair warning: Matthew only mostly agrees with me.

--------------------------------------------------------------------------------

via: https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

作者:[Thomas;Erin;Matasano][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://sockpuppet.org/blog
[1]:http://blog.cr.yp.to/20140205-entropy.html
[2]:http://cr.yp.to/talks/2011.09.28/slides.pdf
[3]:http://golang.org/src/pkg/crypto/rand/rand_unix.go
[4]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
[5]:http://stackoverflow.com/a/5639631
[6]:https://twitter.com/bramcohen/status/206146075487240194
[7]:http://research.swtch.com/openssl
[8]:http://arstechnica.com/security/2013/08/google-confirms-critical-android-crypto-flaw-used-in-5700-bitcoin-heist/
[9]:https://factorable.net/weakkeys12.extended.pdf
[10]:https://bugs.ruby-lang.org/issues/9569
@ -1,343 +0,0 @@
|
||||
BriFuture is translating this article
|
||||
|
||||
Let’s Build A Simple Interpreter. Part 1.
|
||||
======
|
||||
|
||||
|
||||
> **"If you don't know how compilers work, then you don't know how computers work. If you're not 100% sure whether you know how compilers work, then you don't know how they work."** -- Steve Yegge
|
||||
|
||||
There you have it. Think about it. It doesn't really matter whether you're a newbie or a seasoned software developer: if you don't know how compilers and interpreters work, then you don't know how computers work. It's that simple.
|
||||
|
||||
So, do you know how compilers and interpreters work? And I mean, are you 100% sure that you know how they work? If you don't. ![][1]
|
||||
|
||||
Or if you don't and you're really agitated about it. ![][2]
|
||||
|
||||
Do not worry. If you stick around and work through the series and build an interpreter and a compiler with me you will know how they work in the end. And you will become a confident happy camper too. At least I hope so. ![][3]
|
||||
|
||||
Why would you study interpreters and compilers? I will give you three reasons.
|
||||
|
||||
1. To write an interpreter or a compiler you have to have a lot of technical skills that you need to use together. Writing an interpreter or a compiler will help you improve those skills and become a better software developer. As well, the skills you will learn are useful in writing any software, not just interpreters or compilers.
|
||||
2. You really want to know how computers work. Often interpreters and compilers look like magic. And you shouldn't be comfortable with that magic. You want to demystify the process of building an interpreter and a compiler, understand how they work, and get in control of things.
|
||||
3. You want to create your own programming language or domain specific language. If you create one, you will also need to create either an interpreter or a compiler for it. Recently, there has been a resurgence of interest in new programming languages. And you can see a new programming language pop up almost every day: Elixir, Go, Rust just to name a few.
|
||||
|
||||
|
||||
|
||||
|
||||
Okay, but what are interpreters and compilers?
|
||||
|
||||
The goal of an **interpreter** or a **compiler** is to translate a source program in some high-level language into some other form. Pretty vague, isn't it? Just bear with me; later in the series you will learn exactly what the source program is translated into.
|
||||
|
||||
At this point you may also wonder what the difference is between an interpreter and a compiler. For the purpose of this series, let's agree that if a translator translates a source program into machine language, it is a **compiler**. If a translator processes and executes the source program without translating it into machine language first, it is an **interpreter**. Visually it looks something like this:
|
||||
|
||||
![][4]
|
||||
|
||||
I hope that by now you're convinced that you really want to study and build an interpreter and a compiler. What can you expect from this series on interpreters?
|
||||
|
||||
Here is the deal. You and I are going to create a simple interpreter for a large subset of [Pascal][5] language. At the end of this series you will have a working Pascal interpreter and a source-level debugger like Python's [pdb][6].
|
||||
|
||||
You might ask, why Pascal? For one thing, it's not a made-up language that I came up with just for this series: it's a real programming language that has many important language constructs. And some old, but useful, CS books use the Pascal programming language in their examples (I understand that that's not a particularly compelling reason to choose a language to build an interpreter for, but I thought it would be nice for a change to learn a non-mainstream language :)
|
||||
|
||||
Here is an example of a factorial function in Pascal that you will be able to interpret with your own interpreter and debug with the interactive source-level debugger that you will create along the way:
|
||||
```
program factorial;

function factorial(n: integer): longint;
begin
    if n = 0 then
        factorial := 1
    else
        factorial := n * factorial(n - 1);
end;

var
    n: integer;

begin
    for n := 0 to 16 do
        writeln(n, '! = ', factorial(n));
end.
```
|
||||
|
||||
The implementation language of the Pascal interpreter will be Python, but you can use any language you want because the ideas presented don't depend on any particular implementation language. Okay, let's get down to business. Ready, set, go!
|
||||
|
||||
You will start your first foray into interpreters and compilers by writing a simple interpreter of arithmetic expressions, also known as a calculator. Today the goal is pretty minimalistic: to make your calculator handle the addition of two single digit integers like **3+5**. Here is the source code for your calculator, sorry, interpreter:
|
||||
|
||||
```
# Token types
#
# EOF (end-of-file) token is used to indicate that
# there is no more input left for lexical analysis
INTEGER, PLUS, EOF = 'INTEGER', 'PLUS', 'EOF'


class Token(object):
    def __init__(self, type, value):
        # token type: INTEGER, PLUS, or EOF
        self.type = type
        # token value: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, '+', or None
        self.value = value

    def __str__(self):
        """String representation of the class instance.

        Examples:
            Token(INTEGER, 3)
            Token(PLUS, '+')
        """
        return 'Token({type}, {value})'.format(
            type=self.type,
            value=repr(self.value)
        )

    def __repr__(self):
        return self.__str__()


class Interpreter(object):
    def __init__(self, text):
        # client string input, e.g. "3+5"
        self.text = text
        # self.pos is an index into self.text
        self.pos = 0
        # current token instance
        self.current_token = None

    def error(self):
        raise Exception('Error parsing input')

    def get_next_token(self):
        """Lexical analyzer (also known as scanner or tokenizer)

        This method is responsible for breaking a sentence
        apart into tokens. One token at a time.
        """
        text = self.text

        # is self.pos index past the end of the self.text ?
        # if so, then return EOF token because there is no more
        # input left to convert into tokens
        if self.pos > len(text) - 1:
            return Token(EOF, None)

        # get a character at the position self.pos and decide
        # what token to create based on the single character
        current_char = text[self.pos]

        # if the character is a digit then convert it to
        # integer, create an INTEGER token, increment self.pos
        # index to point to the next character after the digit,
        # and return the INTEGER token
        if current_char.isdigit():
            token = Token(INTEGER, int(current_char))
            self.pos += 1
            return token

        if current_char == '+':
            token = Token(PLUS, current_char)
            self.pos += 1
            return token

        self.error()

    def eat(self, token_type):
        # compare the current token type with the passed token
        # type and if they match then "eat" the current token
        # and assign the next token to the self.current_token,
        # otherwise raise an exception.
        if self.current_token.type == token_type:
            self.current_token = self.get_next_token()
        else:
            self.error()

    def expr(self):
        """expr -> INTEGER PLUS INTEGER"""
        # set current token to the first token taken from the input
        self.current_token = self.get_next_token()

        # we expect the current token to be a single-digit integer
        left = self.current_token
        self.eat(INTEGER)

        # we expect the current token to be a '+' token
        op = self.current_token
        self.eat(PLUS)

        # we expect the current token to be a single-digit integer
        right = self.current_token
        self.eat(INTEGER)
        # after the above call the self.current_token is set to
        # EOF token

        # at this point INTEGER PLUS INTEGER sequence of tokens
        # has been successfully found and the method can just
        # return the result of adding two integers, thus
        # effectively interpreting client input
        result = left.value + right.value
        return result


def main():
    while True:
        try:
            # To run under Python3 replace 'raw_input' call
            # with 'input'
            text = raw_input('calc> ')
        except EOFError:
            break
        if not text:
            continue
        interpreter = Interpreter(text)
        result = interpreter.expr()
        print(result)


if __name__ == '__main__':
    main()
```
|
||||
|
||||
|
||||
Save the above code into a file named calc1.py or download it directly from [GitHub][7]. Before you start digging deeper into the code, run the calculator on the command line and see it in action. Play with it! Here is a sample session on my laptop (if you want to run the calculator under Python3 you will need to replace raw_input with input):
|
||||
```
$ python calc1.py
calc> 3+4
7
calc> 3+5
8
calc> 3+9
12
calc>
```
|
||||
|
||||
For your simple calculator to work properly without throwing an exception, your input needs to follow certain rules:
|
||||
|
||||
* Only single digit integers are allowed in the input
|
||||
* The only arithmetic operation supported at the moment is addition
|
||||
* No whitespace characters are allowed anywhere in the input
|
||||
|
||||
|
||||
|
||||
Those restrictions are necessary to make the calculator simple. Don't worry, you'll make it pretty complex pretty soon.
|
||||
|
||||
Okay, now let's dive in and see how your interpreter works and how it evaluates arithmetic expressions.
|
||||
|
||||
When you enter an expression 3+5 on the command line your interpreter gets a string "3+5". In order for the interpreter to actually understand what to do with that string it first needs to break the input "3+5" into components called **tokens**. A **token** is an object that has a type and a value. For example, for the string "3" the type of the token will be INTEGER and the corresponding value will be integer 3.
|
||||
|
||||
The process of breaking the input string into tokens is called **lexical analysis**. So, the first step your interpreter needs to do is read the input of characters and convert it into a stream of tokens. The part of the interpreter that does it is called a **lexical analyzer** , or **lexer** for short. You might also encounter other names for the same component, like **scanner** or **tokenizer**. They all mean the same: the part of your interpreter or compiler that turns the input of characters into a stream of tokens.
|
||||
|
||||
The method get_next_token of the Interpreter class is your lexical analyzer. Every time you call it, you get the next token created from the input of characters passed to the interpreter. Let's take a closer look at the method itself and see how it actually does its job of converting characters into tokens. The input is stored in the variable text that holds the input string and pos is an index into that string (think of the string as an array of characters). pos is initially set to 0 and points to the character '3'. The method first checks whether the character is a digit and if so, it increments pos and returns a token instance with the type INTEGER and the value set to the integer value of the string '3', which is an integer 3:
|
||||
|
||||
![][8]
|
||||
|
||||
The pos now points to the '+' character in the text. The next time you call the method, it tests if a character at the position pos is a digit and then it tests if the character is a plus sign, which it is. As a result the method increments pos and returns a newly created token with the type PLUS and value '+':
|
||||
|
||||
![][9]
|
||||
|
||||
The pos now points to character '5'. When you call the get_next_token method again the method checks if it's a digit, which it is, so it increments pos and returns a new INTEGER token with the value of the token set to integer 5: ![][10]
|
||||
|
||||
Because the pos index is now past the end of the string "3+5" the get_next_token method returns the EOF token every time you call it:
|
||||
|
||||
![][11]
|
||||
|
||||
Try it out and see for yourself how the lexer component of your calculator works:
|
||||
```
>>> from calc1 import Interpreter
>>>
>>> interpreter = Interpreter('3+5')
>>> interpreter.get_next_token()
Token(INTEGER, 3)
>>>
>>> interpreter.get_next_token()
Token(PLUS, '+')
>>>
>>> interpreter.get_next_token()
Token(INTEGER, 5)
>>>
>>> interpreter.get_next_token()
Token(EOF, None)
>>>
```
|
||||
|
||||
So now that your interpreter has access to the stream of tokens made from the input characters, the interpreter needs to do something with it: it needs to find the structure in the flat stream of tokens it gets from the lexer get_next_token. Your interpreter expects to find the following structure in that stream: INTEGER -> PLUS -> INTEGER. That is, it tries to find a sequence of tokens: integer followed by a plus sign followed by an integer.
|
||||
|
||||
The method responsible for finding and interpreting that structure is expr. This method verifies that the sequence of tokens does indeed correspond to the expected sequence of tokens, i.e. INTEGER -> PLUS -> INTEGER. After it has successfully confirmed the structure, it generates the result by adding the value of the token on the left side of the PLUS to the value of the token on the right side, thus successfully interpreting the arithmetic expression you passed to the interpreter.
|
||||
|
||||
The expr method itself uses the helper method eat to verify that the token type passed to the eat method matches the current token type. After matching the passed token type the eat method gets the next token and assigns it to the current_token variable, thus effectively "eating" the currently matched token and advancing the imaginary pointer in the stream of tokens. If the structure in the stream of tokens doesn't correspond to the expected INTEGER PLUS INTEGER sequence of tokens the eat method throws an exception.
|
||||
|
||||
Let's recap what your interpreter does to evaluate an arithmetic expression:
|
||||
|
||||
* The interpreter accepts an input string, let's say "3+5"
|
||||
* The interpreter calls the expr method to find a structure in the stream of tokens returned by the lexical analyzer get_next_token. The structure it tries to find is of the form INTEGER PLUS INTEGER. After it's confirmed the structure, it interprets the input by adding the values of two INTEGER tokens because it's clear to the interpreter at that point that what it needs to do is add two integers, 3 and 5.
|
||||
|
||||
Congratulate yourself. You've just learned how to build your very first interpreter!
|
||||
|
||||
Now it's time for exercises.
|
||||
|
||||
![][12]
|
||||
|
||||
You didn't think you would just read this article and that would be enough, did you? Okay, get your hands dirty and do the following exercises:
|
||||
|
||||
1. Modify the code to allow multiple-digit integers in the input, for example "12+3"
|
||||
2. Add a method that skips whitespace characters so that your calculator can handle inputs with whitespace characters like " 12 + 3"
|
||||
3. Modify the code and instead of '+' handle '-' to evaluate subtractions like "7-5"
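If you get stuck on the first two exercises, here is one possible sketch of the lexing changes. The function name `tokenize` and the `(type, value)` tuples are my own simplification, not the article's `Token` class, so the snippet can be read on its own:

```python
def tokenize(text):
    """Sketch for exercises 1 and 2: multi-digit integers and whitespace."""
    tokens = []
    pos = 0
    while pos < len(text):
        char = text[pos]
        if char.isspace():          # exercise 2: skip whitespace
            pos += 1
            continue
        if char.isdigit():          # exercise 1: consume a whole integer
            start = pos
            while pos < len(text) and text[pos].isdigit():
                pos += 1
            tokens.append(('INTEGER', int(text[start:pos])))
            continue
        if char == '+':
            tokens.append(('PLUS', char))
            pos += 1
            continue
        raise Exception('Error parsing input')
    tokens.append(('EOF', None))
    return tokens
```

The same two ideas (an inner digit-consuming loop and a whitespace skip) drop straight into the article's get_next_token method.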
|
||||
|
||||
|
||||
|
||||
**Check your understanding**
|
||||
|
||||
1. What is an interpreter?
|
||||
2. What is a compiler?
|
||||
3. What's the difference between an interpreter and a compiler?
|
||||
4. What is a token?
|
||||
5. What is the name of the process that breaks input apart into tokens?
|
||||
6. What is the part of the interpreter that does lexical analysis called?
|
||||
7. What are the other common names for that part of an interpreter or a compiler?
|
||||
|
||||
|
||||
|
||||
Before I finish this article, I really want you to commit to studying interpreters and compilers. And I want you to do it right now. Don't put it on the back burner. Don't wait. If you've skimmed the article, start over. If you've read it carefully but haven't done exercises - do them now. If you've done only some of them, finish the rest. You get the idea. And you know what? Sign the commitment pledge to start learning about interpreters and compilers today!
|
||||
|
||||
|
||||
|
||||
_I, ________, of being sound mind and body, do hereby pledge to commit to studying interpreters and compilers starting today and get to a point where I know 100% how they work!_
|
||||
|
||||
Signature:
|
||||
|
||||
Date:
|
||||
|
||||
![][13]
|
||||
|
||||
Sign it, date it, and put it somewhere where you can see it every day to make sure that you stick to your commitment. And keep in mind the definition of commitment:
|
||||
|
||||
> "Commitment is doing the thing you said you were going to do long after the mood you said it in has left you." -- Darren Hardy
|
||||
|
||||
Okay, that's it for today. In the next article of the mini series you will extend your calculator to handle more arithmetic expressions. Stay tuned.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://ruslanspivak.com/lsbasi-part1/
|
||||
|
||||
作者:[Ruslan Spivak][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://ruslanspivak.com
|
||||
[1]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_i_dont_know.png
|
||||
[2]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_omg.png
|
||||
[3]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_i_know.png
|
||||
[4]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_compiler_interpreter.png
|
||||
[5]:https://en.wikipedia.org/wiki/Pascal_%28programming_language%29
|
||||
[6]:https://docs.python.org/2/library/pdb.html
|
||||
[7]:https://github.com/rspivak/lsbasi/blob/master/part1/calc1.py
|
||||
[8]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer1.png
|
||||
[9]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer2.png
|
||||
[10]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer3.png
|
||||
[11]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer4.png
|
||||
[12]:https://ruslanspivak.com/lsbasi-part1/lsbasi_exercises2.png
|
||||
[13]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_commitment_pledge.png
|
||||
[14]:http://ruslanspivak.com/lsbaws-part1/ (Part 1)
|
||||
[15]:http://ruslanspivak.com/lsbaws-part2/ (Part 2)
|
||||
[16]:http://ruslanspivak.com/lsbaws-part3/ (Part 3)
|
@ -1,161 +0,0 @@
|
||||
Learn your tools: Navigating your Git History
|
||||
============================================================
|
||||
|
||||
Starting a greenfield application every day is nearly impossible, especially in your daily job. In fact, most of us face (somewhat) legacy codebases on a daily basis, and regaining the context of why a feature or a line of code exists in the codebase is very important. This is where `git`, the distributed version control system, is invaluable. Let’s dive in and see how we can use our `git` history and easily navigate through it.
|
||||
|
||||
### Git history
|
||||
|
||||
First and foremost, what is `git` history? As the name says, it is the commit history of a `git` repo. It contains a bunch of commit messages, each with its author’s name, commit hash and date. The easiest way to see the history of a `git` repo is the `git log` command.
|
||||
|
||||
Sidenote: For the purpose of this post, we will use Ruby on Rails’ repo, the `master` branch, because Rails has a very good `git` history, with nice commit messages, references and explanations behind every change. Given the size of the codebase, the age and the number of maintainers, it’s certainly one of the best repositories that I have seen. Of course, I am not saying there are no other repositories built with good `git` practices, but this is one that has caught my eye.
|
||||
|
||||
So back to Rails’ repo. If you run `git log` in the Rails’ repo, you will see something like this:
|
||||
|
||||
```
commit 66ebbc4952f6cfb37d719f63036441ef98149418
Author: Arthur Neves <foo@bar.com>
Date:   Fri Jun 3 17:17:38 2016 -0400

    Dont re-define class SQLite3Adapter on test

    We were declaring in a few tests, which depending of the order load
    will cause an error, as the super class could change.

    see https://github.com/rails/rails/commit/ac1c4e141b20c1067af2c2703db6e1b463b985da#commitcomment-17731383

commit 755f6bf3d3d568bc0af2c636be2f6df16c651eb1
Merge: 4e85538 f7b850e
Author: Eileen M. Uchitelle <foo@bar.com>
Date:   Fri Jun 3 10:21:49 2016 -0400

    Merge pull request #25263 from abhishekjain16/doc_accessor_thread

    [skip ci] Fix grammar

commit f7b850ec9f6036802339e965c8ce74494f731b4a
Author: Abhishek Jain <foo@bar.com>
Date:   Fri Jun 3 16:49:21 2016 +0530

    [skip ci] Fix grammar

commit 4e85538dddf47877cacc65cea6c050e349af0405
Merge: 082a515 cf2158c
Author: Vijay Dev <foo@bar.com>
Date:   Fri Jun 3 14:00:47 2016 +0000

    Merge branch 'master' of github.com:rails/docrails

    Conflicts:
        guides/source/action_cable_overview.md

commit 082a5158251c6578714132e5c4f71bd39f462d71
Merge: 4bd11d4 3bd30d9
Author: Yves Senn <foo@bar.com>
Date:   Fri Jun 3 11:30:19 2016 +0200

    Merge pull request #25243 from sukesan1984/add_i18n_validation_test

    Add i18n_validation_test

commit 4bd11d46de892676830bca51d3040f29200abbfa
Merge: 99d8d45 e98caf8
Author: Arthur Nogueira Neves <foo@bar.com>
Date:   Thu Jun 2 22:55:52 2016 -0400

    Merge pull request #25258 from alexcameron89/master

    [skip ci] Make header bullets consistent in engines.md

commit e98caf81fef54746126d31076c6d346c48ae8e1b
Author: Alex Kitchens <foo@bar.com>
Date:   Thu Jun 2 21:26:53 2016 -0500

    [skip ci] Make header bullets consistent in engines.md
```
|
||||
|
||||
As you can see, `git log` shows the commit hash, the author’s name and email and the date when the commit was created. Of course, `git` being super customisable, it allows you to customise the output format of the `git log` command. Let’s say we want to see just the first line of each commit message; we could run `git log --oneline`, which produces a more compact log:
|
||||
|
||||
```
66ebbc4 Dont re-define class SQLite3Adapter on test
755f6bf Merge pull request #25263 from abhishekjain16/doc_accessor_thread
f7b850e [skip ci] Fix grammar
4e85538 Merge branch 'master' of github.com:rails/docrails
082a515 Merge pull request #25243 from sukesan1984/add_i18n_validation_test
4bd11d4 Merge pull request #25258 from alexcameron89/master
e98caf8 [skip ci] Make header bullets consistent in engines.md
99d8d45 Merge pull request #25254 from kamipo/fix_debug_helper_test
818397c Merge pull request #25240 from matthewd/reloadable-channels
2c5a8ba Don't blank pad day of the month when formatting dates
14ff8e7 Fix debug helper test
```
|
||||
|
||||
To see all of the `git log` options, I recommend checking out the manpage of `git log`, available in your terminal via `man git-log` or `git help log`. A tip: if `git log` feels a bit sparse or complicated to use, or maybe you are just bored, I recommend checking out various `git` GUIs and command line tools. In the past I’ve used [GitX][1], which was very good, but since the command line feels like home to me, after trying [tig][2] I’ve never looked back.
|
||||
|
||||
### Finding Nemo
|
||||
|
||||
So now, since we know the bare minimum of the `git log` command, let’s see how we can explore the history more effectively in our everyday work.
|
||||
|
||||
Let’s say, hypothetically, we suspect an unexpected behaviour in the `String#classify` method and we want to find out how and where it has been implemented.
|
||||
|
||||
One of the first commands that you can use to see where the method is defined is `git grep`. Simply said, this command prints out lines that match a certain pattern. Now, to find the definition of the method, it’s pretty simple - we can grep for `def classify` and see what we get:
|
||||
|
||||
```
➜ git grep 'def classify'
activesupport/lib/active_support/core_ext/string/inflections.rb:  def classify
activesupport/lib/active_support/inflector/methods.rb:  def classify(table_name)
tools/profile:  def classify
```
|
||||
|
||||
Now, although we can already see where our method is defined, we are not sure which line it is on. If we add the `-n` flag to our `git grep` command, `git` will provide the line numbers of the matches:
|
||||
|
||||
```
➜ git grep -n 'def classify'
activesupport/lib/active_support/core_ext/string/inflections.rb:205:  def classify
activesupport/lib/active_support/inflector/methods.rb:186:  def classify(table_name)
tools/profile:112:  def classify
```
|
||||
|
||||
Much better, right? Having the context in mind, we can easily figure out that the method we are looking for lives in `activesupport/lib/active_support/core_ext/string/inflections.rb`, on line 205. The `classify` method, in all of its glory, looks like this:
|
||||
|
||||
```
# Creates a class name from a plural table name like Rails does for table names to models.
# Note that this returns a string and not a class. (To convert to an actual class
# follow +classify+ with +constantize+.)
#
#   'ham_and_eggs'.classify # => "HamAndEgg"
#   'posts'.classify        # => "Post"
def classify
  ActiveSupport::Inflector.classify(self)
end
```
|
||||
|
||||
Although the method we found is the one we usually call on `String`s, it invokes another method with the same name on `ActiveSupport::Inflector`. Having our `git grep` result available, we can easily navigate there, since the second line of the result is `activesupport/lib/active_support/inflector/methods.rb` on line 186. The method that we are looking for is:
|
||||
|
||||
```
# Creates a class name from a plural table name like Rails does for table
# names to models. Note that this returns a string and not a Class (To
# convert to an actual class follow +classify+ with #constantize).
#
#   classify('ham_and_eggs') # => "HamAndEgg"
#   classify('posts')        # => "Post"
#
# Singular names are not handled correctly:
#
#   classify('calculus')     # => "Calculus"
def classify(table_name)
  # strip out any leading schema name
  camelize(singularize(table_name.to_s.sub(/.*\./, ''.freeze)))
end
```
|
||||
|
||||
Boom! Given the size of Rails, finding this should not take us more than 30 seconds with the help of `git grep`.
|
||||
|
||||
### So, what changed last?
|
||||
|
||||
Now that we have the method available, we need to figure out what changes this file has gone through. Since we know the correct file name and line number, we can use `git blame`. This command shows which revision and author last modified each line of a file. Let’s see the latest changes made to this file:
|
||||
|
||||
```
git blame activesupport/lib/active_support/inflector/methods.rb
```
|
||||
|
||||
Whoa! Although we get the last change of every line in the file, we are more interested in the specific method (lines 176 to 189). Let’s add a flag to the `git blame` command that will show the blame of just those lines. Also, we will add the `-s` (suppress) option to skip the author names and the timestamp of the revision (commit) that changed each line:
|
||||
|
||||
```
git blame -L 176,189 -s activesupport/lib/active_support/inflector/methods.rb
9fe8e19a 176) # Creates a class name from a plural table name like Rails does for table
5ea3f284 177) # names to models. Note that this returns a string and not a Class (To
9fe8e19a 178) # convert to an actual class follow +classify+ with #constantize).
51cd6bb8 179) #
6d077205 180) #   classify('ham_and_eggs') # => "HamAndEgg"
9fe8e19a 181) #   classify('posts')        # => "Post"
51cd6bb8 182) #
51cd6bb8 183) # Singular names are not handled correctly:
5ea3f284 184) #
66d6e7be 185) #   classify('calculus')     # => "Calculus"
51cd6bb8 186) def classify(table_name)
51cd6bb8 187)   # strip out any leading schema name
5bb1d4d2 188)   camelize(singularize(table_name.to_s.sub(/.*\./, ''.freeze)))
51cd6bb8 189) end
```
|
||||
|
||||
The output of the `git blame` command now shows all of the file lines and their respective revisions. Now, to see a specific revision, or in other words, what each of those revisions changed, we can use the `git show` command. When supplied a revision hash (like `66d6e7be`) as an argument, it will show you the full revision, with the author’s name, timestamp and the whole changeset in its glory. Let’s see what actually changed in the latest revision that touched line 188:
|
||||
|
||||
```
git show 5bb1d4d2
```
|
||||
|
||||
Whoa! Did you test that? If you didn’t, it’s an awesome [commit][3] by [Schneems][4] that made a very interesting performance optimization by using frozen strings, which makes sense in our current context. But, since we are on this hypothetical debugging session, this doesn’t tell us much about our current problem. So, how can we see what changes our method under investigation has gone through?
|
||||
|
||||
### Searching the logs
|
||||
|
||||
Now we are back to the `git` log. The question is, how can we see all the revisions that the `classify` method has gone through?
|
||||
|
||||
The `git log` command is quite powerful, because it has a rich list of options to apply to it. We can try to see what the `git` log has stored for this file, using the `-p` option, which means: show me the patch for each entry in the `git` log:
|
||||
|
||||
```
git log -p activesupport/lib/active_support/inflector/methods.rb
```
|
||||
|
||||
This will show us a patch for every revision of this file. But, just like before, we are interested in specific lines of the file. Let’s modify the command a bit to show us what we need:
|
||||
|
||||
```
git log -L 176,189:activesupport/lib/active_support/inflector/methods.rb
```
|
||||
|
||||
The `git log` command accepts the `-L` option, which takes the line range and the filename as arguments. The format might be a bit weird for you, but it translates to:
|
||||
|
||||
```
git log -L <start-line>,<end-line>:<path-to-file>
```
|
||||
|
||||
When we run this command, we can see the list of revisions for these lines, which will lead us to the first revision that created the method:
|
||||
|
||||
```
|
||||
commit 51xd6bb829c418c5fbf75de1dfbb177233b1b154
Author: Foo Bar <foo@bar.com>
Date:   Tue Jun 7 19:05:09 2011 -0700

    Refactor

diff --git a/activesupport/lib/active_support/inflector/methods.rb b/activesupport/lib/active_support/inflector/methods.rb
--- a/activesupport/lib/active_support/inflector/methods.rb
+++ b/activesupport/lib/active_support/inflector/methods.rb
@@ -58,0 +135,14 @@
+    # Create a class name from a plural table name like Rails does for table names to models.
+    # Note that this returns a string and not a Class. (To convert to an actual class
+    # follow +classify+ with +constantize+.)
+    #
+    # Examples:
+    #   "egg_and_hams".classify # => "EggAndHam"
+    #   "posts".classify        # => "Post"
+    #
+    # Singular names are not handled correctly:
+    #   "business".classify    # => "Busines"
+    def classify(table_name)
+      # strip out any leading schema name
+      camelize(singularize(table_name.to_s.sub(/.*\./, '')))
+    end
|
||||
```
|
||||
|
||||
Now, look at that - it’s a commit from 2011. Practically, `git` allows us to travel back in time. This is a very good example of why a proper commit message is paramount to regaining context, because from this commit message we cannot really tell how the method came to be. But, on the flip side, you should **never ever** get frustrated about it, because you are looking at someone who basically gives away their time and energy for free, doing open source work.
|
||||
|
||||
Coming back from that tangent, we are not sure how the initial implementation of the `classify` method came to be, given that the first commit is just a refactor. Now, if you are thinking something along the lines of “but maybe, just maybe, the method was not in the line range 176 to 189, and we should look more broadly in the file”, you are very correct. The revision that we saw said “Refactor” in its commit message, which means that the method was already there, but only after that refactor did it start to exist in that line range.
|
||||
|
||||
So, how can we confirm this? Well, believe it or not, `git` comes to the rescue again. The `git log` command accepts the `-S` option, which looks for the code change (additions or deletions) for the specified string as an argument to the command. This means that, if we call `git log -S classify`, we can see all of the commits that changed a line that contains that string.
|
||||
|
||||
If you call this command in the Rails repo, you will first notice that `git` slows down a bit. But when you realize that `git` actually scans all of the revisions in the repo to match the string, it’s actually super fast. Again, the power of `git` at your fingertips. So, to find the first version of the `classify` method, we can run:
|
||||
|
||||
```
|
||||
git log -S 'def classify'
|
||||
```
|
||||
|
||||
This will return all of the revisions where this method has been introduced or changed. If you were following along, the last commit in the log that you will see is:
|
||||
|
||||
```
|
||||
commit db045dbbf60b53dbe013ef25554fd013baf88134
Author: David Heinemeier Hansson <foo@bar.com>
Date:   Wed Nov 24 01:04:44 2004 +0000

    Initial

    git-svn-id: http://svn-commit.rubyonrails.org/rails/trunk@4 5ecf4fe2-1ee6-0310-87b1-e25e094e27de
|
||||
```
|
||||
|
||||
How cool is that? It’s the initial commit to Rails, made in an `svn` repository by DHH! This means that `classify` has been around since the beginning of (Rails) time. Now, to see the commit with all of its changes, we can run:
|
||||
|
||||
```
|
||||
git show db045dbbf60b53dbe013ef25554fd013baf88134
|
||||
```
|
||||
|
||||
Great, we got to the bottom of it. Now, by using the output from `git log -S 'def classify'` you can track the changes that have happened to this method, combined with the power of the `git log -L` command.
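Incidentally, `-S` has a sibling worth knowing: `git log -G <regex>` matches a regular expression against added and removed diff lines, while `-S` only reports commits that change the *number of occurrences* of the string. So a commit that merely moves or re-indents a matching line is found by `-G` but skipped by `-S`. A self-contained sketch in a throwaway repository (the repository, file and commit messages are invented for the demo):

```shell
# Sketch: `git log -S` (pickaxe) vs `git log -G` (regex on diff lines).
# Demo repo; everything below is made up for illustration.
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo && cd demo
printf 'def classify(name)\n  name\nend\n' > methods.rb
git add methods.rb
git -c user.name=demo -c user.email=demo@example.com commit -qm "Introduce classify"
# Re-indent the definition: the occurrence count of "def classify" stays at 1,
# so -S will NOT report this commit, but -G will (the diff lines match).
printf '  def classify(name)\n    name\n  end\n' > methods.rb
git -c user.name=demo -c user.email=demo@example.com commit -aqm "Reindent classify"
s_out=$(git log --format=%s -S 'def classify')   # only "Introduce classify"
g_out=$(git log --format=%s -G 'def classify')   # both commits, newest first
printf 'S: %s\nG: %s\n' "$s_out" "$g_out"
```

Knowing which of the two you need saves a lot of noise when tracking a method through a large history.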
|
||||
|
||||
### Until next time
|
||||
|
||||
Sure, we didn’t really fix any bugs, because we were trying out some `git` commands and following the evolution of the `classify` method. But nevertheless, `git` is a very powerful tool that we all must learn to use and embrace. I hope this article gave you a little bit more knowledge of how useful `git` is.
|
||||
|
||||
What are your favourite (or most effective) ways of navigating through the `git` history?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Backend engineer, interested in Ruby, Go, microservices, building resilient architectures and solving challenges at scale. I coach at Rails Girls in Amsterdam, maintain a list of small gems and often contribute to Open Source.
|
||||
This is where I write about software development, programming languages and everything else that interests me.
|
||||
|
||||
------
|
||||
|
||||
via: https://ieftimov.com/learn-your-tools-navigating-git-history
|
||||
|
||||
作者:[Ilija Eftimov ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://ieftimov.com/
|
||||
[1]:http://gitx.frim.nl/
|
||||
[2]:https://github.com/jonas/tig
|
||||
[3]:https://github.com/rails/rails/commit/5bb1d4d288d019e276335465d0389fd2f5246bfd
|
||||
[4]:https://twitter.com/schneems
|
289
sources/tech/20161106 Myths about -dev-urandom.md
Normal file
@ -0,0 +1,289 @@
|
||||
Myths about /dev/urandom
|
||||
======
|
||||
|
||||
There are a few things about /dev/urandom and /dev/random that are repeated again and again. Still, they are false.
|
||||
|
||||
I'm mostly talking about reasonably recent Linux systems, not other UNIX-like systems.
|
||||
|
||||
### /dev/urandom is insecure. Always use /dev/random for cryptographic purposes.
|
||||
|
||||
Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
|
||||
|
||||
### /dev/urandom is a pseudo random number generator, a PRNG, while /dev/random is a “true” random number generator.
|
||||
|
||||
Fact: Both /dev/urandom and /dev/random are using the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They only differ in very few ways that have nothing to do with “true” randomness.
|
||||
|
||||
### /dev/random is unambiguously the better choice for cryptography. Even if /dev/urandom were comparably secure, there's no reason to choose the latter.
|
||||
|
||||
Fact: /dev/random has a very nasty problem: it blocks.
|
||||
|
||||
### But that's good! /dev/random gives out exactly as much randomness as it has entropy in its pool. /dev/urandom will give you insecure random numbers, even though it has long run out of entropy.
|
||||
|
||||
Fact: No. Even disregarding issues like availability and subsequent manipulation by users, the issue of entropy “running low” is a straw man. About 256 bits of entropy are enough to get computationally secure numbers for a long, long time.
|
||||
|
||||
And the fun only starts here: how does /dev/random know how much entropy there is available to give out? Stay tuned!
|
||||
|
||||
### But cryptographers always talk about constant re-seeding. Doesn't that contradict your last point?
|
||||
|
||||
Fact: You got me! Kind of. It is true, the random number generator is constantly re-seeded using whatever entropy the system can lay its hands on. But that has (partly) other reasons.
|
||||
|
||||
Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
|
||||
|
||||
### That's all good and nice, but even the man page for /dev/(u)random contradicts you! Does anyone who knows about this stuff actually agree with you?
|
||||
|
||||
Fact: No, it really doesn't. It only seems to imply that /dev/urandom is insecure for cryptographic use, unless you really understand all that cryptographic jargon.
|
||||
|
||||
The man page does recommend the use of /dev/random in some cases (it doesn't hurt, in my opinion, but is not strictly necessary), but it also recommends /dev/urandom as the device to use for “normal” cryptographic use.
|
||||
|
||||
And while appeal to authority is usually nothing to be proud of, in cryptographic issues you're generally right to be careful and try to get the opinion of a domain expert.
|
||||
|
||||
And yes, quite a few experts share my view that /dev/urandom is the go-to solution for your random number needs in a cryptography context on UNIX-like systems. Obviously, their opinions influenced mine, not the other way around.
|
||||
|
||||
Hard to believe, right? I must certainly be wrong! Well, read on and let me try to convince you.
|
||||
|
||||
I tried to keep it out, but I fear there are two preliminaries to be taken care of, before we can really tackle all those points.
|
||||
|
||||
Namely, what is randomness, or better: what kind of randomness am I talking about here?
|
||||
|
||||
And, even more important, I'm really not being condescending. I have written this document to have a thing to point to, when this discussion comes up again. More than 140 characters. Without repeating myself again and again. Being able to hone the writing and the arguments itself, benefitting many discussions in many venues.
|
||||
|
||||
And I'm certainly willing to hear differing opinions. I'm just saying that it won't be enough to state that /dev/urandom is bad. You need to identify the points you're disagreeing with and engage them.
|
||||
|
||||
### You're saying I'm stupid!
|
||||
|
||||
Emphatically no!
|
||||
|
||||
Actually, I used to believe that /dev/urandom was insecure myself, a few years ago. And it's something you and me almost had to believe, because all those highly respected people on Usenet, in web forums and today on Twitter told us. Even the man page seems to say so. Who were we to dismiss their convincing argument about “entropy running low”?
|
||||
|
||||
This misconception isn't so rampant because people are stupid, it is because with a little knowledge about cryptography (namely some vague idea what entropy is) it's very easy to be convinced of it. Intuition almost forces us there. Unfortunately intuition is often wrong in cryptography. So it is here.
|
||||
|
||||
### True randomness
|
||||
|
||||
What does it mean for random numbers to be “truly random”?
|
||||
|
||||
I don't want to dive too deeply into that issue, because it quickly gets philosophical. Discussions have been known to unravel fast, because everyone can wax on about their favorite model of randomness, without paying attention to anyone else, or even making themselves understood.
|
||||
|
||||
I believe that the “gold standard” for “true randomness” are quantum effects. Observe a photon pass through a semi-transparent mirror. Or not. Observe some radioactive material emit alpha particles. It's the best idea we have when it comes to randomness in the world. Other people might reasonably believe that those effects aren't truly random. Or even that there is no randomness in the world at all. Let a million flowers bloom.
|
||||
|
||||
Cryptographers often circumvent this philosophical debate by disregarding what it means for randomness to be “true”. They care about unpredictability. As long as nobody can get any information about the next random number, we're fine. And when you're talking about random numbers as a prerequisite in using cryptography, that's what you should aim for, in my opinion.
|
||||
|
||||
Anyway, I don't care much about those “philosophically secure” random numbers, as I like to think of your “true” random numbers.
|
||||
|
||||
### Two kinds of security, one that matters
|
||||
|
||||
But let's assume you've obtained those “true” random numbers. What are you going to do with them?
|
||||
|
||||
You print them out, frame them and hang them on your living-room wall, to revel in the beauty of a quantum universe? That's great, and I certainly understand.
|
||||
|
||||
Wait, what? You're using them? For cryptographic purposes? Well, that spoils everything, because now things get a bit ugly.
|
||||
|
||||
You see, your truly-random, quantum effect blessed random numbers are put into some less respectable, real-world tarnished algorithms.
|
||||
|
||||
Because almost all of the cryptographic algorithms we use do not hold up to **information-theoretic security**. They can “only” offer **computational security**. The two exceptions that come to my mind are Shamir's Secret Sharing and the One-time pad. And while the first one may be a valid counterpoint (if you actually intend to use it), the latter is utterly impractical.
|
||||
|
||||
But all those algorithms you know about, AES, RSA, Diffie-Hellman, Elliptic curves, and all those crypto packages you're using, OpenSSL, GnuTLS, Keyczar, your operating system's crypto API, these are only computationally secure.
|
||||
|
||||
What's the difference? While information-theoretically secure algorithms are secure, period, those other algorithms cannot guarantee security against an adversary with unlimited computational power who's trying all possibilities for keys. We still use them because it would take all the computers in the world taken together longer than the universe has existed, so far. That's the level of “insecurity” we're talking about here.
|
||||
|
||||
Unless some clever guy breaks the algorithm itself, using much less computational power. Even computational power achievable today. That's the big prize every cryptanalyst dreams about: breaking AES itself, breaking RSA itself and so on.
|
||||
|
||||
So now we're at the point where you don't trust the inner building blocks of the random number generator, insisting on “true randomness” instead of “pseudo randomness”. But then you're using those “true” random numbers in algorithms that you so despise that you didn't want them near your random number generator in the first place!
|
||||
|
||||
Truth is, when state-of-the-art hash algorithms are broken, or when state-of-the-art block ciphers are broken, it doesn't matter that you get “philosophically insecure” random numbers because of them. You've got nothing left to securely use them for anyway.
|
||||
|
||||
So just use those computationally-secure random numbers for your computationally-secure algorithms. In other words: use /dev/urandom.
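On a Linux shell, that is as simple as reading the device. A sketch generating a 256-bit key as hex (the variable name and the hex encoding here are just one way to do it):

```shell
# Sketch: draw a 256-bit (32-byte) key straight from /dev/urandom on Linux.
# od formats the raw bytes as hex; tr strips the whitespace od inserts.
key=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$key"
```

The same one-liner serves for session tokens, salts, or anything else that needs unpredictable bytes.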
|
||||
|
||||
### Structure of Linux's random number generator
|
||||
|
||||
#### An incorrect view
|
||||
|
||||
Chances are, your idea of the kernel's random number generator is something similar to this:
|
||||
|
||||
![image: mythical structure of the kernel's random number generator][1]
|
||||
|
||||
“True randomness”, albeit possibly skewed and biased, enters the system and its entropy is precisely counted and immediately added to an internal entropy counter. After de-biasing and whitening it's entering the kernel's entropy pool, where both /dev/random and /dev/urandom get their random numbers from.
|
||||
|
||||
The “true” random number generator, /dev/random, takes those random numbers straight out of the pool, if the entropy count is sufficient for the number of requested numbers, decreasing the entropy counter, of course. If not, it blocks until new entropy has entered the system.
|
||||
|
||||
The important thing in this narrative is that /dev/random basically yields the numbers that have been input by those randomness sources outside, after only the necessary whitening. Nothing more, just pure randomness.
|
||||
|
||||
/dev/urandom, so the story goes, is doing the same thing. Except when there isn't sufficient entropy in the system. In contrast to /dev/random, it does not block, but gets “low quality random” numbers from a pseudorandom number generator (conceded, a cryptographically secure one) that is running alongside the rest of the random number machinery. This CSPRNG is just seeded once (or maybe every now and then, it doesn't matter) with “true randomness” from the randomness pool, but you can't really trust it.
|
||||
|
||||
In this view, that seems to be in a lot of people's minds when they're talking about random numbers on Linux, avoiding /dev/urandom is plausible.
|
||||
|
||||
Because either there is enough entropy left, then you get the same you'd have gotten from /dev/random. Or there isn't, then you get those low-quality random numbers from a CSPRNG that almost never saw high-entropy input.
|
||||
|
||||
Devilish, right? Unfortunately, also utterly wrong. In reality, the internal structure of the random number generator looks like this.
|
||||
|
||||
#### A better simplification
|
||||
|
||||
##### Before Linux 4.8
|
||||
|
||||
![image: actual structure of the kernel's random number generator before Linux 4.8][2] This is a pretty rough simplification. In fact, there isn't just one, but three pools filled with entropy. One primary pool, and one for /dev/random and /dev/urandom each, feeding off the primary pool. Those three pools all have their own entropy counts, but the counts of the secondary pools (for /dev/random and /dev/urandom) are mostly close to zero, and “fresh” entropy flows from the primary pool when needed, decreasing its entropy count. Also there is a lot of mixing and re-injecting outputs back into the system going on. All of this is far more detail than is necessary for this document.
|
||||
|
||||
See the big difference? The CSPRNG is not running alongside the random number generator, filling in for those times when /dev/urandom wants to output something, but has nothing good to output. The CSPRNG is an integral part of the random number generation process. There is no /dev/random handing out “good and pure” random numbers straight from the whitener. Every randomness source's input is thoroughly mixed and hashed inside the CSPRNG, before it emerges as random numbers, either via /dev/urandom or /dev/random.
|
||||
|
||||
Another important difference is that there is no entropy counting going on here, but estimation. The amount of entropy some source is giving you isn't something obvious that you just get, along with the data. It has to be estimated. Please note that when your estimate is too optimistic, the dearly held property of /dev/random, that it's only giving out as many random numbers as available entropy allows, is gone. Unfortunately, it's hard to estimate the amount of entropy.
|
||||
|
||||
The Linux kernel uses only the arrival times of events to estimate their entropy. It does that by interpolating polynomials of those arrival times, to calculate “how surprising” the actual arrival time was, according to the model. Whether this polynomial interpolation model is the best way to estimate entropy is an interesting question. There is also the problem that internal hardware restrictions might influence those arrival times. The sampling rates of all kinds of hardware components may also play a role, because it directly influences the values and the granularity of those event arrival times.
|
||||
|
||||
In the end, to the best of our knowledge, the kernel's entropy estimate is pretty good. Which means it's conservative. People argue about how good it really is, but that issue is far above my head. Still, if you insist on never handing out random numbers that are not “backed” by sufficient entropy, you might be nervous here. I'm sleeping sound because I don't care about the entropy estimate.
|
||||
|
||||
So to make one thing crystal clear: both /dev/random and /dev/urandom are fed by the same CSPRNG. Only the behavior when their respective pool runs out of entropy, according to some estimate, differs: /dev/random blocks, while /dev/urandom does not.
|
||||
|
||||
##### From Linux 4.8 onward
|
||||
|
||||
In Linux 4.8 the equivalency between /dev/urandom and /dev/random was given up. Now /dev/urandom output does not come from an entropy pool, but directly from a CSPRNG.
|
||||
|
||||
![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
|
||||
|
||||
We will see shortly why that is not a security problem.
|
||||
|
||||
### What's wrong with blocking?
|
||||
|
||||
Have you ever waited for /dev/random to give you more random numbers? Generating a PGP key inside a virtual machine maybe? Connecting to a web server that's waiting for more random numbers to create an ephemeral session key?
|
||||
|
||||
That's the problem. It inherently runs counter to availability. So your system is not working. It's not doing what you built it to do. Obviously, that's bad. You wouldn't have built it if you didn't need it.
|
||||
|
||||
I'm working on safety-related systems in factory automation. Can you guess what the main reason for failures of safety systems is? Manipulation. Simple as that. Something about the safety measure bugged the worker. It took too much time, was too inconvenient, whatever. People are very resourceful when it comes to finding “unofficial solutions”.
|
||||
|
||||
But the problem runs even deeper: people don't like to be stopped in their ways. They will devise workarounds, concoct bizarre machinations to just get it running. People who don't know anything about cryptography. Normal people.
|
||||
|
||||
Why not patch out the call to `random()`? Why not have some guy in a web forum tell you how to use some strange ioctl to increase the entropy counter? Why not switch off SSL altogether?
|
||||
|
||||
In the end you just educate your users to do foolish things that compromise your system's security without you ever knowing about it.
|
||||
|
||||
It's easy to disregard availability, usability or other nice properties. Security trumps everything, right? So better be inconvenient, unavailable or unusable than feign security.
|
||||
|
||||
But that's a false dichotomy. Blocking is not necessary for security. As we saw, /dev/urandom gives you the same kind of random numbers as /dev/random, straight out of a CSPRNG. Use it!
|
||||
|
||||
### The CSPRNGs are alright
|
||||
|
||||
But now everything sounds really bleak. If even the high-quality random numbers from /dev/random are coming out of a CSPRNG, how can we use them for high-security purposes?
|
||||
|
||||
It turns out, that “looking random” is the basic requirement for a lot of our cryptographic building blocks. If you take the output of a cryptographic hash, it has to be indistinguishable from a random string so that cryptographers will accept it. If you take a block cipher, its output (without knowing the key) must also be indistinguishable from random data.
|
||||
|
||||
If anyone could gain an advantage over brute-force breaking of cryptographic building blocks, using some perceived weakness of those CSPRNGs over “true” randomness, then it's the same old story: you don't have anything left. Block ciphers, hashes, everything is based on the same mathematical foundations as CSPRNGs. So don't be afraid.
|
||||
|
||||
### What about entropy running low?
|
||||
|
||||
It doesn't matter.
|
||||
|
||||
The underlying cryptographic building blocks are designed such that an attacker cannot predict the outcome, as long as there was enough randomness (a.k.a. entropy) in the beginning. A usual lower limit for “enough” may be 256 bits. No more.
|
||||
|
||||
Considering that we were pretty hand-wavey about the term “entropy” in the first place, it feels right. As we saw, the kernel's random number generator cannot even precisely know the amount of entropy entering the system. Only an estimate. And whether the model that's the basis for the estimate is good enough is pretty unclear, too.
|
||||
|
||||
### Re-seeding
|
||||
|
||||
But if entropy is so unimportant, why is fresh entropy constantly being injected into the random number generator?
|
||||
|
||||
djb [remarked][4] that more entropy actually can hurt.
|
||||
|
||||
First, it cannot hurt. If you've got more randomness just lying around, by all means use it!
|
||||
|
||||
There is another reason why re-seeding the random number generator every now and then is important:
|
||||
|
||||
Imagine an attacker knows everything about your random number generator's internal state. That's the most severe security compromise you can imagine, the attacker has full access to the system.
|
||||
|
||||
You've totally lost now, because the attacker can compute all future outputs from this point on.
|
||||
|
||||
But over time, with more and more fresh entropy being mixed into it, the internal state gets more and more random again. So that such a random number generator's design is kind of self-healing.
|
||||
|
||||
But this is injecting entropy into the generator's internal state, it has nothing to do with blocking its output.
|
||||
|
||||
### The random and urandom man page
|
||||
|
||||
The man page for /dev/random and /dev/urandom is pretty effective when it comes to instilling fear into the gullible programmer's mind:
|
||||
|
||||
> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
|
||||
|
||||
Such an attack is not known in “unclassified literature”, but the NSA certainly has one in store, right? And if you're really concerned about this (you should!), please use /dev/random, and all your problems are solved.
|
||||
|
||||
The truth is, while there may be such an attack available to secret services, evil hackers or the Bogeyman, it's just not rational to take it as a given.
|
||||
|
||||
And even if you need that peace of mind, let me tell you a secret: no practical attacks on AES, SHA-3 or other solid ciphers and hashes are known in the “unclassified” literature, either. Are you going to stop using those, as well? Of course not!
|
||||
|
||||
Now the fun part: “use /dev/random instead”. While /dev/urandom does not block, its random number output comes from the very same CSPRNG as /dev/random's.
|
||||
|
||||
If you really need information-theoretically secure random numbers (you don't!), and that's about the only reason why the entropy of the CSPRNGs input matters, you can't use /dev/random, either!
|
||||
|
||||
The man page is silly, that's all. At least it tries to redeem itself with this:
|
||||
|
||||
> If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.
|
||||
|
||||
Fine. I think it's unnecessary, but if you want to use /dev/random for your “long-lived keys”, by all means, do so! You'll be waiting a few seconds typing stuff on your keyboard, that's no problem.
|
||||
|
||||
But please don't make connections to a mail server hang forever, just because you “wanted to be safe”.
|
||||
|
||||
### Orthodoxy
|
||||
|
||||
The view espoused here is certainly a tiny minority opinion on the Internet. But ask a real cryptographer, and you'll be hard pressed to find one who sympathizes much with the blocking /dev/random.
|
||||
|
||||
Let's take [Daniel Bernstein][5], better known as djb:
|
||||
|
||||
> Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
|
||||
>
|
||||
> * (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
|
||||
>
|
||||
> * (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
|
||||
>
|
||||
>
|
||||
|
||||
>
|
||||
> For a cryptographer this doesn't even pass the laugh test.
|
||||
|
||||
Or [Thomas Pornin][6], who is probably one of the most helpful persons I've ever encountered on the Stackexchange sites:
|
||||
|
||||
> The short answer is yes. The long answer is also yes. /dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithm, which is not your case (you would know it).
|
||||
>
|
||||
> The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred;
|
||||
|
||||
Or maybe [Thomas Ptacek][7], who is not a real cryptographer in the sense of designing cryptographic algorithms or building cryptographic systems, but still the founder of a well-reputed security consultancy that's doing a lot of penetration testing and breaking bad cryptography:
|
||||
|
||||
> Use urandom. Use urandom. Use urandom. Use urandom. Use urandom. Use urandom.
|
||||
|
||||
### Not everything is perfect
|
||||
|
||||
/dev/urandom isn't perfect. The problems are twofold:
|
||||
|
||||
On Linux, unlike FreeBSD, /dev/urandom never blocks. Remember that the whole security rested on some starting randomness, a seed?
|
||||
|
||||
Linux's /dev/urandom happily gives you not-so-random numbers before the kernel has even had a chance to gather entropy. When is that? At system start, while booting the computer.
|
||||
|
||||
FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
|
||||
|
||||
In the meantime, Linux has implemented a new syscall, originally introduced by OpenBSD as getentropy(2): getrandom(2). This syscall does the right thing: blocking until it has gathered enough initial entropy, and never blocking after that point. Of course, it is a syscall, not a character device, so it isn't as easily accessible from shell or script languages. It is available from Linux 3.17 onward.
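From a script, the easiest route to getrandom(2) that I know of is a language binding; Python, for instance, exposes it as `os.getrandom` (assuming Python 3.6+ running on Linux 3.17+, which you may need to adjust for your environment):

```shell
# Sketch: reaching getrandom(2) from a shell script via Python's os.getrandom.
# Assumes python3 >= 3.6 is installed and the kernel is Linux >= 3.17.
rnd=$(python3 -c 'import os; print(os.getrandom(16).hex())')
echo "$rnd"
```

Unlike reading /dev/urandom directly, this blocks until the kernel has gathered its initial entropy, and never blocks after that.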
|
||||
|
||||
On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read the next time the machine boots. So you carry over the randomness from the last run of the machine.
|
||||
|
||||
Obviously that isn't as good as if you let the shutdown scripts write out the seed, because in that case there would have been much more time to gather entropy. The advantage is obviously that this does not depend on a proper shutdown with execution of the shutdown scripts (in case the computer crashes, for example).
|
||||
|
||||
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
|
||||
|
||||
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.
|
||||
|
||||
But the solution still isn't using /dev/random everywhere, but properly seeding each and every virtual machine after cloning, restoring a checkpoint, whatever.
|
||||
|
||||
### tldr;
|
||||
|
||||
Just use /dev/urandom!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2uo.de/myths-about-urandom/
|
||||
|
||||
作者:[Thomas Hühn][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2uo.de/
|
||||
[1]:https://www.2uo.de/myths-about-urandom/structure-no.png
|
||||
[2]:https://www.2uo.de/myths-about-urandom/structure-yes.png
|
||||
[3]:https://www.2uo.de/myths-about-urandom/structure-new.png
|
||||
[4]:http://blog.cr.yp.to/20140205-entropy.html
|
||||
[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
|
||||
[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
|
||||
[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
|
@ -0,0 +1,295 @@
Prevent Files And Folders From Accidental Deletion Or Modification In Linux
======

![](https://www.ostechnix.com/wp-content/uploads/2017/02/Prevent-Files-And-Folders-From-Accidental-Deletion-Or-Modification-In-Linux-720x340.jpg)

Sometimes I accidentally “SHIFT+DELETE” my data. Yes, I am an idiot who doesn’t double-check what exactly I am going to delete. And I am too dumb or lazy to back up the data. The result? Data loss! The files are gone in a fraction of a second. I do it every now and then. If you’re anything like me, I’ve got good news. There is a simple yet useful command-line utility called **“chattr”** (short for **Ch**ange **Attr**ibute) which can be used to prevent files and folders from accidental deletion or modification on Unix-like systems. It applies or removes certain attributes on a file or folder in your Linux system, so that users can’t delete or modify the files and folders, either accidentally or intentionally, even as the root user. Sounds useful, doesn’t it?

In this brief tutorial, we are going to see how to use chattr in practice in order to prevent files and folders from accidental deletion in Linux.

### Prevent Files And Folders From Accidental Deletion Or Modification In Linux

By default, chattr is available in most modern Linux operating systems. Let us see some examples.

The default syntax of the chattr command is:
```
chattr [operator] [switch] [filename]
```

chattr has the following operators:

  * The operator **‘+’** causes the selected attributes to be added to the existing attributes of the files;
  * The operator **‘-’** causes them to be removed;
  * The operator **‘=’** causes them to be the only attributes that the files have.

chattr supports a number of attributes, namely **aAcCdDeijPsStTu**. Each letter applies a particular attribute to a file:

  * **a** – append only,
  * **A** – no atime updates,
  * **c** – compressed,
  * **C** – no copy on write,
  * **d** – no dump,
  * **D** – synchronous directory updates,
  * **e** – extent format,
  * **i** – immutable,
  * **j** – data journalling,
  * **P** – project hierarchy,
  * **s** – secure deletion,
  * **S** – synchronous updates,
  * **t** – no tail-merging,
  * **T** – top of directory hierarchy,
  * **u** – undeletable.

In this tutorial, we are going to discuss the usage of two attributes, namely **a** and **i**, which are used to prevent the deletion of files and folders. That's our topic for today, isn't it? Indeed!

### Prevent files from accidental deletion

Let me create a file called **file.txt** in my current directory.
```
$ touch file.txt
```

Now, I am going to apply the **“i”** attribute, which makes the file immutable. It means you can’t delete or modify the file, even if you’re the file owner or the root user.
```
$ sudo chattr +i file.txt
```

You can check the file attributes using the command:
```
$ lsattr file.txt
```

**Sample output:**
```
----i---------e---- file.txt
```
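
If you want to test for the flag from a script rather than by eye, you can grep the lsattr output. A small sketch (the `is_immutable` helper and the file name are my own, not from the original article):

```shell
# Return success if the file carries the immutable ('i') flag.
is_immutable() {
    lsattr -d -- "$1" 2>/dev/null | awk '{print $1}' | grep -q i
}

touch demo.txt
if is_immutable demo.txt; then echo "immutable"; else echo "mutable"; fi
rm -f demo.txt
```

A freshly created file has no immutable flag, so this prints "mutable" until you run `chattr +i` on it.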

Now, try to remove the file, either as a normal user or with sudo privileges.
```
$ rm file.txt
```

**Sample output:**
```
rm: cannot remove 'file.txt': Operation not permitted
```

Let me try with the sudo command:
```
$ sudo rm file.txt
```

**Sample output:**
```
rm: cannot remove 'file.txt': Operation not permitted
```

Let us try to append some contents to the text file.
```
$ echo 'Hello World!' >> file.txt
```

**Sample output:**
```
bash: file.txt: Operation not permitted
```

Try with **sudo** privileges:
```
$ sudo echo 'Hello World!' >> file.txt
```

**Sample output:**
```
bash: file.txt: Operation not permitted
```

As you noticed in the above outputs, we can’t delete or modify the file, even as the root user or the file owner.

To revoke the attribute, just use the **“-i”** switch as shown below.
```
$ sudo chattr -i file.txt
```

Now the immutable attribute has been removed. You can delete or modify the file as usual.
```
$ rm file.txt
```

Similarly, you can restrict directories from accidental deletion or modification, as described in the next section.

### Prevent folders from accidental deletion and modification

Create a directory called dir1 and a file called file.txt inside this directory.
```
$ mkdir dir1 && touch dir1/file.txt
```

Now, make this directory and its contents (file.txt) immutable using the command:
```
$ sudo chattr -R +i dir1
```

Where,

  * **-R** – makes dir1 and its contents immutable recursively,
  * **+i** – makes the directory immutable.

Now, try to delete the directory, either as a normal user or using sudo.
```
$ rm -fr dir1

$ sudo rm -fr dir1
```

You will get the following output:
```
rm: cannot remove 'dir1/file.txt': Operation not permitted
```

Try to append some contents to the file using the “echo” command. Did you make it? Of course, you couldn’t!

To revoke the attributes, run:
```
$ sudo chattr -R -i dir1
```

Now, you can delete or modify the contents of this directory as usual.

### Prevent files and folders from accidental deletion, but allow append operation

We now know how to prevent files and folders from accidental deletion and modification. Next, we are going to prevent files and folders from deletion, but allow writing to the file in append mode only. That means you can’t edit or modify the existing data in the file, rename the file, or delete the file. You can only open the file for writing in append mode.

To set the append-mode attribute on a file or directory, we do the following.

**For files:**
```
$ sudo chattr +a file.txt
```

**For directories:**
```
$ sudo chattr -R +a dir1
```

A file/folder with the ‘a’ attribute set can only be opened in append mode for writing.

Add some contents to the file(s) to check whether it works or not.
```
$ echo 'Hello World!' >> file.txt

$ echo 'Hello World!' >> dir1/file.txt
```

Check the file contents using the cat command:
```
$ cat file.txt

$ cat dir1/file.txt
```

**Sample output:**
```
Hello World!
```

You will see that you can now append contents; i.e. the files and folders are modifiable in append mode only.

Let us try to delete the file or folder now.
```
$ rm file.txt
```

**Output:**
```
rm: cannot remove 'file.txt': Operation not permitted
```

Let us try to delete the folder:
```
$ rm -fr dir1/
```

**Sample output:**
```
rm: cannot remove 'dir1/file.txt': Operation not permitted
```

To remove the attributes, run the following commands:

**For files:**
```
$ sudo chattr -a file.txt
```

**For directories:**
```
$ sudo chattr -R -a dir1/
```

Now, you can delete or modify the files and folders as usual.

For more details, refer to the man pages.
```
man chattr
```

### Wrapping up

Data protection is one of the main jobs of a system administrator. There are numerous free and commercial data-protection tools available on the market. Luckily, we have this built-in tool that helps us protect data from accidental deletion or modification. chattr can be used as an additional tool to protect important system files and data on your Linux system.

And, that’s all for today. Hope this helps. I will be back soon with another useful article. Until then, stay tuned with OSTechNix!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
@ -1,97 +0,0 @@
translating---geekpi

Monitoring network bandwidth with iftop command
======

System admins are required to monitor the IT infrastructure to make sure that everything is up and running. Besides the performance of hardware, i.e. memory, disks, CPUs, etc., we also have to monitor our network. We need to make sure that our network is not being over-utilised, otherwise our applications and websites might not work. In this tutorial, we are going to learn to use the iftop utility.

(**Recommended read**: [**Resource monitoring using Nagios**][1], [**Tools for checking system info**][2], [**Important logs to monitor**][3])

iftop is a network monitoring utility that provides real-time bandwidth monitoring. iftop measures the total data moving in and out of individual socket connections, i.e. it captures packets moving in and out via the network adapter and then sums those up to find the bandwidth being utilized.

## Installation on Debian/Ubuntu

Iftop is available in the default repositories of Debian/Ubuntu and can simply be installed using the command below:

```
$ sudo apt-get install iftop
```

## Installation on RHEL/CentOS using yum

For installing iftop on CentOS or RHEL, we need to enable the EPEL repository. To enable the repository, run the following on your terminal:

### RHEL/CentOS 7

```
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
```

### RHEL/CentOS 6 (64 Bit)

```
$ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
```

### RHEL/CentOS 6 (32 Bit)

```
$ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
```

After the EPEL repository has been installed, we can install iftop by running:

```
$ yum install iftop
```

This will install the iftop utility on your system. We will now use it to monitor our network.

## Using IFTOP

You can start using iftop by opening your terminal window and typing:

```
$ iftop
```

![network monitoring][5]

You will now be presented with the network activity happening on your machine. You can also use

```
$ iftop -n
```

which will present the network information on your screen, but with '-n' you will not see the hostnames corresponding to the IP addresses, only the IP addresses themselves. This option saves some of the bandwidth that would otherwise go into resolving IP addresses to names.

Now we can also see all the commands that can be used with iftop. Once you have run iftop, press the 'h' key on the keyboard to see all the commands that can be used with it.

![network monitoring][7]

To monitor a particular network interface, we can pass the interface to iftop:

```
$ iftop -i enp0s3
```

You can check the further options that can be used with iftop in the help, as mentioned above. These examples should cover most of what you need to monitor your network.
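
If iftop isn't available, the raw counters it builds on can also be read straight from /proc/net/dev. A minimal sketch of the idea (my own example, not from the article):

```shell
# Sum received bytes across all interfaces, sample twice one second
# apart, and print the approximate inbound rate in bytes per second.
rx_total() {
    awk -F: 'NR>2 {split($2,f," "); sum+=f[1]} END {print sum+0}' /proc/net/dev
}
a=$(rx_total); sleep 1; b=$(rx_total)
echo "RX rate: $((b - a)) B/s"
```

Unlike iftop, this only gives totals per interface, not a per-connection breakdown.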

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/monitoring-network-bandwidth-iftop-command/

作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/installing-configuring-nagios-server/
[2]:http://linuxtechlab.com/commands-system-hardware-info/
[3]:http://linuxtechlab.com/important-logs-monitor-identify-issues/
[4]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=661%2C424
[5]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-1.jpg?resize=661%2C424
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=663%2C416
[7]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-help.jpg?resize=663%2C416
@ -1,131 +0,0 @@
How to Use the ZFS Filesystem on Ubuntu Linux
======
There are a myriad of [filesystems available for Linux][1]. So why try a new one? They all work, right? They're not all the same, and some have very distinct advantages, like ZFS.

### Why ZFS

ZFS is awesome. It's a truly modern filesystem with built-in capabilities that make sense for handling loads of data.

Now, if you're considering ZFS for your ultra-fast NVMe SSD, it might not be the best option. It's slower than others. That's okay, though. It was designed to store huge amounts of data and keep it safe.

ZFS eliminates the need to set up traditional RAID arrays. Instead, you can create ZFS pools, and even add drives to those pools at any time. ZFS pools behave almost exactly like RAID, but the functionality is built right into the filesystem.

ZFS also acts as a replacement for LVM, allowing you to partition and manage partitions on the fly without the need to handle things at a lower level and worry about the associated risks.

It's also a copy-on-write (CoW) filesystem. Without getting too technical, that means that ZFS protects your data from gradual corruption over time. ZFS creates checksums of files and lets you roll back those files to a previous working version.

### Installing ZFS

![Install ZFS on Ubuntu][2]

Installing ZFS on Ubuntu is very easy, though the process is slightly different for Ubuntu LTS and the latest releases.

**Ubuntu 16.04 LTS**
```
sudo apt install zfs
```

**Ubuntu 17.04 and Later**
```
sudo apt install zfsutils
```

After you have the utilities installed, you can create ZFS drives and partitions using the tools provided by ZFS.

### Creating Pools

![Create ZFS Pool][3]

Pools are the rough equivalent of RAID in ZFS. They are flexible and can easily be manipulated.

#### RAID0

RAID0 just pools your drives into what behaves like one giant drive. It can increase your drive speeds, but if one of your drives fails, you're probably going to be out of luck.

To achieve RAID0 with ZFS, just create a plain pool.
```
sudo zpool create your-pool /dev/sdc /dev/sdd
```

#### RAID1/MIRROR

You can achieve RAID1 functionality with the `mirror` keyword in ZFS. RAID1 creates a 1-to-1 copy of your drive. This means that your data is constantly backed up. It also increases read performance. Of course, you use half of your storage for the duplication.
```
sudo zpool create your-pool mirror /dev/sdc /dev/sdd
```

#### RAID5/RAIDZ1

ZFS implements RAID5 functionality as RAIDZ1. RAIDZ1 requires at least three drives; with three drives, it lets you keep 2/3 of your storage space by writing parity data onto the remaining 1/3 of the drive space. If one drive fails, the array will remain online, but the failed drive should be replaced ASAP.
```
sudo zpool create your-pool raidz1 /dev/sdc /dev/sdd /dev/sde
```

#### RAID6/RAIDZ2

RAID6 is almost exactly like RAID5, but it requires at least four drives instead of three. It doubles the parity data to allow up to two drives to fail without bringing the array down.
```
sudo zpool create your-pool raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf
```

#### RAID10/Striped Mirror

RAID10 aims to be the best of both worlds by providing both a speed increase and data redundancy with striping. You need an even number of drives, at least four, and will only have access to half of the space. You can create a pool in RAID10 by specifying two mirrors in the same pool command.
```
sudo zpool create your-pool mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
```

### Working With Pools

![ZFS pool Status][4]

There are also some management commands you can use to work with your pools once you've created them. First, check the status of your pools.
```
sudo zpool status
```

#### Updates

When you update ZFS, you'll need to update your pools, too. Your pools will notify you of any updates when you check their status. To update a pool, run the following command.
```
sudo zpool upgrade your-pool
```

You can also upgrade them all.
```
sudo zpool upgrade -a
```

#### Adding Drives

You can also add drives to your pools at any time. Tell `zpool` the name of the pool and the location of the drive, and it'll take care of everything.
```
sudo zpool add your-pool /dev/sdx
```

### Other Thoughts

![ZFS in File Browser][5]

ZFS creates a directory in the root filesystem for your pools. You can browse to them by name using your GUI file manager or the CLI.
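
Pools also contain datasets, which is where day-to-day management with the `zfs` command happens. A brief sketch, assuming the `your-pool` pool from above (the dataset and snapshot names are examples of mine):

```shell
# Create a dataset (mounted at /your-pool/data by default),
# snapshot it, and roll back to that snapshot later if needed.
sudo zfs create your-pool/data
sudo zfs snapshot your-pool/data@before-change
sudo zfs rollback your-pool/data@before-change
sudo zfs list -t all
```

Snapshots are cheap thanks to copy-on-write, so taking one before risky changes costs almost nothing.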

ZFS is awesomely powerful, and there are plenty of other things that you can do with it, too, but these are the basics. It is an excellent filesystem for working with loads of storage, even if it is just a RAID array of hard drives that you use for your files. ZFS works excellently with NAS systems, too.

Regardless of how stable and robust ZFS is, it's always best to back up your data when you implement something new on your hard drives.

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/use-zfs-filesystem-ubuntu-linux/

作者:[Nick Congleton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/nickcongleton/
[1]:https://www.maketecheasier.com/best-linux-filesystem-for-ssd/
[2]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-install.jpg (Install ZFS on Ubuntu)
[3]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-create-pool.jpg (Create ZFS Pool)
[4]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-pool-status.jpg (ZFS pool Status)
[5]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-pool-open.jpg (ZFS in File Browser)
@ -1,3 +1,6 @@
Translating by erialin

Linux Gunzip Command Explained with Examples
======

@ -0,0 +1,133 @@
Top 7 open source project management tools for agile teams
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)

Opensource.com has surveyed the landscape of popular open source project management tools. We've done this before—but this year we've added a twist. This time, we're looking specifically at tools that support [agile][1] methodology, including related practices such as [Scrum][2], Lean, and Kanban.

The growth of interest in and use of agile is why we've decided to focus on these types of tools this year. A majority of organizations—71%—say they [are using agile approaches][3] at least sometimes. In addition, agile projects are [28% more successful][4] than projects managed with traditional approaches.

For this roundup, we looked at the project management tools we covered in [2014][5], [2015][6], and [2016][7] and plucked the ones that support agile, then did research to uncover any additions or changes. Whether your organization is already using agile or is one of the many planning to adopt agile approaches in 2018, one of these seven open source project management tools may be exactly what you're looking for.

### MyCollab

![](https://opensource.com/sites/default/files/u128651/mycollab_kanban-board.png)

[MyCollab][8] is a suite of three collaboration modules for small and midsize businesses: project management, customer relationship management (CRM), and document creation and editing software. There are two licensing options: a commercial "ultimate" edition, which is faster and can be run on-premises or in the cloud, and the open source "community edition," which is the version we're interested in here.

The community edition doesn't have a cloud option and is slower, due to not using query cache, but provides essential project management features, including tasks, issues management, activity stream, roadmap view, and a Kanban board for agile teams. While it doesn't have a separate mobile app, it works on mobile devices as well as Windows, MacOS, Linux, and Unix computers.

The latest version of MyCollab is 5.4.10 and the source code is available on [GitHub][9]. It is licensed under AGPLv3 and requires a Java runtime and MySQL stack to operate. It's available for [download][10] for Windows, Linux, Unix, and MacOS.

### Odoo

![](https://opensource.com/sites/default/files/u128651/odoo_projects_screenshots_01a.gif)

[Odoo][11] is more than project management software; it's a full, integrated business application suite that includes accounting, human resources, website & e-commerce, inventory, manufacturing, sales management (CRM), and other tools.

The free and open source community edition has limited [features][12] compared to the paid enterprise suite. Its project management application includes a Kanban-style task-tracking view for agile teams, which was updated in its latest release, Odoo 11.0, to include a progress bar and animation for tracking project status. The project management tool also includes Gantt charts, tasks, issues, graphs, and more. Odoo has a thriving [community][13] and provides [user guides][14] and other training resources.

It is licensed under GPLv3 and requires Python and PostgreSQL. It is available for [download][15] for Windows, Linux, and Red Hat Package Manager, as a [Docker][16] image, and as source on [GitHub][17].

### OpenProject

![](https://opensource.com/sites/default/files/u128651/openproject-screenshot-agile-scrum.png)

[OpenProject][18] is a powerful open source project management tool that is notable for its ease of use and rich project management and team collaboration features.

Its modules support project planning, scheduling, roadmap and release planning, time tracking, cost reporting, budgeting, bug tracking, and agile and Scrum. Its agile features, including creating stories, prioritizing sprints, and tracking tasks, are integrated with OpenProject's other modules.

OpenProject is licensed under GPLv3 and its source code is available on [GitHub][19]. Its latest version, 7.3.2, is available for [download][20] for Linux; you can learn more about installing and configuring it in Birthe Lindenthal's article "[Getting started with OpenProject][21]."

### OrangeScrum

![](https://opensource.com/sites/default/files/u128651/orangescrum_kanban.png)

As you would expect from its name, [OrangeScrum][22] supports agile methodologies, specifically with a Scrum task board and Kanban-style workflow view. It's geared for smaller organizations—freelancers, agencies, and small and midsize businesses.

The open source version offers many of the [features][23] in OrangeScrum's paid editions, including a mobile app, resource utilization, and progress tracking. Other features, including Gantt charts, time logs, invoicing, and client management, are available as paid add-ons, and the paid editions include a cloud option, which the community version does not.

OrangeScrum is licensed under GPLv3 and is based on the CakePHP framework. It requires Apache, PHP 5.3 or higher, and MySQL 4.1 or higher, and works on Windows, Linux, and MacOS. Its latest release, 1.6.1, is available for [download][24], and its source code can be found on [GitHub][25].

### ]project-open[

![](https://opensource.com/sites/default/files/u128651/projectopen_dashboard.png)

[]project-open[][26] is a dual-licensed enterprise project management tool, meaning that its core is open source, and some additional features are available in commercially licensed modules. According to the project's [comparison][27] of the community and enterprise editions, the open source core offers plenty of features for small and midsize organizations.

]project-open[ supports [agile][28] projects with Scrum and Kanban support, as well as classic Gantt/waterfall projects and hybrid or mixed projects.

The application is licensed under GPL and the [source code][29] is accessible via CVS. ]project-open[ is available as [installers][26] for both Linux and Windows, but also in cloud images and as a virtual appliance.

### Taiga

![](https://opensource.com/sites/default/files/u128651/taiga_screenshot.jpg)

[Taiga][30] is an open source project management platform that focuses on Scrum and agile development, with features including a Kanban board, tasks, sprints, issues, a backlog, and epics. Other features include ticket management, multi-project support, wiki pages, and third-party integrations.

It also offers a free mobile app for iOS, Android, and Windows devices, and provides import tools that make it easy to migrate from other popular project management applications.

Taiga is free for public projects, with no restrictions on either the number of projects or the number of users. For private projects, there is a wide range of [paid plans][31] available under a "freemium" model, but, notably, the software's features are the same, no matter which type of plan you have.

Taiga is licensed under GNU Affero GPLv3, and requires a stack that includes Nginx, Python, and PostgreSQL. The latest release, [3.1.0 Perovskia atriplicifolia][32], is available on [GitHub][33].

### Tuleap

![](https://opensource.com/sites/default/files/u128651/tuleap-scrum-prioritized-backlog.png)

[Tuleap][34] is an application lifecycle management (ALM) platform that aims to manage projects for every type of team—small, midsize, large, waterfall, agile, or hybrid—but its support for agile teams is prominent. Notably, it offers support for Scrum, Kanban, sprints, tasks, reports, continuous integration, backlogs, and more.

Other [features][35] include issue tracking, document tracking, collaboration tools, and integration with Git, SVN, and Jenkins, all of which make it an appealing choice for open source software development projects.

Tuleap is licensed under GPLv2. More information, including Docker and CentOS downloads, is available on their [Get Started][36] page. You can also get the source code for its latest version, 9.14, on Tuleap's [Git][37].

The trouble with this type of list is that it's usually out of date as soon as it's published. Are you using an open source project management tool that supports agile that we forgot to include? Or do you have feedback on the ones we mentioned? Please leave a comment below.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/agile-project-management-tools

作者:[Opensource.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com
[1]:http://agilemanifesto.org/principles.html
[2]:https://opensource.com/resources/scrum
[3]:https://www.pmi.org/-/media/pmi/documents/public/pdf/learning/thought-leadership/pulse/pulse-of-the-profession-2017.pdf
[4]:https://www.pwc.com/gx/en/actuarial-insurance-services/assets/agile-project-delivery-confidence.pdf
[5]:https://opensource.com/business/14/1/top-project-management-tools-2014
[6]:https://opensource.com/business/15/1/top-project-management-tools-2015
[7]:https://opensource.com/business/16/3/top-project-management-tools-2016
[8]:https://community.mycollab.com/
[9]:https://github.com/MyCollab/mycollab
[10]:https://www.mycollab.com/ce-registration/
[11]:https://www.odoo.com/
[12]:https://www.odoo.com/page/editions
[13]:https://www.odoo.com/page/community
[14]:https://www.odoo.com/documentation/user/11.0/
[15]:https://www.odoo.com/page/download
[16]:https://hub.docker.com/_/odoo/
[17]:https://github.com/odoo/odoo
[18]:https://www.openproject.org/
[19]:https://github.com/opf/openproject
[20]:https://www.openproject.org/download-and-installation/
[21]:https://opensource.com/article/17/11/how-install-and-use-openproject
[22]:https://www.orangescrum.org/
[23]:https://www.orangescrum.org/compare-orangescrum
[24]:http://www.orangescrum.org/free-download
[25]:https://github.com/Orangescrum/orangescrum/
[26]:http://www.project-open.com/en/list-installers
[27]:http://www.project-open.com/en/products/editions.html
[28]:http://www.project-open.com/en/project-type-agile
[29]:http://www.project-open.com/en/developers-cvs-checkout
[30]:https://taiga.io/
[31]:https://tree.taiga.io/support/subscription-and-plans/payment-process-faqs/#q.-what-s-about-custom-plans-private-projects-with-more-than-25-members-?
[32]:https://blog.taiga.io/taiga-perovskia-atriplicifolia-release-310.html
[33]:https://github.com/taigaio
[34]:https://www.tuleap.org/
[35]:https://www.tuleap.org/features/project-management
[36]:https://www.tuleap.org/get-started
[37]:https://tuleap.net/plugins/git/tuleap/tuleap/stable
@ -1,3 +1,4 @@

Translating by stevenzdg988

How To Find The Installed Proprietary Packages In Arch Linux
======

![](https://www.ostechnix.com/wp-content/uploads/2018/01/Absolutely-Proprietary-720x340.jpg)

@ -1,201 +0,0 @@

translating by Flowsnow

Ansible: the Automation Framework That Thinks Like a Sysadmin
======

I've written about and trained folks on various DevOps tools through the years, and although they're awesome, it's obvious that most of them are designed from the mind of a developer. There's nothing wrong with that, because approaching configuration management programmatically is the whole point. Still, it wasn't until I started playing with Ansible that I felt like it was something a sysadmin quickly would appreciate.

Part of that appreciation comes from the way Ansible communicates with its client computers—namely, via SSH. As sysadmins, you're all very familiar with connecting to computers via SSH, so right from the word "go", you have a better understanding of Ansible than the other alternatives.

With that in mind, I'm planning to write a few articles exploring how to take advantage of Ansible. It's a great system, but when I was first exposed to it, it wasn't clear how to start. It's not that the learning curve is steep. In fact, if anything, the problem was that I didn't really have that much to learn before starting to use Ansible, and that made it confusing. For example, if you don't have to install an agent program (Ansible doesn't have any software installed on the client computers), how do you start?

### Getting to the Starting Line

The reason Ansible was so difficult for me at first is that it's so flexible about how the server/client relationship is configured that I didn't know what I was supposed to do. The truth is that Ansible doesn't really care how you set up the SSH system; it will utilize whatever configuration you have. There are just a couple things to consider:

1. Ansible needs to connect to the client computer via SSH.

2. Once connected, Ansible needs to elevate privilege so it can configure the system, install packages and so on.

Unfortunately, those two considerations really open a can of worms. Connecting to a remote computer and elevating privilege is a scary thing to allow. For some reason, it feels less vulnerable when you simply install an agent on the remote computer and let Chef or Puppet handle privilege escalation. It's not that Ansible is any less secure, but rather, it puts the security decisions in your hands.

Next I'm going to list a bunch of potential configurations, along with the pros and cons of each. This isn't an exhaustive list, but it should get you thinking along the right lines for what will be ideal in your environment. I also should note that I'm not going to mention systems like Vagrant, because although Vagrant is wonderful for building a quick infrastructure for testing and developing, it's so very different from a bunch of servers that the considerations are too dissimilar really to compare.

### Some SSH Scenarios

1) SSHing into the remote computer as root with the password in the Ansible config.

I started with a terrible idea. The "pro" of this setup is that it eliminates the need for privilege escalation, and no other user accounts are required on the remote server. But the cost of such convenience isn't worth it. First, most systems won't let you SSH in as root without changing the default configuration. Those default configurations are there because, quite frankly, it's just a bad idea to allow the root user to connect remotely. Second, putting a root password in a plain-text configuration file on the Ansible machine is mortifying. Really, I mentioned this possibility because it is a possibility, but it's one that should be avoided. Remember, Ansible allows you to configure the connection yourself, and it will let you do really dumb things. Please don't.

2) SSHing into a remote computer as a regular user, using a password stored in the Ansible config.

An advantage of this scenario is that it doesn't require much configuration of the clients. Most users are able to SSH in by default, so Ansible should be able to use credentials and log in fine. I personally dislike the idea of a password being stored in plain text in a configuration file, but at least it isn't the root password. If you use this method, be sure to consider how privilege escalation will take place on the remote server. I know I haven't talked about escalating privilege yet, but if you have a password in the config file, that same password likely will be used to gain sudo access. So with one slip, you've compromised not only the remote user's account, but also potentially the entire system.

3) SSHing into a remote computer as a regular user, authenticating with a key pair that has an empty passphrase.

This eliminates storing passwords in a configuration file, at least for the logging in part of the process. Key pairs without passphrases aren't ideal, but it's something I often do in an environment like my house. On my internal network, I typically use a key pair without a passphrase to automate many things like cron jobs that require authentication. This isn't the most secure option, because a compromised private key means unrestricted access to the remote user's account, but I like it better than a password in a config file.

4) SSHing into a remote computer as a regular user, authenticating with a key pair that is secured by a passphrase.

This is a very secure way of handling remote access, because it requires two different authentication factors: 1) the private key and 2) the passphrase to decrypt it. If you're just running Ansible interactively, this might be the ideal setup. When you run a command, Ansible should prompt you for the private key's passphrase, and then it'll use the key pair to log in to the remote system. Yes, the same could be done by just using a standard password login and not specifying the password in the configuration file, but if you're going to be typing a password on the command line anyway, why not add the layer of protection a key pair offers?

5) SSHing with a passphrase-protected key pair, but using ssh-agent to "unlock" the private key.

This doesn't perfectly answer the question of unattended, automated Ansible commands, but it does make a fairly secure setup convenient as well. The ssh-agent program authenticates the passphrase one time and then uses that authentication to make future connections. When I'm using Ansible, this is what I think I'd like to be doing. If I'm completely honest, I still usually use key pairs without passphrases, but that's typically because I'm working on my home servers, not something prone to attack.

There are some other considerations to keep in mind when configuring your SSH environment. Perhaps you're able to restrict the Ansible user (which is often your local user name) so it can log in only from a specific IP address. Perhaps your Ansible server can live in a different subnet, behind a strong firewall, so its private keys are more difficult to access remotely. Maybe the Ansible server doesn't have an SSH server installed on itself so there's no incoming access at all. Again, one of the strengths of Ansible is that it uses the SSH protocol for communication, and it's a protocol you've all had years to tweak into a system that works best in your environment. I'm not a big fan of proclaiming what the "best practice" is, because in reality, the best practice is to consider your environment and choose the setup that fits your situation the best.
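
For instance, the source-address restriction mentioned above can be sketched in the client's authorized_keys file, and the ssh-agent approach from scenario 5 takes just two commands on the Ansible server (the IP address, key material and key path here are placeholders, not values from any real setup):

```
# ~/.ssh/authorized_keys on the client: accept this key only
# when the connection comes from the Ansible server's address
from="192.168.1.2" ssh-rsa AAAA...public-key-material... ansible@ansibleserver

# On the Ansible server: unlock a passphrase-protected key once,
# so later connections don't prompt (scenario 5)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
```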

### Privilege Escalation

Once your Ansible server connects to its clients via SSH, it needs to be able to escalate privilege. If you chose option 1 above, you're already root, and this is a moot point. But since no one chose option 1 (right?), you need to consider how a regular user on the client computer gains access. Ansible supports a wide variety of escalation systems, but in Linux, the most common options are sudo and su. As with SSH, there are a few situations to consider, although there are certainly other options.

1) Escalate privilege with su.

For Red Hat/CentOS users, the instinct might be to use su in order to gain system access. By default, those systems configure the root password during install, and to gain privileged access, you need to type it in. The problem with using su is that although it gives you total access to the remote system, it also gives you total access to the remote system. (Yes, that was sarcasm.) Also, the su program doesn't have the ability to authenticate with key pairs, so the password either must be typed interactively or stored in the configuration file. And since it's literally the root password, storing it in the config file should sound like a horrible idea, because it is.

2) Escalate privilege with sudo.

This is how Debian/Ubuntu systems are configured. A user in the correct group can use sudo to execute a command with root privileges. Out of the box, this still has the problem of password storage or interactive typing. Since storing the user's password in the configuration file seems a little less horrible, I guess this is a step up from using su, but it still gives complete access to a system if the password is compromised. (After all, typing sudo su - will allow users to become root just as if they had the root password.)

3) Escalate privilege with sudo and configure NOPASSWD in the sudoers file.

Again, in my local environment, this is what I do. It's not perfect, because it gives unrestricted root access to the user account and doesn't require any passwords. But when I do this, and use SSH key pairs without passphrases, it allows me to automate Ansible commands easily. I'll note again that although it is convenient, it is not a terribly secure idea.

4) Escalate privilege with sudo and configure NOPASSWD on specific executables.

This idea might be the best compromise of security and convenience. Basically, if you know what you plan to do with Ansible, you can give NOPASSWD privilege to the remote user for just those applications it will need to use. It might get a little confusing, since Ansible uses Python for lots of things, but with enough trial and error, you should be able to figure things out. It is more work, but does eliminate some of the glaring security holes.
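
To make options 3 and 4 concrete, here is a sketch of the corresponding sudoers entries (the user name and command paths are placeholders; make the change with visudo rather than editing the file directly):

```
# Option 3: passwordless sudo for everything (convenient, not secure)
ansible ALL=(ALL) NOPASSWD: ALL

# Option 4: passwordless sudo only for the specific commands
# you expect Ansible to run
ansible ALL=(ALL) NOPASSWD: /usr/bin/yum, /usr/bin/systemctl
```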

### Implementing Your Plan

Once you decide how you're going to handle Ansible authentication and privilege escalation, you need to set it up. After you become well versed at Ansible, you might be able to use the tool itself to help "bootstrap" new clients, but at first, it's important to configure clients manually so you know what's happening. It's far better to automate a process you're familiar with than to start with automation from the beginning.

I've written about SSH key pairs in the past, and there are countless articles online for setting them up. The short version, from your Ansible computer, looks something like this:

```
# ssh-keygen
# ssh-copy-id -i .ssh/id_rsa.pub remoteuser@remote.computer.ip
# ssh remoteuser@remote.computer.ip
```

If you've chosen to use no passphrase when creating your key pairs, that last step should get you into the remote computer without typing a password or passphrase.

In order to set up privilege escalation in sudo, you'll need to edit the sudoers file. You shouldn't edit the file directly, but rather use:

```
# sudo visudo
```

This will open the sudoers file and allow you to make changes safely (it error-checks when you save, so you don't accidentally lock yourself out with a typo). There are examples in the file, so you should be able to figure out how to assign the exact privileges you want.

Once it's all configured, you should test it manually before bringing Ansible into the picture. Try SSHing to the remote client, and then try escalating privilege using whatever methods you've chosen. Once you have configured the way you'll connect, it's time to install Ansible.

### Installing Ansible

Since the Ansible program gets installed only on the single computer, it's not a big chore to get going. Red Hat/Ubuntu systems do package installs a bit differently, but neither is difficult.

In Red Hat/CentOS, first enable the EPEL repository:

```
sudo yum install epel-release
```

Then install Ansible:

```
sudo yum install ansible
```

In Ubuntu, first enable the Ansible PPA:

```
sudo apt-add-repository ppa:ansible/ansible
(press ENTER to access the key and add the repo)
```

Then install Ansible:

```
sudo apt-get update
sudo apt-get install ansible
```

### Configuring Ansible Hosts File

The Ansible system has no way of knowing which clients you want it to control unless you give it a list of computers. That list is very simple, and it looks something like this:

```
# file /etc/ansible/hosts

[webservers]
blogserver ansible_host=192.168.1.5
wikiserver ansible_host=192.168.1.10

[dbservers]
mysql_1 ansible_host=192.168.1.22
pgsql_1 ansible_host=192.168.1.23
```

The bracketed sections specify groups. Individual hosts can be listed in multiple groups, and Ansible can refer either to individual hosts or to groups. This is also the configuration file where things like plain-text passwords would be stored, if that's the sort of setup you've planned. Each line in the configuration file configures a single host, and you can add multiple declarations after the ansible_host statement. Some useful options are:

```
ansible_ssh_pass
ansible_become
ansible_become_method
ansible_become_user
ansible_become_pass
```
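
Putting a few of these together, a single host line might look like the following sketch (the host name, IP and user name are placeholders — and storing ansible_ssh_pass or ansible_become_pass in plain text here carries exactly the risks discussed in the SSH scenarios above):

```
[webservers]
blogserver ansible_host=192.168.1.5 ansible_user=deploy ansible_become=true ansible_become_method=sudo ansible_become_user=root
```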

### The Ansible Vault

I also should note that although the setup is more complex, and not something you'll likely do during your first foray into the world of Ansible, the program does offer a way to encrypt passwords in a vault. Once you're familiar with Ansible and you want to put it into production, storing those passwords in an encrypted Ansible vault is ideal. But in the spirit of learning to crawl before you walk, I recommend starting in a non-production environment and using passwordless methods at first.

### Testing Your System

Finally, you should test your system to make sure your clients are connecting. The ping test will make sure the Ansible computer can ping each host:

```
ansible -m ping all
```

After running, you should see a message for each defined host showing ping: pong if the ping was successful. This doesn't actually test authentication, just the network connectivity. Try this to test your authentication:

```
ansible -m shell -a 'uptime' webservers
```

You should see the results of the uptime command for each host in the webservers group.

In a future article, I plan to start digging in to Ansible's ability to manage the remote computers. I'll look at various modules and how you can use the ad-hoc mode to accomplish in a few keystrokes what would take a long time to handle individually on the command line. If you didn't get the results you expected from the sample Ansible commands above, take this time to make sure authentication is working. Check out [the Ansible docs][1] for more help if you get stuck.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin

作者:[Shawn Powers][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/shawn-powers
[1]:http://docs.ansible.com
43
sources/tech/20180111 What is the deal with GraphQL.md
Normal file
@ -0,0 +1,43 @@

translating---geekpi

What is the deal with GraphQL?
======

![](https://ryanmccue.ca/content/images/2018/01/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)

There has been a lot of talk lately about this thing called [GraphQL][1]. It is a relatively new technology coming out of Facebook and is starting to be widely adopted by large companies like [Github][2], Facebook, Twitter, Yelp, and many others. Basically, GraphQL is an alternative to REST: it replaces many single-purpose endpoints like `/user/1` and `/user/1/comments` with a single `/graphql` endpoint, and you use the post body or query string to request the data you need, like `/graphql?query={user(id:1){id,username,comments{text}}}`. You pick the pieces of data you need and can nest down into relations to avoid multiple calls. This is a different way of thinking about a backend, but in some situations, it makes practical sense.
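
Sent as a POST body instead of a query string, that same request is usually written in GraphQL's query language like this (the schema here — a `user` field with `id`, `username` and nested `comments` — is just the hypothetical one from the URL above):

```
{
  user(id: 1) {
    id
    username
    comments {
      text
    }
  }
}
```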

### My Experience with GraphQL

Originally when I heard about it I was very skeptical, and after dabbling in [Apollo Server][3] I was not convinced. Why would you use some silly new technology when you can simply build REST endpoints? But after digging deeper and learning more about its use cases, I came around. I still think REST has a place and will be important for the foreseeable future, but with how bad many APIs and their documentation are, this can be a breath of fresh air...

### Why Use GraphQL Over REST?

Although I have used GraphQL, and think it is a compelling and exciting technology, I believe it does not replace REST. That being said, there are compelling reasons to pick GraphQL over REST in some situations. GraphQL really shines when you are building mobile apps, or web apps made with high mobile traffic in mind. The reason for this is mobile data: REST uses many calls and often returns unused data, whereas with GraphQL you can define precisely what you want returned, for minimal data usage.

You can do all of the above with REST by making multiple endpoints available, but that also adds complexity to the project. It also means there will be back and forth between the frontend and backend teams.

### What Should You Use?

GraphQL is a new technology which is now mainstream. But many developers are not aware of it or choose not to learn it because they think it's a fad. I feel like for most projects you can get away with using either REST or GraphQL. Developing with GraphQL has great benefits, like enforcing documentation, which helps teams work better together and provides clear expectations for each query. This will likely speed up development after the initial hurdle of wrapping your head around GraphQL.

Although I have been comparing GraphQL and REST, I think in most cases a mixture of the two will produce the best results. Combine the strengths of both instead of seeing it strictly as just using GraphQL or just using REST.

### Final Thoughts

Both technologies are here to stay. And done right, both technologies can make fast and efficient backends. GraphQL has an edge because it allows the client to query only the data it needs by default, but that is at a potential sacrifice of endpoint speed. Ultimately, if I were starting a new project, I would go with a mix of both GraphQL and REST.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/what-is-the-deal-with-graphql/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://graphql.org/
[2]:https://developer.github.com/v4/
[3]:https://github.com/apollographql/apollo-server
@ -1,84 +0,0 @@

translating---geekpi

Configuring MSMTP On Ubuntu 16.04 (Again)
======

This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I'm posting it as-is for posterity, and have no idea if it'll work on later versions. As I'm not hosting my own Ubuntu/MSMTP server anymore I can't see any updates being made to this, but if I ever do have to set this up again I'll create an updated post! Anyway, here's what I had…

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post, that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you're using Apache as the web server, but I'm sure it shouldn't be too different if your web server of choice is something else.

I use [msmtp][1] for sending emails from this blog to notify me of comments and upgrades etc. Here I'm going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:

`sudo apt-get install msmtp msmtp-mta ca-certificates`

Once these are installed, a default config is required. By default msmtp will look at `/etc/msmtprc`, so I created that using vim, though any text editor will do the trick. This file looked something like this:

```
# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account
host smtp.gmail.com
port 587
auth login
user
password
from
logfile /var/log/msmtp/msmtp.log

account default :
```

Any of the uppercase items (i.e. ``) are things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we'll update the permissions on the above configuration file -- msmtp won't run if the permissions on that file are too open -- and create the directory for the log file.

```
sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
```
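
Before involving PHP at all, it can be worth confirming that msmtp itself can send mail from the command line. A quick sketch (the recipient address is a placeholder, and `-a default` assumes the `account default` line shown in the config above):

```
printf 'Subject: msmtp test\n\nHello from msmtp.\n' | msmtp -a default personal@email.com
```

If this fails, the log file configured above should contain the reason.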

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don't get too large as well as keeping the log directory a little tidier. To do this, we create `/etc/logrotate.d/msmtp` and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.

```
/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}
```

Now that the logging is configured, we need to tell PHP to use msmtp by editing `/etc/php/7.0/apache2/php.ini` and updating the sendmail path from

`sendmail_path =`

to

`sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a -t"`

Here I did run into an issue where even though I specified the account name it wasn't sending emails correctly when I tested it. This is why the line `account default : ` was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run `sudo service apache2 restart`, then run `php -a` and execute the following

```
mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();
```

Any errors that occur at this point will be displayed in the output, so diagnosing problems after the test should be relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).

I make no claims that this is the most secure configuration, so if you come across this and realise it's grossly insecure or something is drastically wrong please let me know and I'll update it accordingly.

--------------------------------------------------------------------------------

via: https://codingproductivity.wordpress.com/2018/01/18/configuring-msmtp-on-ubuntu-16-04-again/

作者:[JOE][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://codingproductivity.wordpress.com/author/joeb454/
[1]:http://msmtp.sourceforge.net/
@ -1,214 +0,0 @@
|
||||
leemeans translating
|
||||
Getting Started with ncurses
|
||||
======
|
||||
How to use curses to draw to the terminal screen.
|
||||
|
||||
While graphical user interfaces are very cool, not every program needs to run with a point-and-click interface. For example, the venerable vi editor ran in plain-text terminals long before the first GUI.
|
||||
|
||||
The vi editor is one example of a screen-oriented program that draws in "text" mode, using a library called curses, which provides a set of programming interfaces to manipulate the terminal screen. The curses library originated in BSD UNIX, but Linux systems provide this functionality through the ncurses library.
|
||||
|
||||
[For a "blast from the past" on ncurses, see ["ncurses: Portable Screen-Handling for Linux"][1], September 1, 1995, by Eric S. Raymond.]
|
||||
|
||||
Creating programs that use curses is actually quite simple. In this article, I show an example program that leverages curses to draw to the terminal screen.
|
||||
|
||||
### Sierpinski's Triangle
|
||||
|
||||
One simple way to demonstrate a few curses functions is by generating Sierpinski's Triangle. If you aren't familiar with this method to generate Sierpinski's Triangle, here are the rules:
|
||||
|
||||
1. Set three points that define a triangle.
|
||||
|
||||
2. Randomly select a point anywhere (x,y).
|
||||
|
||||
Then:
|
||||
|
||||
1. Randomly select one of the triangle's points.
|
||||
|
||||
2. Set the new x,y to be the midpoint between the previous x,y and the triangle point.
|
||||
|
||||
3. Repeat.
|
||||
|
||||
So with those instructions, I wrote this program to draw Sierpinski's Triangle to the terminal screen using the curses functions:
|
||||
|
||||
```
|
||||
|
||||
1 /* triangle.c */
|
||||
2
|
||||
3 #include
|
||||
4 #include
|
||||
5
|
||||
6 #include "getrandom_int.h"
|
||||
7
|
||||
8 #define ITERMAX 10000
|
||||
9
|
||||
10 int main(void)
|
||||
11 {
|
||||
12 long iter;
|
||||
13 int yi, xi;
|
||||
14 int y[3], x[3];
|
||||
15 int index;
|
||||
16 int maxlines, maxcols;
|
||||
17
|
||||
18 /* initialize curses */
|
||||
19
|
||||
20 initscr();
|
||||
21 cbreak();
|
||||
22 noecho();
|
||||
23
|
||||
24 clear();
|
||||
25
|
||||
26 /* initialize triangle */
|
||||
27
|
||||
28 maxlines = LINES - 1;
|
||||
29 maxcols = COLS - 1;
|
||||
30
|
||||
31 y[0] = 0;
|
||||
32 x[0] = 0;
|
||||
33
|
||||
34 y[1] = maxlines;
|
||||
35 x[1] = maxcols / 2;
|
||||
36
|
||||
37 y[2] = 0;
|
||||
38 x[2] = maxcols;
|
||||
39
|
||||
40 mvaddch(y[0], x[0], '0');
|
||||
41 mvaddch(y[1], x[1], '1');
|
||||
42 mvaddch(y[2], x[2], '2');
|
||||
43
|
||||
44 /* initialize yi,xi with random values */
|
||||
45
|
||||
46 yi = getrandom_int() % maxlines;
|
||||
47 xi = getrandom_int() % maxcols;
|
||||
48
|
||||
49 mvaddch(yi, xi, '.');
|
||||
50
|
||||
51 /* iterate the triangle */
|
||||
52
|
||||
53 for (iter = 0; iter < ITERMAX; iter++) {
|
||||
54 index = getrandom_int() % 3;
|
||||
55
|
||||
56 yi = (yi + y[index]) / 2;
|
||||
57 xi = (xi + x[index]) / 2;
|
||||
58
|
||||
59 mvaddch(yi, xi, '*');
|
||||
60 refresh();
|
||||
61 }
|
||||
62
|
||||
63 /* done */
|
||||
64
|
||||
65 mvaddstr(maxlines, 0, "Press any key to quit");
|
||||
66
|
||||
67 refresh();
|
||||
68
|
||||
69 getch();
|
||||
70 endwin();
|
||||
71
|
||||
72 exit(0);
|
||||
73 }
|
||||
|
||||
```
|
||||
|
||||
Let me walk through that program by way of explanation. First, the getrandom_int() is my own wrapper to the Linux getrandom() system call, but it's guaranteed to return a positive integer value. Otherwise, you should be able to identify the code lines that initialize and then iterate Sierpinski's Triangle, based on the above rules. Aside from that, let's look at the curses functions I used to draw the triangle on a terminal.
|
||||
|
||||
Most curses programs will start with these four instructions. 1) The initscr() function determines the terminal type, including its size and features, and sets up the curses environment based on what the terminal can support. The cbreak() function disables line buffering and sets curses to take one character at a time. The noecho() function tells curses not to echo the input back to the screen, and the clear() function clears the screen:
|
||||
|
||||
```
|
||||
|
||||
20 initscr();
|
||||
21 cbreak();
|
||||
22 noecho();
|
||||
23
|
||||
24 clear();
|
||||
|
||||
```
|
||||
|
||||
The program then sets a few variables to define the three points that define a triangle. Note the use of LINES and COLS here, which were set by initscr(). These values tell the program how many lines and columns exist on the terminal. Screen coordinates start at zero, so the top-left of the screen is row 0, column 0. The bottom-right of the screen is row LINES - 1, column COLS - 1. To make this easy to remember, my program sets these values in the variables maxlines and maxcols, respectively.
|
||||
|
||||
Two simple methods to draw text on the screen are the addch() and addstr() functions. To put text at a specific screen location, use the related mvaddch() and mvaddstr() functions. My program uses these functions in several places. First, the program draws the three points that define the triangle, labeled "0", "1" and "2":
|
||||
|
||||
```
|
||||
|
||||
40 mvaddch(y[0], x[0], '0');
|
||||
41 mvaddch(y[1], x[1], '1');
|
||||
42 mvaddch(y[2], x[2], '2');
|
||||
|
||||
```
|
||||
|
||||
To draw the random starting point, the program makes a similar call:
|
||||
|
||||
```
|
||||
|
||||
49 mvaddch(yi, xi, '.');
|
||||
|
||||
```
|
||||
|
||||
And to draw each successive point in Sierpinski's Triangle iteration:
|
||||
|
||||
```
|
||||
|
||||
59 mvaddch(yi, xi, '*');
|
||||
|
||||
```
|
||||
|
||||
When the program is done, it displays a helpful message at the lower-left corner of the screen (at row maxlines, column 0):
|
||||
|
||||
```
|
||||
|
||||
65 mvaddstr(maxlines, 0, "Press any key to quit");
|
||||
|
||||
```
|
||||
|
||||
It's important to note that curses maintains a version of the screen in memory and updates the screen only when you ask it to. This provides greater performance, especially if you want to display a lot of text to the screen. This is because curses can update only those parts of the screen that changed since the last update. To cause curses to update the terminal screen, use the refresh() function.
|
||||
|
||||
In my example program, I've chosen to update the screen after "drawing" each successive point in Sierpinski's Triangle. By doing so, users should be able to observe each iteration in the triangle.
|
||||
|
||||
Before exiting, I use the getch() function to wait for the user to press a key. Then I call endwin() to exit the curses environment and return the terminal screen to normal control:
|
||||
|
||||
```
|
||||
|
||||
69 getch();
|
||||
70 endwin();
|
||||
|
||||
```
|
||||
|
||||
### Compiling and Sample Output
|
||||
|
||||
Now that you have your first sample curses program, it's time to compile and run it. Remember that Linux systems implement the curses functionality via the ncurses library, so you need to link with -lncurses when you compile. For example:
|
||||
|
||||
```
|
||||
|
||||
$ ls
|
||||
getrandom_int.c getrandom_int.h triangle.c
|
||||
|
||||
$ gcc -Wall -o triangle triangle.c getrandom_int.c -lncurses
|
||||
|
||||
```
|
||||
|
||||
Running the triangle program on a standard 80x24 terminal is not very interesting. You just can't see much detail in Sierpinski's Triangle at that resolution. If you run a terminal window and set a very small font size, you can see the fractal nature of Sierpinski's Triangle more easily. On my system, the output looks like Figure 1.
|
||||
|
||||
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/triangle.png)
|
||||
|
||||
Figure 1. Output of the triangle Program
|
||||
|
||||
Despite the random nature of the iteration, every run of Sierpinski's Triangle will look pretty much the same. The only difference will be where the first few points are drawn to the screen. In this example, you can see the single dot that starts the triangle, near point 1. It looks like the program picked point 2 next, and you can see the asterisk halfway between the dot and the "2". And it looks like the program randomly picked point 2 for the next random number, because you can see the asterisk halfway between the first asterisk and the "2". From there, it's impossible to tell how the triangle was drawn, because all of the successive dots fall within the triangle area.
|
||||
|
||||
### Starting to Learn ncurses
|
||||
|
||||
This program is a simple example of how to use the curses functions to draw characters to the screen. You can do so much more with curses, depending on what you need your program to do. In a follow-up article, I will show how to use curses to allow the user to interact with the screen. If you are interested in getting a head start with curses, I encourage you to read Pradeep Padala's ["NCURSES Programming HOWTO"][2], at the Linux Documentation Project.
|
||||
|
||||
### About the author
|
||||
|
||||
Jim Hall is an advocate for free and open-source software, best known for his work on the FreeDOS Project, and he also focuses on the usability of open-source software. Jim is the Chief Information Officer at Ramsey County, Minn.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/getting-started-ncurses
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/users/jim-hall
|
||||
[1]:http://www.linuxjournal.com/article/1124
|
||||
[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO
|
@ -1,174 +0,0 @@
|
||||
Translating by yizhuoyan
|
||||
|
||||
Linux rm Command Explained for Beginners (8 Examples)
|
||||
======
|
||||
|
||||
Deleting files is a fundamental operation, just like copying files or renaming/moving them. In Linux, there's a dedicated command - dubbed **rm** - that lets you perform all deletion-related operations. In this tutorial, we will discuss the basics of this tool along with some easy to understand examples.
|
||||
|
||||
But before we do that, it's worth mentioning that all examples mentioned in the article have been tested on Ubuntu 16.04 LTS.
|
||||
|
||||
#### Linux rm command
|
||||
|
||||
In layman's terms, the rm command is used for removing/deleting files and directories. Following is the syntax of the command:
|
||||
|
||||
```
|
||||
rm [OPTION]... [FILE]...
|
||||
```
|
||||
|
||||
And here's how the tool's man page describes it:
|
||||
```
|
||||
This manual page documents the GNU version of rm. rm removes each specified file. By default, it
|
||||
does not remove directories.
|
||||
|
||||
If the -I or --interactive=once option is given, and there are more than three files or the -r,
|
||||
-R, or --recursive are given, then rm prompts the user for whether to proceed with the entire
|
||||
operation. If the response is not affirmative, the entire command is aborted.
|
||||
|
||||
Otherwise, if a file is unwritable, standard input is a terminal, and the -f or --force option is
|
||||
not given, or the -i or --interactive=always option is given, rm prompts the user for whether to
|
||||
remove the file. If the response is not affirmative, the file is skipped.
|
||||
```
|
||||
|
||||
The following Q&A-styled examples will give you a better idea on how the tool works.
|
||||
|
||||
#### Q1. How to remove files using rm command?
|
||||
|
||||
That's pretty easy and straightforward. All you have to do is to pass the name of the files (along with paths if they are not in the current working directory) as input to the rm command.
|
||||
|
||||
```
|
||||
rm [filename]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
rm testfile.txt
|
||||
```
|
||||
|
||||
[![How to remove files using rm command][1]][2]
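rm also accepts several filenames (and shell globs) in a single invocation. A quick sketch, using throwaway files in a scratch directory so nothing real gets deleted:

```shell
# scratch directory with some throwaway files
mkdir -p /tmp/rm-demo && cd /tmp/rm-demo
touch a.txt b.txt c.log

rm a.txt b.txt    # several files in one call
rm *.log          # the shell expands the glob before rm runs
```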
|
||||
|
||||
#### Q2. How to remove directories using rm command?
|
||||
|
||||
If you are trying to remove a directory, then you need to use the **-r** command line option. Otherwise, rm will throw an error saying what you are trying to delete is a directory.
|
||||
|
||||
```
|
||||
rm -r [dir name]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
rm -r testdir
|
||||
```
|
||||
|
||||
[![How to remove directories using rm command][3]][4]
|
||||
|
||||
#### Q3. How to make rm prompt before every removal?
|
||||
|
||||
If you want rm to prompt before each delete action it performs, then use the **-i** command line option.
|
||||
|
||||
```
|
||||
rm -i [file or dir]
|
||||
```
|
||||
|
||||
For example, suppose you want to delete a directory 'testdir' and all its contents, but want rm to prompt before every deletion, then here's how you can do that:
|
||||
|
||||
```
|
||||
rm -r -i testdir
|
||||
```
|
||||
|
||||
[![How to make rm prompt before every removal][5]][6]
|
||||
|
||||
#### Q4. How to force rm to ignore nonexistent files?
|
||||
|
||||
The rm command lets you know through an error message if you try deleting a non-existent file or directory.
|
||||
|
||||
[![Linux rm command example][7]][8]
|
||||
|
||||
However, if you want, you can make rm suppress such error/notifications - all you have to do is to use the **-f** command line option.
|
||||
|
||||
```
|
||||
rm -f [filename]
|
||||
```
|
||||
|
||||
[![How to force rm to ignore nonexistent files][9]][10]
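The difference is easy to see on the command line; the exit status changes too, which matters in scripts (the filenames below are invented for the example):

```shell
mkdir -p /tmp/rm-f-demo && cd /tmp/rm-f-demo

rm nofile.txt 2>/dev/null || echo "plain rm failed"   # exit status is non-zero
rm -f nofile.txt && echo "rm -f succeeded quietly"    # exit status is 0
```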
|
||||
|
||||
#### Q5. How to make rm prompt only in some scenarios?
|
||||
|
||||
There exists a command line option, **-I**, which makes rm prompt only once: before removing more than three files, or when removing recursively.
|
||||
|
||||
For example, the following screenshot shows this option in action - there was no prompt when two files were deleted, but the command prompted when more than three files were deleted.
|
||||
|
||||
[![How to make rm prompt only in some scenarios][11]][12]
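With GNU rm you can see the one-time prompt by piping the answer in; a sketch with four throwaway files:

```shell
mkdir -p /tmp/rm-I-demo && cd /tmp/rm-I-demo
touch f1 f2 f3 f4

# four arguments trigger the single batch prompt; "y" answers it
echo y | rm -I f1 f2 f3 f4
```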
|
||||
|
||||
#### Q6. How does rm handle the root directory?
|
||||
|
||||
Of course, deleting the root directory is the last thing a Linux user would want. That's why the rm command doesn't let you perform a recursive delete operation on this directory by default.
|
||||
|
||||
[![How rm works when dealing with root directory][13]][14]
|
||||
|
||||
However, if you want to go ahead with this operation for whatever reason, then you need to tell rm by using the **--no-preserve-root** option. When this option is enabled, rm doesn't treat the root directory (/) specially.
|
||||
|
||||
In case you want to know the scenarios in which a user might want to delete the root directory of their system, head [here][15].
|
||||
|
||||
#### Q7. How to make rm only remove empty directories?
|
||||
|
||||
If you want to restrict rm's directory deletion ability to empty directories only, you can use the **-d** command line option.
|
||||
|
||||
```
|
||||
rm -d [dir]
|
||||
```
|
||||
|
||||
The following screenshot shows the -d command line option in action - only the empty directory got deleted.
|
||||
|
||||
[![How to make rm only remove empty directories][16]][17]
|
||||
|
||||
#### Q8. How to force rm to emit details of the operation it is performing?
|
||||
|
||||
If you want rm to display detailed information of the operation being performed, then this can be done by using the **-v** command line option.
|
||||
|
||||
```
|
||||
rm -v [file or directory name]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
[![How to force rm to emit details of operation it is performing][18]][19]
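On a typical GNU coreutils system the verbose output looks roughly like the comments below (exact wording can vary by version):

```shell
mkdir -p /tmp/rm-v-demo && cd /tmp/rm-v-demo
touch one.txt two.txt

rm -v one.txt two.txt
# prints one line per file, e.g.:
#   removed 'one.txt'
#   removed 'two.txt'
```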
|
||||
|
||||
#### Conclusion
|
||||
|
||||
Given the kind of functionality it offers, rm is one of the most frequently used commands in Linux (like [cp][20] and mv). Here, in this tutorial, we have covered almost all the major command line options this tool provides. rm has a bit of a learning curve associated with it, so you'll have to spend some time practicing its options before you start using the tool in your day-to-day work. For more information, head to the command's [man page][21].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-rm-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/command-tutorial/rm-basic-usage.png
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/big/rm-basic-usage.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/rm-r.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/big/rm-r.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/rm-i-option.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-i-option.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/rm-non-ext-error.png
|
||||
[8]:https://www.howtoforge.com/images/command-tutorial/big/rm-non-ext-error.png
|
||||
[9]:https://www.howtoforge.com/images/command-tutorial/rm-f-option.png
|
||||
[10]:https://www.howtoforge.com/images/command-tutorial/big/rm-f-option.png
|
||||
[11]:https://www.howtoforge.com/images/command-tutorial/rm-I-option.png
|
||||
[12]:https://www.howtoforge.com/images/command-tutorial/big/rm-I-option.png
|
||||
[13]:https://www.howtoforge.com/images/command-tutorial/rm-root-default.png
|
||||
[14]:https://www.howtoforge.com/images/command-tutorial/big/rm-root-default.png
|
||||
[15]:https://superuser.com/questions/742334/is-there-a-scenario-where-rm-rf-no-preserve-root-is-needed
|
||||
[16]:https://www.howtoforge.com/images/command-tutorial/rm-d-option.png
|
||||
[17]:https://www.howtoforge.com/images/command-tutorial/big/rm-d-option.png
|
||||
[18]:https://www.howtoforge.com/images/command-tutorial/rm-v-option.png
|
||||
[19]:https://www.howtoforge.com/images/command-tutorial/big/rm-v-option.png
|
||||
[20]:https://www.howtoforge.com/linux-cp-command/
|
||||
[21]:https://linux.die.net/man/1/rm
|
193
sources/tech/20180123 Migrating to Linux- The Command Line.md
Normal file
193
sources/tech/20180123 Migrating to Linux- The Command Line.md
Normal file
@ -0,0 +1,193 @@
|
||||
Migrating to Linux: The Command Line
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/migrate.jpg?itok=2PBkvV7s)
|
||||
|
||||
This is the fourth article in our series on migrating to Linux. If you missed the previous installments, we've covered [Linux for new users][1], [files and filesystems][2], and [graphical environments][3]. Linux is everywhere. It's used to run most Internet services like web servers, email servers, and others. It's also used in your cell phone, your car console, and a whole lot more. So, you might be curious to try out Linux and learn more about how it works.
|
||||
|
||||
Under Linux, the command line is very useful. On desktop Linux systems, although the command line is optional, you will often see people have a command line window open alongside other application windows. On Internet servers, and when Linux is running in a device, the command line is often the only way to interact directly with the system. So, it's good to know at least some command line basics.
|
||||
|
||||
In the command line (often called a shell in Linux), everything is done by entering commands. You can list files, move files, display the contents of files, edit files, and more, even display web pages, all from the command line.
|
||||
|
||||
If you are already familiar with using the command line in Windows (either CMD.EXE or PowerShell), you may want to jump down to the section titled Familiar with Windows Command Line? and read that first.
|
||||
|
||||
### Navigating
|
||||
|
||||
In the command line, there is the concept of the current working directory (Note: a folder and a directory are synonymous, and in Linux they're usually called directories). Many commands will look in this directory by default if no other directory path is specified. For example, typing ls with no arguments will list the files in this working directory:
|
||||
```
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
```
|
||||
|
||||
The command, ls Documents, will instead list files in the Documents directory:
|
||||
```
|
||||
$ ls Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
You can display the current working directory by typing pwd. For example:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
```
|
||||
|
||||
You can change the current directory by typing cd and then the directory you want to change to. For example:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
```
|
||||
|
||||
A directory path is a list of directories separated by a / (slash) character. The directories in a path have an implied hierarchy. For example, the path /home/student expects there to be a directory named home in the top-level directory, and a directory named student inside that directory home.
|
||||
|
||||
Directory paths are either absolute or relative. Absolute directory paths start with the / character.
|
||||
|
||||
Relative paths start with either . (dot) or .. (dot dot). In a path, a . (dot) means the current directory, and .. (dot dot) means one directory up from the current one. For example, ls ../Documents means look in the directory up one from the current one and show the contents of the directory named Documents in there:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
$ ls ../Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
When you first open a command line window on a Linux system, your current working directory is set to your home directory, usually: /home/<your login name here>. Your home directory is dedicated to your login; it's where you can store your own files.
|
||||
|
||||
The environment variable $HOME expands to the directory path to your home directory. For example:
|
||||
```
|
||||
$ echo $HOME
|
||||
/home/student
|
||||
```
|
||||
|
||||
The following table shows a summary of some of the common commands used to navigate directories and manage simple text files.
|
||||
|
||||
### Searching
|
||||
|
||||
Sometimes I forget where a file resides, or I forget the name of the file I am looking for. There are a couple of commands in the Linux command line that you can use to help you find files and search the contents of files.
|
||||
|
||||
The first command is find. You can use find to search for files and directories by name or other attribute. For example, if I forgot where I kept my todo.txt file, I can run the following:
|
||||
```
|
||||
$ find $HOME -name todo.txt
|
||||
/home/student/Documents/todo.txt
|
||||
```
|
||||
|
||||
The find program has a lot of features and options. A simple form of the command is:
|
||||
```
find <directory to search> -name <filename>
```
|
||||
|
||||
If there is more than one file named todo.txt from the example above, it will show me all the places where it found a file by that name. The find command has many options to search by type (file, directory, or other), by date, newer than date, by size, and more. You can type:
|
||||
```
|
||||
man find
|
||||
```
|
||||
|
||||
to get help on how to use the find command.
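A few of those variations sketched against a small throwaway tree (the file names here are invented for the example):

```shell
# build a small tree to search through
mkdir -p /tmp/find-demo/Documents
touch /tmp/find-demo/Documents/todo.txt
dd if=/dev/zero of=/tmp/find-demo/big.bin bs=1024 count=2048 2>/dev/null

find /tmp/find-demo -name todo.txt       # search by name
find /tmp/find-demo -type d              # directories only
find /tmp/find-demo -type f -size +1M    # regular files larger than 1 MB
```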
|
||||
|
||||
You can also use a command called grep to search inside files for specific contents. For example:
|
||||
```
|
||||
grep "01/02/2018" todo.txt
|
||||
```
|
||||
|
||||
will show me all the lines that have the January 2, 2018 date in them.
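A self-contained sketch, with a made-up todo.txt so the matches are predictable:

```shell
mkdir -p /tmp/grep-demo && cd /tmp/grep-demo
printf 'call mom 01/02/2018\npay rent 01/05/2018\n' > todo.txt

grep "01/02/2018" todo.txt    # prints: call mom 01/02/2018
grep -n "2018" todo.txt       # -n prefixes each match with its line number
```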
|
||||
|
||||
### Getting Help
|
||||
|
||||
There are a lot of commands in Linux, and it would be too much to describe all of them here. So the next best thing is to show how to get help on commands.
|
||||
|
||||
The command apropos helps you find commands that do certain things. Maybe you want to find out all the commands that operate on directories or get a list of open files, but you don't know what command to run. So, you can try:
|
||||
```
|
||||
apropos directory
|
||||
```
|
||||
|
||||
which will give a list of commands that have the word "directory" in their help text. Or, you can do:
|
||||
```
|
||||
apropos "list open files"
|
||||
```
|
||||
|
||||
which will show one command, lsof, that you can use to list open files.
|
||||
|
||||
If you know the command you need to use but aren't sure which options to use to get it to behave the way you want, you can use the command called man, which is short for manual. You would use man <command>, for example:
|
||||
```
|
||||
man ls
|
||||
```
|
||||
|
||||
You can try man ls on your own. It will give several pages of information.
|
||||
|
||||
The man command explains all the options and parameters you can give to a command, and often will even give an example.
|
||||
|
||||
Many commands often also have a help option (e.g., ls --help), which will give information on how to use a command. The man pages are usually more detailed, while the --help option is useful for a quick lookup.
|
||||
|
||||
### Scripts
|
||||
|
||||
One of the best things about the Linux command line is that the commands that are typed in can be scripted, and run over and over again. Commands can be placed as separate lines in a file. You can put #!/bin/sh as the first line in the file, followed by the commands. Then, once the file is marked as executable, you can run the script as if it were its own command. For example,
|
||||
```
|
||||
--- contents of get_todays_todos.sh ---
|
||||
#!/bin/sh
|
||||
todays_date=`date +"%m/%d/%y"`
|
||||
grep "$todays_date" "$HOME/todos.txt"
|
||||
```
|
||||
|
||||
Scripts help automate certain tasks in a set of repeatable steps. Scripts can also get very sophisticated if needed, with loops, conditional statements, routines, and more. There's not space here to go into detail, but you can find more information about Linux bash scripting online.
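Marking the script executable and running it might look like the following sketch; it uses a local todos.txt (with invented contents) instead of $HOME/todos.txt so that it's self-contained:

```shell
mkdir -p /tmp/script-demo && cd /tmp/script-demo
printf '%s buy milk\n' "$(date +%m/%d/%y)" > todos.txt

# a local variant of get_todays_todos.sh that reads ./todos.txt
cat > get_todays_todos.sh <<'EOF'
#!/bin/sh
todays_date=`date +"%m/%d/%y"`
grep "$todays_date" todos.txt
EOF

chmod +x get_todays_todos.sh   # set the executable permission flag
./get_todays_todos.sh          # prints the "buy milk" line
```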
|
||||
|
||||
### Familiar with Windows Command Line?
|
||||
|
||||
If you are familiar with the Windows CMD or PowerShell program, typing commands at a command prompt should feel familiar. However, several things work differently in Linux and if you don't understand those differences, it may be confusing.
|
||||
|
||||
First, under Linux, the PATH environment variable works differently than it does under Windows. In Windows, the current directory is assumed to be the first directory on the path, even though it's not listed in the list of directories in PATH. Under Linux, the current directory is not assumed to be on the path, and it is not explicitly put on the path either. Putting . in the PATH environment variable is considered to be a security risk under Linux. In Linux, to run a program in the current directory, you need to prefix it with ./ (which is the file's relative path from the current directory). This trips up a lot of CMD users. For example:
|
||||
```
|
||||
./my_program
|
||||
```
|
||||
|
||||
rather than
|
||||
```
|
||||
my_program
|
||||
```
|
||||
|
||||
In addition, in Windows paths are separated by a ; (semicolon) character in the PATH environment variable. On Linux, in PATH, directories are separated by a : (colon) character. Also in Linux, directories in a single path are separated by a / (slash) character while under Windows directories in a single path are separated by a \ (backslash) character. So a typical PATH environment variable in Windows might look like:
|
||||
```
PATH="C:\Program Files;C:\Program Files\Firefox;"
```

while on Linux it might look like:

```
PATH="/usr/bin:/opt/mozilla/firefox"
```
|
||||
|
||||
Also note that environment variables are expanded with a $ on Linux, so $PATH expands to the contents of the PATH environment variable whereas in Windows you need to enclose the variable in percent symbols (e.g., %PATH%).
|
||||
|
||||
In Linux, options are commonly passed to programs using a - (dash) character in front of the option, while under Windows options are passed by preceding options with a / (slash) character. So, under Linux, you would do:
|
||||
```
|
||||
a_prog -h
|
||||
```
|
||||
|
||||
rather than
|
||||
```
|
||||
a_prog /h
|
||||
```
|
||||
|
||||
Under Linux, file extensions generally don't signify anything. For example, renaming myscript to myscript.bat doesn't make it executable. Instead, to make a file executable, the file's executable permission flag needs to be set. File permissions are covered in more detail next time.
|
||||
|
||||
Under Linux when file and directory names start with a . (dot) character they are hidden. So, for example, if you're told to edit the file, .bashrc, and you don't see it in your home directory, it probably really is there. It's just hidden. In the command line, you can use option -a on the command ls to see hidden files. For example:
|
||||
```
|
||||
ls -a
|
||||
```
|
||||
|
||||
Under Linux, common commands are also different from those in the Windows command line. The following table shows a mapping from common items used under CMD to the alternatives used under Linux.
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/table-2_0.png?itok=NNc8TZFZ)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line
|
||||
|
||||
作者:[John Bonesio][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/johnbonesio
|
||||
[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
|
||||
[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
|
||||
[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
|
@ -1,418 +0,0 @@
|
||||
translating by heart4lor
|
||||
|
||||
How to Make a Minecraft Server – ThisHosting.Rocks
|
||||
======
|
||||
We’ll show you how to make a Minecraft server with beginner-friendly step-by-step instructions. It will be a persistent multiplayer server that you can play on with your friends from all around the world. You don’t have to be in a LAN.
|
||||
|
||||
### How to Make a Minecraft Server – Quick Guide
|
||||
|
||||
This is our “Table of contents” if you’re in a hurry and want to go straight to the point. We recommend reading everything though.
|
||||
|
||||
* [Learn stuff][1] (optional)
|
||||
|
||||
* [Learn more stuff][2] (optional)
|
||||
|
||||
* [Requirements][3] (required)
|
||||
|
||||
* [Install and start the Minecraft server][4] (required)
|
||||
|
||||
* [Run the server even after you log out of your VPS][5] (optional)
|
||||
|
||||
* [Make the server automatically start at boot][6] (optional)
|
||||
|
||||
* [Configure your Minecraft server][7] (required)
|
||||
|
||||
* [FAQs][8] (optional)
|
||||
|
||||
Before going into the actual instructions, a few things you should know:
|
||||
|
||||
#### Reasons why you would NOT use a specialized Minecraft server hosting provider
|
||||
|
||||
Since you’re here, you’re obviously interested in hosting your own Minecraft server. There are more reasons why you would not use a specialized Minecraft hosting provider, but here are a few:
|
||||
|
||||
* They’re slow most of the time. This is because you actually share the resources with multiple users. It becomes overloaded at some point. Most of them oversell their servers too.
|
||||
|
||||
* You don’t have full control over the Minecraft server or the actual server. You cannot customize anything you want to.
|
||||
|
||||
* You’re limited. Those kinds of hosting plans are always limited in one way or another.
|
||||
|
||||
Of course, there are positives to using a Minecraft hosting provider. The best upside is that you don’t actually have to do all the stuff we’ll write about below. But where’s the fun in that?
|
||||
![🙂](https://s.w.org/images/core/emoji/2.3/svg/1f642.svg)
|
||||
|
||||
#### Why you should NOT use your personal computer to make a Minecraft server
|
||||
|
||||
We noticed lots of tutorials showing you how to host a server on your own computer. There are downsides to doing that, like:
|
||||
|
||||
  * Your home internet is not secured enough to handle DDoS attacks. Game servers are often prone to DDoS attacks, and your home network setup is most probably not secured enough to handle them. It's most likely not powerful enough to handle even a small attack.
|
||||
|
||||
* You’ll need to handle port forwarding. If you’ve tried making a Minecraft server on your home network, you’ve surely stumbled upon port forwarding and had issues with it.
|
||||
|
||||
* You’ll need to keep your computer on at all times. Your electricity bill will sky-rocket and you’ll add unnecessary load to your hardware. The hardware most servers use is enterprise-grade and designed to handle loads, with improved stability and longevity.
|
||||
|
||||
* Your home internet is not fast enough. Home networks are not designed to handle multiplayer games. You’ll need a much larger internet plan to even consider making a small server. Luckily, data centers have multiple high-speed, enterprise-grade internet connections making sure they have (or strive to have) 100% uptime.
|
||||
|
||||
* Your hardware is most likely not good enough. Again, servers use enterprise-grade hardware, latest and fastest CPUs, SSDs, and much more. Your personal computer most likely does not.
|
||||
|
||||
* You probably use Windows/MacOS on your personal computer. Though this is debatable, we believe that Linux is much better for game hosting. Don’t worry, you don’t really need to know everything about Linux to make a Minecraft server (though it’s recommended). We’ll show you everything you need to know.
|
||||
|
||||
Our tip is not to use your personal computer, though technically you can. It’s not expensive to buy a cloud server. We’ll show you how to make a Minecraft server on cloud hosting below. It’s easy if you carefully follow the steps.
|
||||
|
||||
### Making a Minecraft Server – Requirements
|
||||
|
||||
There are a few requirements. You should have and know all of this before continuing to the tutorial:
|
||||
|
||||
  * You’ll need a [Linux cloud server][9]. We recommend [Vultr][10]. Their prices are cheap, their services are high-quality, customer support is great, and all server hardware is high-end. Check the [Minecraft server requirements][11] to find out what kind of server you should get (resources like RAM and disk space). We recommend getting the $20 per month server. They support hourly pricing, so if you only need the server temporarily for playing with friends, you’ll pay less. During signup, choose the Ubuntu 16.04 distro and the server location closest to where your players live. Keep in mind that you’ll be responsible for your server, so you’ll have to secure it and manage it. If you don’t want to do that, you can get a [managed server][12], in which case the hosting provider will likely make a Minecraft server for you.
|
||||
|
||||
* You’ll need an SSH client to connect to the Linux cloud server. [PuTTy][13] is often recommended for beginners, but we also recommend [MobaXTerm][14]. There are many other SSH clients to choose from, so pick your favorite.
|
||||
|
||||
* You’ll need to setup your server (basic security setup at least). Google it and you’ll find many tutorials. You can use [Linode’s Security Guide][15] and follow the exact steps on your [Vultr][16] server.
|
||||
|
||||
* We’ll handle the software requirements like Java below.
|
||||
|
||||
And finally, onto our actual tutorial:
|
||||
|
||||
### How to Make a Minecraft Server on Ubuntu (Linux)
|
||||
|
||||
These instructions are written for and tested on an Ubuntu 16.04 server from [Vultr][17]. Though they’ll also work on Ubuntu 14.04, [Ubuntu 18.04][18], and any other Ubuntu-based distro, and any other server provider.
|
||||
|
||||
We’re using the default Vanilla server from Minecraft. You can use alternatives like CraftBukkit or Spigot that allow more customizations and plugins. Though if you use too many plugins you’ll essentially ruin the server. There are pros and cons to each one. Nevertheless, the instructions below are for the default Vanilla server to keep things simple and beginner-friendly. We may publish a tutorial for CraftBukkit soon if there’s an interest.
|
||||
|
||||
#### 1. Login to your server
|
||||
|
||||
We’ll use the root user. If you use a limited-user, you’ll have to execute most commands with ‘sudo’. You’ll get a warning if you’re doing something you don’t have enough permissions for.

You can log in to your server via your SSH client, using your server IP and your port (most likely 22).

After you log in, make sure you [secure your server][19].

#### 2. Update Ubuntu

You should always update Ubuntu before you do anything else. You can update it with the following command:

```
apt-get update && apt-get upgrade
```

Hit “enter” and/or “y” when prompted.

#### 3. Install necessary tools

You’ll need a few packages and tools for various things in this tutorial, like text editing and making your server persistent. Install them with the following command:

```
apt-get install nano wget screen bash default-jdk ufw
```

Some of them may already be installed.
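If you want to double-check before moving on, a small sketch like the following reports which of them are on your PATH (the tool names are taken from the install command above; `java` is what the default-jdk package provides):

```shell
# Check which of the tools installed above are actually on PATH.
check_tools() {
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool: found"
        else
            echo "$tool: missing"
        fi
    done
}

check_tools nano wget screen java
```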

#### 4. Download Minecraft Server

First, create a directory where you’ll store your Minecraft server and all other files:

```
mkdir /opt/minecraft
```

And navigate to the new directory:

```
cd /opt/minecraft
```

Now you can download the Minecraft server file. Go to the [download page][20] and get the link there. Download the file with wget:

```
wget https://s3.amazonaws.com/Minecraft.Download/versions/1.12.2/minecraft_server.1.12.2.jar
```

#### 5. Install the Minecraft server

Once you’ve downloaded the server .jar file, you need to run it once so it generates some files, including an eula.txt license file. The first time you run it, it will return an error and exit; that’s supposed to happen. Run it with the following command:

```
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
```

“-Xms2048M” is the minimum RAM your Minecraft server can use and “-Xmx3472M” is the maximum. [Adjust][21] these based on your server’s resources. If you got the 4GB RAM server from [Vultr][22], you can leave them as-is, provided you don’t use the server for anything other than Minecraft.
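As a rough rule of thumb (our own heuristic, not an official formula), you can leave around 512 MB for the OS and give the rest to the JVM heap:

```shell
# Heuristic heap sizing: reserve ~512 MB for the OS, give the rest to Java.
total_mb=4096                 # e.g. a 4 GB server
xmx=$((total_mb - 512))       # maximum heap
xms=$((xmx / 2))              # minimum heap: half of the maximum

echo "java -Xms${xms}M -Xmx${xmx}M -jar minecraft_server.1.12.2.jar nogui"
```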

After that command ends and returns an error, a new eula.txt file will be generated. You need to accept the license in that file. You can do that by setting “eula=true” in the file with the following command:

```
sed -i.orig 's/eula=false/eula=true/g' eula.txt
```

You can now start the server again and access the Minecraft server console with the same java command as before:

```
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
```

Make sure you’re in the /opt/minecraft directory, or wherever you installed your MC server.

You’re free to stop here if you’re just testing this and only need it for the short term. If you’re having trouble logging into the server, you’ll need to [configure your firewall][23].

The first time you successfully start the server, it will take a bit longer than usual, as it needs to generate the world.

We’ll show you how to create a script so you can start the server with it.

#### 6. Start the Minecraft server with a script, make it persistent, and enable it at boot

To make things easier, we’ll create a bash script that starts the server automatically.

First, create a bash script with nano:

```
nano /opt/minecraft/startminecraft.sh
```

A new (blank) file will open. Paste the following:

```
#!/bin/bash
cd /opt/minecraft/ && java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
```

If you’re new to nano, you can save and close the file with “CTRL + X”, then “Y”, then hitting enter. This script navigates to the Minecraft server directory you created previously and runs the java command that starts the server. Next, you need to make it executable with the following command:

```
chmod +x startminecraft.sh
```

Then, you can start the server anytime with the following command:

```
/opt/minecraft/startminecraft.sh
```

But if/when you log out of the SSH session, the server will turn off. To keep the server up without staying logged in all the time, you can use a screen session, which keeps running until the actual server reboots or shuts down.

Start a screen session with this command:

```
screen -S minecraft
```

Once you’re in the screen session (it looks like you just started a new SSH session), you can use the bash script from earlier to start the server:

```
/opt/minecraft/startminecraft.sh
```

To get out of the screen session, press CTRL + A, then D. Even after you detach from the screen session, the server will keep running. You can safely log off your Ubuntu server now, and the Minecraft server you created will keep running.

But, if the Ubuntu server reboots or shuts off, the screen session won’t work anymore. So **to do everything we did before automatically at boot**, do the following:

Open the /etc/rc.local file:

```
nano /etc/rc.local
```

and add the following line above the “exit 0” line:

```
screen -dm -S minecraft /opt/minecraft/startminecraft.sh
exit 0
```

Save and close the file.
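On newer Ubuntu releases, /etc/rc.local may be missing or ignored because systemd is the init system. As an alternative (this unit file is our own sketch; the name minecraft.service and the paths simply follow this tutorial’s layout), you can create a systemd service instead:

```
[Unit]
Description=Minecraft Server
After=network.target

[Service]
WorkingDirectory=/opt/minecraft
ExecStart=/opt/minecraft/startminecraft.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/minecraft.service and enable it with “systemctl enable --now minecraft”. Note that with this approach the server runs as a service rather than inside a screen session, so you would view its console output with “journalctl -u minecraft” instead of screen.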

To access the Minecraft server console, just run the following command to attach to the screen session:

```
screen -r minecraft
```

That’s it for now. Congrats and have fun! You can now connect to your Minecraft server or configure/modify it.

### Configure your Ubuntu Server

You’ll, of course, need to set up your Ubuntu server and secure it, if you haven’t already done so. Follow the [guide we mentioned earlier][24] and google for more info. The configuration you need to do for the Minecraft server on your Ubuntu server is:

#### Enable and configure the firewall

First, if it’s not already enabled, enable the UFW firewall that you previously installed:

```
ufw enable
```

You should allow the default Minecraft server port:

```
ufw allow 25565/tcp
```

You should allow or deny other ports depending on how you use your server. For example, deny ports like 80 and 443 if you don’t use the server for hosting websites. Google a UFW/firewall guide for Ubuntu and you’ll get recommendations. Be careful when setting up your firewall: you may lock yourself out of your server if you block the SSH port.

Since 25565 is the default port, it often gets automatically scanned and attacked. You can prevent attacks by blocking access to anyone who’s not on your whitelist.

First, you need to enable whitelist mode in your [server.properties][25] file. To do that, open the file:

```
nano /opt/minecraft/server.properties
```

And change the “white-list” line to “true”:

```
white-list=true
```

Save and close the file.
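Alternatively, you can flip the flag with the same sed approach used for eula.txt earlier. The sketch below demonstrates it on a throwaway copy; on your server you would point it at /opt/minecraft/server.properties:

```shell
# Demonstrate the sed edit on a disposable copy of server.properties.
demo=$(mktemp -d)
printf 'white-list=false\n' > "$demo/server.properties"

# Flip white-list from false to true (keeps a .orig backup, like the eula step).
sed -i.orig 's/white-list=false/white-list=true/g' "$demo/server.properties"

cat "$demo/server.properties"
```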

Then restart your server (either by restarting your Ubuntu server or by running the start bash script again):

```
/opt/minecraft/startminecraft.sh
```

Access the Minecraft server console:

```
screen -r minecraft
```

And if you want someone to be able to join your server, add them to the whitelist with the following command:

```
whitelist add PlayerUsername
```

To remove them from the whitelist, use:

```
whitelist remove PlayerUsername
```

Exit the screen session (server console) with CTRL + A, then D. It’s worth noting that this denies access to everyone except the whitelisted usernames.

[![how to create a minecraft server](https://thishosting.rocks/wp-content/uploads/2018/01/create-a-minecraft-server.jpg)][26]

### How to Make a Minecraft Server – FAQs

We’ll answer some frequently asked questions about Minecraft servers and our guide.

#### How do I restart the Minecraft server?

If you followed every step of our tutorial, including enabling the server to start on boot, you can just reboot your Ubuntu server. If you didn’t set it up to start at boot, just run the start script again, which will restart the Minecraft server:

```
/opt/minecraft/startminecraft.sh
```

#### How do I configure my Minecraft server?

You can configure your server using the [server.properties][27] file. Check the Minecraft Wiki for more info, though you can leave everything as-is and it will work perfectly fine.

If you want to change the game mode, difficulty, and things like that, you can use the server console. Access the server console by running:

```
screen -r minecraft
```

And execute [commands][28] there. Commands like:

```
difficulty hard
```

```
gamemode survival @a
```

You may need to restart the server depending on which command you used. There are many more commands you can use; check the [wiki][29] for more.

#### How do I upgrade my Minecraft server?

If there’s a new release, you need to do the following.

Navigate to the minecraft directory:

```
cd /opt/minecraft
```

Download the latest version, for example 1.12.3, with wget:

```
wget https://s3.amazonaws.com/Minecraft.Download/versions/1.12.3/minecraft_server.1.12.3.jar
```

Next, run the new server once so it generates its files:

```
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.3.jar nogui
```

Finally, update your start script:

```
nano /opt/minecraft/startminecraft.sh
```

And update the version number accordingly:

```
#!/bin/bash
cd /opt/minecraft/ && java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.3.jar nogui
```

Now you can restart the server and everything should go well.
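The upgrade steps above can be collected into a small helper. This is our own sketch, not an official tool, and it only prints the commands (a dry run) so you can review them before running anything:

```shell
# Print the commands needed to upgrade to a given Minecraft version (dry run).
VERSION="${1:-1.12.3}"
JAR="minecraft_server.${VERSION}.jar"

echo "cd /opt/minecraft"
echo "wget https://s3.amazonaws.com/Minecraft.Download/versions/${VERSION}/${JAR}"
echo "java -Xms2048M -Xmx3472M -jar ${JAR} nogui"
echo "# remember to update the jar name in startminecraft.sh to ${JAR}"
```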

#### Why is your Minecraft server tutorial so long, and yet others are only 2 lines long?!

We tried to make this beginner-friendly and as detailed as possible. We also showed you how to make the Minecraft server persistent and start automatically at boot, and how to configure your server. Sure, you can start a Minecraft server with a couple of lines, but it would definitely suck, for more than one reason.

#### I don’t know Linux or anything you wrote about here, how do I make a Minecraft server?

Just read all of our article and copy and paste the commands. If you really don’t know how to do it all, [we can do it for you][30], or just get a [managed][31] server [provider][32] and let them do it for you.

#### How do I install mods on my server? How do I install plugins?

Our article is intended as a starting guide. You should check the [Minecraft wiki][33] for more info, or just google it. There are plenty of tutorials online.

--------------------------------------------------------------------------------

via: https://thishosting.rocks/how-to-make-a-minecraft-server/

作者:[ThisHosting.Rocks][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://thishosting.rocks
[1]:https://thishosting.rocks/how-to-make-a-minecraft-server/#reasons
[2]:https://thishosting.rocks/how-to-make-a-minecraft-server/#not-pc
[3]:https://thishosting.rocks/how-to-make-a-minecraft-server/#requirements
[4]:https://thishosting.rocks/how-to-make-a-minecraft-server/#make-minecraft-server
[5]:https://thishosting.rocks/how-to-make-a-minecraft-server/#persistent
[6]:https://thishosting.rocks/how-to-make-a-minecraft-server/#boot
[7]:https://thishosting.rocks/how-to-make-a-minecraft-server/#configure-minecraft-server
[8]:https://thishosting.rocks/how-to-make-a-minecraft-server/#faqs
[9]:https://thishosting.rocks/cheap-cloud-hosting-providers-comparison/
[10]:https://thishosting.rocks/go/vultr/
[11]:https://minecraft.gamepedia.com/Server/Requirements/Dedicated
[12]:https://thishosting.rocks/best-cheap-managed-vps/
[13]:https://www.chiark.greenend.org.uk/~sgtatham/putty/
[14]:https://mobaxterm.mobatek.net/
[15]:https://www.linode.com/docs/security/securing-your-server/
[16]:https://thishosting.rocks/go/vultr/
[17]:https://thishosting.rocks/go/vultr/
[18]:https://thishosting.rocks/ubuntu-18-04-new-features-release-date/
[19]:https://www.linode.com/docs/security/securing-your-server/
[20]:https://minecraft.net/en-us/download/server
[21]:https://minecraft.gamepedia.com/Commands
[22]:https://thishosting.rocks/go/vultr/
[23]:https://thishosting.rocks/how-to-make-a-minecraft-server/#configure-minecraft-server
[24]:https://www.linode.com/docs/security/securing-your-server/
[25]:https://minecraft.gamepedia.com/Server.properties
[26]:https://thishosting.rocks/wp-content/uploads/2018/01/create-a-minecraft-server.jpg
[27]:https://minecraft.gamepedia.com/Server.properties
[28]:https://minecraft.gamepedia.com/Commands
[29]:https://minecraft.gamepedia.com/Commands
[30]:https://thishosting.rocks/support/
[31]:https://thishosting.rocks/best-cheap-managed-vps/
[32]:https://thishosting.rocks/best-cheap-managed-vps/
[33]:https://minecraft.gamepedia.com/Minecraft_Wiki
@ -1,62 +0,0 @@
Linux Kernel 4.15: 'An Unusual Release Cycle'
============================================================

![Linux](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/background-penguin.png?itok=g8NBQs24 "Linux")

Linus Torvalds released version 4.15 of the Linux Kernel on Sunday, a week later than originally scheduled. Learn about key updates in this latest release. [Creative Commons Zero][1] Pixabay

Linus Torvalds [released version 4.15 of the Linux Kernel][7] on Sunday, and for the second release in a row, a week later than scheduled. The culprits for the late release were the Meltdown and Spectre bugs, as these two vulnerabilities forced developers to submit major patches well into what should have been the last cycle. Torvalds was not comfortable rushing the release, so he gave it another week.

Unsurprisingly, the first big bunch of patches worth mentioning were those designed to sidestep [Meltdown and Spectre][8]. To avoid Meltdown, a problem that affects Intel chips, [developers have implemented _Page Table Isolation_ (PTI)][9] for the x86 architecture. If for any reason you want to turn this off, you can use the `pti=off` kernel boot option.

Spectre v2 affects both Intel and AMD chips and, to avoid it, [the kernel now comes with the _retpoline_ mechanism][10]. Retpoline requires a version of GCC that supports the `-mindirect-branch=thunk-extern` functionality. As with PTI, the Spectre-inhibiting mechanism can be turned off; to do so, use the `spectre_v2=off` option at boot time. Although developers are working to address Spectre v1, at the time of writing there is still no solution, so there is no patch for this bug in 4.15.

The solution for Meltdown on ARM has also been pushed to the next development cycle, but there is [a remedy for the bug on PowerPC with the _RFI flush of L1-D cache_ feature][11] included in this release.

An interesting side effect of all of the above is that new kernels now come with a _/sys/devices/system/cpu/vulnerabilities/_ virtual directory. This directory shows the vulnerabilities affecting your CPU and the remedies currently applied.
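You can dump that directory with a few lines of shell (a sketch; on kernels older than 4.15 the directory simply won't exist):

```shell
# Print each vulnerability entry and its mitigation status.
list_vulnerabilities() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "no vulnerabilities directory at $dir (kernel too old?)"
        return 0
    fi
    for entry in "$dir"/*; do
        if [ -f "$entry" ]; then
            printf '%s: %s\n' "$(basename "$entry")" "$(cat "$entry")"
        fi
    done
    return 0
}

list_vulnerabilities /sys/devices/system/cpu/vulnerabilities
```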

The issues with buggy chips (and the manufacturers that keep things like this secret) have revived the call for the development of viable open source alternatives. This brings us to the partial support for [RISC-V][12] chips that has now been merged into the mainline kernel. RISC-V is an open instruction set architecture that allows manufacturers to create their own implementations of RISC-V chips, and it has resulted in several open sourced chips. While RISC-V chips are currently used mainly in embedded devices, powering things like smart hard disks or Arduino-like development boards, RISC-V proponents argue that the architecture is also well-suited for use on personal computers and even in multi-node supercomputers.

[The support for RISC-V][13], as mentioned above, is still incomplete and includes the architecture code but no device drivers. This means that, although a Linux kernel will run on RISC-V, there is no significant way to actually interact with the underlying hardware. That said, RISC-V is not vulnerable to any of the bugs that have dogged other closed architectures, and development of its support is progressing at a brisk pace, as [the RISC-V Foundation has the support of some of the industry's biggest heavyweights][14].

### Other stuff that's new in kernel 4.15

Torvalds has often declared he likes things boring. Fortunately for him, he says, apart from the Spectre and Meltdown messes, most of the other things that happened in 4.15 were very much run of the mill, such as incremental improvements for drivers, support for new devices, and so on. However, there were a few more things worth pointing out:

* [AMD got support for Secure Encrypted Virtualization][3]. This allows the kernel to fence off the memory a virtual machine is using by encrypting it. The encrypted memory can only be decrypted by the virtual machine that is using it; not even the hypervisor can see inside it. This means that data being worked on by VMs in the cloud, for example, is safe from being spied on by any other process outside the VM.
* AMD GPUs get a substantial boost thanks to [the inclusion of _display code_][4]. This gives mainline support to Radeon RX Vega and Raven Ridge cards and also implements HDMI/DP audio for AMD cards.
* Raspberry Pi aficionados will be glad to know that [the 7'' touchscreen is now natively supported][5], which is guaranteed to lead to hundreds of fun projects.

To find out more, you can check out the write-ups at [Kernel Newbies][15] and [Phoronix][16].

_Learn more about Linux through the free ["Introduction to Linux"][6] course from The Linux Foundation and edX._

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle

作者:[PAUL BROWN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/background-penguinpng
[3]:https://git.kernel.org/linus/33e63acc119d15c2fac3e3775f32d1ce7a01021b
[4]:https://git.kernel.org/torvalds/c/f6705bf959efac87bca76d40050d342f1d212587
[5]:https://git.kernel.org/linus/2f733d6194bd58b26b705698f96b0f0bd9225369
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[7]:https://lkml.org/lkml/2018/1/28/173
[8]:https://meltdownattack.com/
[9]:https://git.kernel.org/linus/5aa90a84589282b87666f92b6c3c917c8080a9bf
[10]:https://git.kernel.org/linus/76b043848fd22dbf7f8bf3a1452f8c70d557b860
[11]:https://git.kernel.org/linus/aa8a5e0062ac940f7659394f4817c948dc8c0667
[12]:https://riscv.org/
[13]:https://git.kernel.org/torvalds/c/b293fca43be544483b6488d33ad4b3ed55881064
[14]:https://riscv.org/membership/
[15]:https://kernelnewbies.org/Linux_4.15
[16]:https://www.phoronix.com/scan.php?page=search&q=Linux+4.15
@ -1,121 +0,0 @@
translated by cyleft

Linux ln Command Tutorial for Beginners (5 Examples)
======

Sometimes, while working on the command line, you need to create links between files. This can be achieved using a dedicated command, dubbed **ln**. In this tutorial, we will discuss the basics of this tool using some easy-to-understand examples. But before we do that, it's worth mentioning that all examples here have been tested on an Ubuntu 16.04 machine.

### Linux ln command

As you may have gathered by now, the ln command lets you make links between files. Following is the syntax (or rather, the different forms) of this tool:

```
ln [OPTION]... [-T] TARGET LINK_NAME (1st form)
ln [OPTION]... TARGET (2nd form)
ln [OPTION]... TARGET... DIRECTORY (3rd form)
ln [OPTION]... -t DIRECTORY TARGET... (4th form)
```

And here's how the tool's man page explains it:

```
In the 1st form, create a link to TARGET with the name LINK_NAME. In the 2nd form, create a link
to TARGET in the current directory. In the 3rd and 4th forms, create links to each TARGET in
DIRECTORY. Create hard links by default, symbolic links with --symbolic. By default, each
destination (name of new link) should not already exist. When creating hard links, each TARGET
must exist. Symbolic links can hold arbitrary text; if later resolved, a relative link is
interpreted in relation to its parent directory.
```

The following Q&A-styled examples will give you a better idea of how the ln command works. But before that, it's worth understanding the [difference between hard links and soft links][1].

### Q1. How to create a hard link using ln?

That's pretty straightforward - all you have to do is use the ln command in the following way:

```
ln [file] [hard-link-to-file]
```

For example:

```
ln test.txt test_hard_link.txt
```

[![How to create a hard link using ln][2]][3]

So you can see a hard link was created with the name test_hard_link.txt.

### Q2. How to create a soft/symbolic link using ln?

For this, use the -s command line option:

```
ln -s [file] [soft-link-to-file]
```

For example:

```
ln -s test.txt test_soft_link.txt
```

[![How to create soft/symbolic link using ln][4]][5]

The test_soft_link.txt file is a soft/symbolic link, as [confirmed][6] by its sky blue text color.
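You can also verify the difference between the two link types without relying on colors: a hard link shares an inode with the original file, while a soft link is a separate file that merely points at a path. A quick check, run in a throwaway directory (uses GNU stat, as found on Ubuntu):

```shell
# Create a file plus one hard link and one soft link, then compare inodes.
dir=$(mktemp -d)
echo "hello" > "$dir/test.txt"
ln "$dir/test.txt" "$dir/test_hard_link.txt"
ln -s "$dir/test.txt" "$dir/test_soft_link.txt"

orig_inode=$(stat -c %i "$dir/test.txt")
hard_inode=$(stat -c %i "$dir/test_hard_link.txt")
target=$(readlink "$dir/test_soft_link.txt")

echo "original inode: $orig_inode, hard link inode: $hard_inode"
echo "soft link points to: $target"
```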

### Q3. How to make ln remove existing destination files of same name?

By default, ln won't let you create a link if a file of the same name already exists in the destination directory.

[![ln command example][7]][8]

However, if you want, you can make ln override this behavior by using the **-f** command line option.

[![How to make ln remove existing destination files of same name][9]][10]

**Note**: You can use the **-i** command line option if you want this deletion process to be interactive.

### Q4. How to make ln create backup of existing files with same name?

If you don't want ln to delete existing files of the same name, you can make it create backups of these files. This can be achieved using the **-b** command line option. Backup files created this way contain a tilde (~) at the end of their name.

[![How to make ln create backup of existing files with same name][11]][12]

A particular destination directory (other than the current one) can be specified using the **-t** command line option. For example:

```
ls test* | xargs ln -s -t /home/himanshu/Desktop/
```

The aforementioned command will create links to all test* files (present in the current directory) and put them in the Desktop directory.

### Conclusion

Agreed, **ln** isn't something you'll require on a daily basis, especially if you're a newbie. But it's a helpful command to know about, as you never know when it'll save your day. We've discussed some useful command line options the tool offers. Once you're done with these, you can learn more about ln by heading to its [man page][13].

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-ln-command/

作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://medium.com/meatandmachines/explaining-the-difference-between-hard-links-symbolic-links-using-bruce-lee-32828832e8d3
[2]:https://www.howtoforge.com/images/command-tutorial/ln-hard-link.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/ln-hard-link.png
[4]:https://www.howtoforge.com/images/command-tutorial/ln-soft-link.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/ln-soft-link.png
[6]:https://askubuntu.com/questions/17299/what-do-the-different-colors-mean-in-ls
[7]:https://www.howtoforge.com/images/command-tutorial/ln-file-exists.png
[8]:https://www.howtoforge.com/images/command-tutorial/big/ln-file-exists.png
[9]:https://www.howtoforge.com/images/command-tutorial/ln-f-option.png
[10]:https://www.howtoforge.com/images/command-tutorial/big/ln-f-option.png
[11]:https://www.howtoforge.com/images/command-tutorial/ln-b-option.png
[12]:https://www.howtoforge.com/images/command-tutorial/big/ln-b-option.png
[13]:https://linux.die.net/man/1/ln
@ -1,133 +0,0 @@
Custom Embedded Linux Distributions
======

### Why Go Custom?

In the past, many embedded projects used off-the-shelf distributions and stripped them down to bare essentials, for a number of reasons. First, removing unused packages reduced storage requirements. Embedded systems are typically short of storage, and what storage they have, in non-volatile memory, can require copying large amounts of the OS to memory to run. Second, removing unused packages reduced possible attack vectors; there is no sense hanging on to potentially vulnerable packages if you don't need them. Finally, removing unused packages reduced distribution management overhead: dependencies between packages mean keeping them in sync if any one package requires an update from the upstream distribution, and that can be a validation nightmare.

Yet, starting with an existing distribution and removing packages isn't as easy as it sounds. Removing one package might break dependencies held by a variety of other packages, and dependencies can change in the upstream distribution management. Additionally, some packages simply cannot be removed without great pain, due to their integrated nature within the boot or runtime process. All of this takes control of the platform outside the project and can lead to unexpected delays in development.

A popular alternative is to build a custom distribution using build tools available from an upstream distribution provider. Both Gentoo and Debian provide options for this type of bottom-up build. The most popular of these is probably the Debian debootstrap utility. It retrieves prebuilt core components and allows users to cherry-pick the packages of interest in building their platforms. But, debootstrap originally was only for x86 platforms. Although there are ARM (and possibly other) options now, debootstrap and Gentoo's catalyst still take dependency management away from the local project.
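For illustration, a typical debootstrap invocation for a minimal ARM root filesystem might look like the following. The suite, mirror and target path here are assumptions for the sake of the example, and the sketch only composes and prints the command, since actually running debootstrap requires root privileges and network access:

```shell
# Compose (but don't run) a debootstrap command for an armhf rootfs.
SUITE=stretch                         # example Debian release
ARCH=armhf                            # example target architecture
TARGET=./rootfs                       # where the root filesystem will be built
MIRROR=http://deb.debian.org/debian   # package mirror

# --foreign performs only the first (download/unpack) stage, which is what
# you want when the build host's architecture differs from the target's.
CMD="debootstrap --arch=$ARCH --foreign $SUITE $TARGET $MIRROR"
echo "$CMD"
```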

Some people will argue that letting someone else manage the platform software (like Android) is much easier than doing it yourself. But, those distributions are general-purpose, and when you're sitting on a lightweight, resource-limited IoT device, you may think twice about any advantage that is taken out of your hands.

### System Bring-Up Primer

A custom Linux distribution requires a number of software components. The first is the toolchain: a collection of tools for compiling software, including (but not limited to) a compiler, linker, binary manipulation tools and standard C library. Toolchains are built specifically for a target hardware device. A toolchain built on an x86 system that is intended for use with a Raspberry Pi is called a cross-toolchain. When working with small embedded devices with limited memory and storage, it's always best to use a cross-toolchain. Note that even applications written for a specific purpose in a scripted language like JavaScript will need to run on a software platform that must itself be compiled with a cross-toolchain.

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f1.png)

Figure 1. Compile Dependencies and Boot Order

The cross-toolchain is used to build software components for the target hardware. The first component needed is a bootloader. When power is applied to a board, the processor (depending on design) attempts to jump to a specific memory location to start running software. That memory location is where a bootloader is stored. Hardware can have a built-in bootloader that can be run directly from its storage location, or the bootloader may be copied into memory first before it is run. There also can be multiple bootloaders. A first-stage bootloader would reside on the hardware in NAND or NOR flash, for example. Its sole purpose would be to set up the hardware so a second-stage bootloader, such as one stored on an SD card, can be loaded and run.

Bootloaders have enough knowledge to get the hardware to the point where it can load Linux into memory and jump to it, effectively handing control over to Linux. Linux is an operating system. This means that, by design, it doesn't actually do anything other than monitor the hardware and provide services to higher layer software—aka applications. The [Linux kernel][1] often is accompanied by a variety of firmware blobs. These are precompiled software objects, often containing proprietary IP (intellectual property), for devices used with the hardware platform. When building a custom distribution, it may be necessary to acquire any firmware blobs not provided by the Linux kernel source tree before beginning compilation of the kernel.

Applications are stored in the root filesystem. The root filesystem is constructed by compiling and collecting a variety of software libraries, tools, scripts and configuration files. Collectively, these all provide the services, such as network configuration and USB device mounting, required by the applications the project will run.

In summary, a complete system build requires the following components:
|
||||
|
||||
1. A cross-toolchain.
|
||||
|
||||
2. One or more bootloaders.
|
||||
|
||||
3. The Linux kernel and associated firmware blobs.
|
||||
|
||||
4. A root filesystem populated with libraries, tools and utilities.
|
||||
|
||||
5. Custom applications.
|
||||
|
||||
### Start with the Right Tools

The components of the cross-toolchain can be built manually, but it's a complex process. Fortunately, tools exist that make this process easier. The best of them is probably [Crosstool-NG][2]. This project utilizes the same kconfig menu system used by the Linux kernel to configure the bits and pieces of the toolchain. The key to using this tool is finding the correct configuration items for the target platform. This typically includes the following items:

1. The target architecture, such as ARM or x86.
2. Endianness: little (typically Intel) or big (typically ARM or others).
3. CPU type as it's known to the compiler, such as GCC's use of either -mcpu or --with-cpu.
4. The floating point type supported, if any, by the CPU, such as GCC's use of either -mfpu or --with-fpu.
5. Specific version information for the binutils package, the C library and the C compiler.
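
As a sketch of the workflow (the sample name below is illustrative; run `ct-ng list-samples` to see what your Crosstool-NG version actually ships), a typical session looks like this:

```
$ ct-ng list-samples
$ ct-ng arm-unknown-linux-gnueabi
$ ct-ng menuconfig
$ ct-ng build
```

The second command loads a bundled sample configuration as a starting point, `menuconfig` opens the kconfig menu for fine-tuning, and `build` produces the toolchain (by default under `~/x-tools`).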

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f2.png)

Figure 2. Crosstool-NG Configuration Menu

The first four are typically available from the processor maker's documentation. It can be hard to find these for relatively new processors, but for the Raspberry Pi or BeagleBoards (and their offspring and off-shoots), you can find the information online at places like the [Embedded Linux Wiki][3].

The versions of the binutils, C library and C compiler are what will separate the toolchain from any others that might be provided from third parties. First, there are multiple providers of each of these things. Linaro provides bleeding-edge versions for newer processor types, while working to merge support into upstream projects like the GNU C Library. Although you can use a variety of providers, you may want to stick to the stock GNU toolchain or the Linaro versions of the same.

Another important selection in Crosstool-NG is the version of the Linux kernel. This selection gets headers for use with various toolchain components, but it does not have to be the same as the Linux kernel you will boot on the target hardware. It's important to choose a kernel that is not newer than the target hardware's kernel. When possible, pick a long-term support kernel that is older than the kernel that will be used on the target hardware.

For most developers new to custom distribution builds, the toolchain build is the most complex process. Fortunately, binary toolchains are available for many target hardware platforms. If building a custom toolchain becomes problematic, search online at places like the [Embedded Linux Wiki][4] for links to prebuilt toolchains.

### Booting Options

The next component to focus on after the toolchain is the bootloader. A bootloader sets up hardware so it can be used by ever more complex software. A first-stage bootloader is often provided by the target platform maker, burned into on-hardware storage like an EEPROM or NOR flash. The first-stage bootloader will make it possible to boot from, for example, an SD card. The Raspberry Pi has such a bootloader, which makes creating a custom bootloader unnecessary.

Despite that, many projects add a secondary bootloader to perform a variety of tasks. One such task could be to provide a splash animation without using the Linux kernel or userspace tools like plymouth. A more common secondary bootloader task is to make network-based boot or PCI-connected disks available. In those cases, a tertiary bootloader, such as GRUB, may be necessary to get the system running.

Most important, bootloaders load the Linux kernel and start it running. If the first-stage bootloader doesn't provide a mechanism for passing kernel arguments at boot time, a second-stage bootloader may be necessary.

A number of open-source bootloaders are available. The [U-Boot project][5] often is used for ARM platforms like the Raspberry Pi. CoreBoot typically is used for x86 platforms like the Chromebook. Bootloaders can be very specific to target hardware. The choice of bootloader will depend on overall project requirements and target hardware; lists of open-source bootloaders can be found online.

### Now Bring the Penguin

The bootloader will load the Linux kernel into memory and start it running. Linux is like an extended bootloader: it continues hardware setup and prepares to load higher-level software. The core of the kernel will set up and prepare memory for sharing between applications and hardware, prepare task management to allow multiple applications to run at the same time, initialize hardware components that were not configured by the bootloader or were configured incompletely, and bring up interfaces for human interaction. The kernel may not be configured to do this on its own, however. It may include an embedded lightweight filesystem, known as the initramfs or initrd, that can be created separately from the kernel to assist in hardware setup.

Another thing the kernel handles is loading binary blobs, known generically as firmware, onto hardware devices. Firmware consists of precompiled object files, in formats specific to a particular device, used to initialize hardware in places that the bootloader and kernel cannot access. Many such firmware objects are available from the Linux kernel source repositories, but many others are available only from specific hardware vendors. Examples of devices that often provide their own firmware include digital TV tuners or WiFi network cards.

Firmware may be loaded from the initramfs or may be loaded after the kernel starts the init process from the root filesystem. However, when creating a custom Linux distribution, the kernel build is typically the point at which any needed firmware is gathered.

### Lightweight Core Platforms
The last thing the Linux kernel does is to attempt to run a specific program called the init process. This can be named init or linuxrc or the name of the program can be passed to the kernel by the bootloader. The init process is stored in a file system that the kernel can access. In the case of the initramfs, the file system is stored in memory (either by the kernel itself or by the bootloader placing it there). But the initramfs is not typically complete enough to run more complex applications. So another file system, known as the root file system, is required.
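
To make the hand-off concrete, here is a minimal illustrative `/init` script of the kind an initramfs might carry (a sketch only: a real init must also locate the final root filesystem and hand control to it):

```
#!/bin/sh
# Minimal illustrative /init for an initramfs (sketch, not production-ready).
# Mount the pseudo-filesystems the rest of userspace expects.
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev
# A real init would now find the root device, mount it and switch_root to it;
# here we simply drop to a shell.
exec /bin/sh
```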

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12278f3.png)

Figure 3\. Buildroot Configuration Menu

The initramfs filesystem can be built using the Linux kernel itself, but more commonly, it is created using a project called [BusyBox][6]. BusyBox combines lightweight implementations of common Unix utilities, such as grep and awk, into a single binary in order to reduce the size of the filesystem itself. BusyBox often is used to jump-start the root filesystem's creation.

But, BusyBox is purposely lightweight. It isn't intended to provide every tool that a target platform will need, and even those it does provide can be feature-reduced. BusyBox has a sister project known as [Buildroot][7], which can be used to get a complete root filesystem, providing a variety of libraries, utilities and scripting languages. Like Crosstool-NG and the Linux kernel, both BusyBox and Buildroot allow custom configuration using the kconfig menu system. More important, the Buildroot system handles dependencies automatically, so selection of a given utility will guarantee that any software it requires also will be built and installed in the root filesystem.
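
For illustration, a Buildroot configuration is ultimately just a collection of kconfig symbols; a fragment of a defconfig might look like the following (the symbol names follow Buildroot's option namespace but should be treated as illustrative, since they vary with target and Buildroot version):

```
# Illustrative Buildroot defconfig fragment (not tied to real hardware)
BR2_arm=y
BR2_cortex_a7=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_PACKAGE_PYTHON3=y
BR2_TARGET_ROOTFS_EXT2=y
```

Selecting a package symbol such as `BR2_PACKAGE_PYTHON3=y` is enough to pull in everything that package depends on, which is the automatic dependency handling described above.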

Buildroot can generate a root filesystem archive in a variety of formats. However, it is important to note that the filesystem is only archived. Individual utilities and libraries are not packaged in either Debian or RPM formats. Using Buildroot will generate a root filesystem image, but its contents are not managed packages. Despite this, Buildroot does provide support for both the opkg and rpm package managers. This means custom applications that will be installed on the root filesystem can be package-managed, even if the root filesystem itself is not.

### Cross-Compiling and Scripting

One of Buildroot's features is the ability to generate a staging tree. This directory contains libraries and utilities that can be used to cross-compile other applications. With a staging tree and the cross toolchain, it becomes possible to compile additional applications outside Buildroot on the host system instead of on the target platform. Using rpm or opkg, those applications then can be installed to the root filesystem on the target at runtime using package management software.

Most custom systems are built around the idea of building applications with scripting languages. If scripting is required on the target platform, a variety of choices are available from Buildroot, including Python, PHP, Lua and JavaScript via Node.js. Support also exists for applications requiring encryption using OpenSSL.

### What's Next

The Linux kernel and bootloaders are compiled like most applications. Their build systems are designed to build a specific bit of software. Crosstool-NG and Buildroot are metabuilds. A metabuild is a wrapper build system around a collection of software, each with its own build system. Alternatives to these include [Yocto][8] and [OpenEmbedded][9]. The benefit of Buildroot is the ease with which it can be wrapped by an even higher-level metabuild to automate customized Linux distribution builds. Doing this opens the option of pointing Buildroot to project-specific cache repositories. Using cache repositories can speed development and offers snapshot builds without worrying about changes to upstream repositories.

An example implementation of a higher-level build system is [PiBox][10]. PiBox is a metabuild wrapped around all of the tools discussed in this article. Its purpose is to add a common GNU Make target construction around all the tools in order to produce a core platform on which additional software can be built and distributed. The PiBox Media Center and kiosk projects are implementations of application-layer software installed on top of the core platform to produce a purpose-built platform. The [Iron Man project][11] is intended to extend these applications for home automation, integrated with voice control and IoT management.

But PiBox is nothing without these core software tools and could never run without an in-depth understanding of a complete custom distribution build process. And, PiBox could not exist without the long-term dedication of the teams of developers for these projects who have made custom-distribution-building a task for the masses.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/custom-embedded-linux-distributions

作者:[Michael J.Hammel][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/user/1000879
[1]:https://www.kernel.org
[2]:http://crosstool-ng.github.io
[3]:https://elinux.org/Main_Page
[4]:https://elinux.org/Main_Page
[5]:https://www.denx.de/wiki/U-Boot
[6]:https://busybox.net
[7]:https://buildroot.org
[8]:https://www.yoctoproject.org
[9]:https://www.openembedded.org/wiki/Main_Page
[10]:https://www.piboxproject.com
[11]:http://redmine.graphics-muse.org/projects/ironman/wiki/Getting_Started

@ -0,0 +1,62 @@

How to Check Your Linux PC for Meltdown or Spectre Vulnerability
======

![](https://www.maketecheasier.com/assets/uploads/2018/01/lmc-feat.jpg)

One of the scariest realities of the Meltdown and Spectre vulnerabilities is just how widespread they are. Virtually every modern computer is affected in some way. The real question is how exactly are _you_ affected? Every system is at a different state of vulnerability depending on which software has and hasn’t been patched.

Since Meltdown and Spectre are both fairly new and things are moving quickly, it’s not all that easy to tell what you need to look out for or what’s been fixed on your system. There are a couple of tools available that can help. They’re not perfect, but they can help you figure out what you need to know.

### Simple Test

One of the top Linux kernel developers provided a simple way of checking the status of your system in regards to the Meltdown and Spectre vulnerabilities. This one is the easiest, and is most concise, but it doesn’t work on every system. Some distributions decided not to include support for this report. Even still, it’s worth a shot to check.

```
grep . /sys/devices/system/cpu/vulnerabilities/*
```

![Kernel Vulnerability Check][1]

You should see output similar to the image above. Chances are, you’ll see that at least one of the vulnerabilities remains unchecked on your system. This is especially true since Linux hasn’t made any progress in mitigating Spectre v1 yet.
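
Because each sysfs file reports a plain `name:status` line, the output is easy to post-process. The snippet below counts entries still reported as vulnerable; it uses hard-coded sample text standing in for the real files, since their contents differ from machine to machine:

```shell
# Hypothetical sample of what `grep .` over the vulnerabilities files
# might print on a partially patched system.
sample='/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable: Retpoline'

# Count the entries whose status still starts with "Vulnerable".
printf '%s\n' "$sample" | grep -c ':Vulnerable'   # → 2
```

On a real system you would run the `grep` pipeline directly over the sysfs files instead of the sample text.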

### The Script

If the above method didn’t work for you, or you want a more detailed report of your system, a developer has created a shell script that will check your system to see what exactly it is susceptible to and what has been done to mitigate Meltdown and Spectre.

In order to get the script, make sure you have Git installed on your system, and then clone the script’s repository into a directory that you don’t mind running it out of.

```
cd ~/Downloads
git clone https://github.com/speed47/spectre-meltdown-checker.git
```

It’s not a large repository, so it should only take a few seconds to clone. When it’s done, enter the newly created directory and run the provided script.

```
cd spectre-meltdown-checker
./spectre-meltdown-checker.sh
```

You’ll see a bunch of junk spit out into the terminal. Don’t worry, it’s not too hard to follow. First, the script checks your hardware, and then it runs through the three vulnerabilities: Spectre v1, Spectre v2, and Meltdown. Each gets its own section. In between, the script tells you plainly whether you are vulnerable to each of the three.

![Meltdown Spectre Check Script Ubuntu][2]

Each section provides you with a breakdown of potential mitigations and whether or not they have been applied. Here’s where you need to exercise a bit of common sense. The determinations that it gives might seem like they’re in conflict. Do a bit of digging to see if the fixes that it says are applied actually do fully mitigate the problem or not.

### What This Means

So, what’s the takeaway? Most Linux systems have been patched against Meltdown. If you haven’t updated yet for that, you should. Spectre v1 is still a big problem, and not a lot of progress has been made there as of yet. Spectre v2 will depend a lot on your distribution and what patches it’s chosen to apply. Regardless of what either tool says, nothing is perfect. Do your research and stay on the lookout for information coming straight from the kernel and distribution developers.

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/check-linux-meltdown-spectre-vulnerability/

作者:[Nick Congleton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.maketecheasier.com/author/nickcongleton/
[1]:https://www.maketecheasier.com/assets/uploads/2018/01/lmc-kernel-check.jpg (Kernel Vulnerability Check)
[2]:https://www.maketecheasier.com/assets/uploads/2018/01/lmc-script.jpg (Meltdown Spectre Check Script Ubuntu)

@ -1,85 +0,0 @@

translating---geekpi

How to reload .vimrc file without restarting vim on Linux/Unix
======

I am a new vim text editor user. I usually load ~/.vimrc for configuration. After editing my .vimrc file, I need to reload it without having to quit my Vim session. How do I edit my .vimrc file and reload it without having to restart Vim on a Linux or Unix-like system?

Vim is a free and open-source text editor that is upward compatible with Vi. It can be used to edit all kinds of good old text. It is especially useful for editing programs written in C/Perl/Python. One can use it for editing Linux/Unix configuration files. ~/.vimrc is your personal Vim initialization and customization file.

### How to reload .vimrc file without restarting vim session

The procedure to reload .vimrc in Vim without restarting:

1. Start vim text editor by typing: `vim filename`
2. Load the vim config file by typing the vim command: `Esc` followed by `:vs ~/.vimrc`
3. Add customizations like:
```
filetype indent plugin on
set number
syntax on
```
4. Use `:wq` to save the file and exit from the ~/.vimrc window.
5. Reload ~/.vimrc by typing any one of the following commands:
```
:so $MYVIMRC
```
OR
```
:source ~/.vimrc
```

[![How to reload .vimrc file without restarting vim][1]][1]
Fig.01: Editing ~/.vimrc and reloading it when needed without quitting vim, so that you can continue editing your program

The `:so[urce]! {file}` vim command reads vim commands from the given file, such as ~/.vimrc. These are commands that are executed as if you typed them in Normal mode. When used after :global, :argdo, :windo, :bufdo, in a loop, or when another command follows, the display won't be updated while executing the commands.
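
A related convenience (not part of the original question, and assuming a reasonably modern Vim where `$MYVIMRC` is set) is to have Vim re-source the file automatically every time it is saved:

```
" Auto-reload the vimrc whenever it is written; the 'nested' flag lets
" autocommands triggered by :source still fire.
augroup vimrc_reload
  autocmd!
  autocmd BufWritePost $MYVIMRC nested source $MYVIMRC
augroup END
```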

### How to map keys to edit and reload ~/.vimrc

Append the following in your ~/.vimrc file:

```
" Edit vim configuration file
nnoremap confe :e $MYVIMRC<CR>
" Reload vim configuration file
nnoremap confr :source $MYVIMRC<CR>
```

Now just press `Esc` followed by `confe` to edit the ~/.vimrc file. To reload, type `Esc` followed by `confr`. Some people like to use the `<Leader>` key in a .vimrc file, so the above mapping becomes:

```
" Edit vim configuration file
nnoremap <Leader>ve :e $MYVIMRC<CR>
"
" Reload vim configuration file
nnoremap <Leader>vr :source $MYVIMRC<CR>
```

The `<Leader>` key is mapped to \ by default. So you just press \ followed by ve to edit the file. To reload the ~/.vimrc file, you press \ followed by vr.

And there you have it: you can reload your .vimrc file without ever restarting vim.

### About the author

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][2], [Facebook][3], [Google+][4]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via [my RSS/XML feed][5]**.

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-reload-vimrc-file-without-restarting-vim-on-linux-unix/

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/media/new/faq/2018/02/How-to-reload-.vimrc-file-without-restarting-vim.jpg
[2]:https://twitter.com/nixcraft
[3]:https://facebook.com/nixcraft
[4]:https://plus.google.com/+CybercitiBiz
[5]:https://www.cyberciti.biz/atom/atom.xml

@ -1,191 +0,0 @@

How do I edit files on the command line?
======

In this tutorial, we will show you how to edit files on the command line. This article covers three command line editors, vi (or vim), nano, and emacs.

#### Editing Files with Vi or Vim Command Line Editor

To edit files on the command line, you can use an editor such as vi. To open the file, run

```
vi /path/to/file
```

Now you see the contents of the file, if there are any. (Note that the file is created if it does not exist yet.)

The most important commands in vi are these:

Press `i` to enter the `Insert` mode. Now you can type in your text.

To leave the Insert mode press `ESC`.

To delete the character that is currently under the cursor you must press `x` (and you must not be in Insert mode because if you are you will insert the character `x` instead of deleting the character under the cursor). So if you have just opened the file with vi, you can immediately use `x` to delete characters. If you are in Insert mode you have to leave it first with `ESC`.

If you have made changes and want to save the file, press `:x` (again you must not be in Insert mode. If you are, press `ESC` to leave it).

If you haven't made any changes, press `:q` to leave the file (but you must not be in Insert mode).

If you have made changes, but want to leave the file without saving the changes, press `:q!` (but you must not be in Insert mode).

Please note that during all these operations you can use your keyboard's arrow keys to navigate the cursor through the text.

So that was all about the vi editor. Please note that the vim editor also works more or less in the same way, although if you'd like to know vim in depth, head [here][1].

#### Editing Files with Nano Command Line Editor

Next up is the Nano editor. You can invoke it simply by running the 'nano' command:

```
nano
```

Here's how the nano UI looks:

[![Nano command line editor][2]][3]

You can also launch the editor directly with a file.

```
nano [filename]
```

For example:

```
nano test.txt
```

[![Open a file in nano][4]][5]

The UI, as you can see, is broadly divided into four parts. The line at the top shows editor version, file being edited, and the editing status. Then comes the actual edit area where you'll see the contents of the file. The highlighted line below the edit area shows important messages, and the last two lines are really helpful for beginners as they show keyboard shortcuts that you use to perform basic tasks in nano.

So here's a quick list of some of the shortcuts that you should know upfront.

Use arrow keys to navigate the text, the Backspace key to delete text, and **Ctrl+o** to save the changes you make. When you try saving the changes, nano will ask you for confirmation (see the line below the main editor area in screenshot below):

[![Save file in nano][6]][7]

Note that at this stage, you also have an option to save in different OS formats. Pressing **Alt+d** enables the DOS format, while **Alt+m** enables the Mac format.

[![Save file in DOS format][8]][9]

Press enter and your changes will be saved.

[![File has been saved][10]][11]

Moving on, to cut and paste lines of text use **Ctrl+k** and **Ctrl+u**. These keyboard shortcuts can also be used to cut and paste individual words, but you'll have to select the words first, something you can do by pressing **Alt+A** (with the cursor under the first character of the word) and then using the arrow keys to select the complete word.

Now come search operations. A simple search can be initiated using **Ctrl+w**, while a search and replace operation can be done using **Ctrl+\**.

[![Search in files with nano][12]][13]

So those were some of the basic features of nano that should give you a head start if you're new to the editor. For more details, read our comprehensive coverage [here][14].

#### Editing Files with Emacs Command Line Editor

Next comes **Emacs**. If it's not already installed, you can install the editor on your system using the following command:

```
sudo apt-get install emacs
```

Like nano, you can directly open a file to edit in emacs in the following way:

```
emacs -nw [filename]
```

**Note**: The **-nw** flag makes sure emacs launches in the terminal itself, instead of in a separate window, which is the default behavior.

For example:

```
emacs -nw test.txt
```

Here's the editor's UI:

[![Open file in emacs][15]][16]

Like nano, the emacs UI is also divided into several parts. The first part is the top menu area, which is similar to the one you'd see in graphical applications. Then comes the main edit area, where the text (of the file you've opened) is displayed.

Below the edit area sits another highlighted bar that shows things like the name of the file, the editing mode ('Text' in the screenshot above), and the status (`**` for modified, `-` for non-modified, and `%%` for read only). Then comes the final area, where you provide input instructions and see output as well.

Now coming to basic operations: after making changes, if you want to save them, use **Ctrl+x** followed by **Ctrl+s**. The last section will show you a message saying something along the lines of '**Wrote ........**'. Here's an example:

[![Save file in emacs][17]][18]

Now, if you want to discard changes and quit the editor, use **Ctrl+x** followed by **Ctrl+c**. The editor will confirm this through a prompt - see screenshot below:

[![Discard changes in emacs][19]][20]

Type 'n' followed by a 'yes' and the editor will quit without saving the changes.

Please note that Emacs represents 'Ctrl' as 'C' and 'Alt' as 'M'. So, for example, whenever you see something like C-x, it means Ctrl+x.

As for other basic editing operations, deleting is simple, as it works through the Backspace/Delete keys that most of us are already used to. However, there are shortcuts that make your deleting experience smooth. For example, use **Ctrl+k** for deleting a complete line, **Alt+d** for deleting a word, and **Alt+k** for a sentence.

Undoing is achieved through **Ctrl+x** followed by **u**, and to re-do, press **Ctrl+g** followed by **Ctrl+_**. Use **Ctrl+s** for forward search and **Ctrl+r** for reverse search.

[![Search in files with emacs][21]][22]

Moving on, to launch a replace operation, use the Alt+Shift+% keyboard shortcut. You'll be asked for the word you want to replace. Enter it. Then the editor will ask you for the replacement. For example, the following screenshot shows emacs asking the user about the replacement for the word 'This'.

[![Replace text with emacs][23]][24]

Input the replacement text and press Enter. For each replacement operation emacs carries out, it'll seek your permission first:

[![Confirm text replacement][25]][26]

Press 'y' and the word will be replaced.

[![Press y to confirm][27]][28]

So that's pretty much all the basic editing operations that you should know to start using emacs. Oh, and yes, we haven't yet discussed how to access those menus at the top. They can be accessed using the F10 key.

[![Basic editing operations][29]][30]

To come out of these menus, press the Esc key three times.


--------------------------------------------------------------------------------

via: https://www.howtoforge.com/faq/how-to-edit-files-on-the-command-line

作者:[falko][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/vim-basics
[2]:https://www.howtoforge.com/images/command-tutorial/nano-basic-ui.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/nano-basic-ui.png
[4]:https://www.howtoforge.com/images/command-tutorial/nano-file-open.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/nano-file-open.png
[6]:https://www.howtoforge.com/images/command-tutorial/nano-save-changes.png
[7]:https://www.howtoforge.com/images/command-tutorial/big/nano-save-changes.png
[8]:https://www.howtoforge.com/images/command-tutorial/nano-mac-format.png
[9]:https://www.howtoforge.com/images/command-tutorial/big/nano-mac-format.png
[10]:https://www.howtoforge.com/images/command-tutorial/nano-changes-saved.png
[11]:https://www.howtoforge.com/images/command-tutorial/big/nano-changes-saved.png
[12]:https://www.howtoforge.com/images/command-tutorial/nano-search-replace.png
[13]:https://www.howtoforge.com/images/command-tutorial/big/nano-search-replace.png
[14]:https://www.howtoforge.com/linux-nano-command/
[15]:https://www.howtoforge.com/images/command-tutorial/nano-file-open1.png
[16]:https://www.howtoforge.com/images/command-tutorial/big/nano-file-open1.png
[17]:https://www.howtoforge.com/images/command-tutorial/emacs-save.png
[18]:https://www.howtoforge.com/images/command-tutorial/big/emacs-save.png
[19]:https://www.howtoforge.com/images/command-tutorial/emacs-quit-without-saving.png
[20]:https://www.howtoforge.com/images/command-tutorial/big/emacs-quit-without-saving.png
[21]:https://www.howtoforge.com/images/command-tutorial/emacs-search.png
[22]:https://www.howtoforge.com/images/command-tutorial/big/emacs-search.png
[23]:https://www.howtoforge.com/images/command-tutorial/emacs-search-replace.png
[24]:https://www.howtoforge.com/images/command-tutorial/big/emacs-search-replace.png
[25]:https://www.howtoforge.com/images/command-tutorial/emacs-replace-prompt.png
[26]:https://www.howtoforge.com/images/command-tutorial/big/emacs-replace-prompt.png
[27]:https://www.howtoforge.com/images/command-tutorial/emacs-replaced.png
[28]:https://www.howtoforge.com/images/command-tutorial/big/emacs-replaced.png
[29]:https://www.howtoforge.com/images/command-tutorial/emacs-accessing-menus.png
[30]:https://www.howtoforge.com/images/command-tutorial/big/emacs-accessing-menus.png

@ -0,0 +1,176 @@
|
||||
Managing network connections using IFCONFIG & NMCLI commands
|
||||
======
|
||||
Earlier, we discussed how to configure network connections using three different methods: by editing the network interface file, by using the GUI, and by using the nmtui command ([ **READ ARTICLE HERE**][1]). In this tutorial, we are going to use two other methods to configure network connections on our RHEL/CentOS machines.
|
||||
The first utility we will be using is ‘ifconfig’, and it can configure the network on almost any Linux distribution.
|
||||
|
||||
### Using Ifconfig
|
||||
|
||||
#### View current network settings
|
||||
|
||||
To view network settings for all the active network interfaces, run
|
||||
|
||||
```
|
||||
$ ifconfig
|
||||
```
|
||||
|
||||
To view network settings for all interfaces, both active and inactive, run
|
||||
|
||||
```
|
||||
$ ifconfig -a
|
||||
```
|
||||
|
||||
|
||||
Or to view network settings for a particular interface, run
|
||||
|
||||
```
|
||||
$ ifconfig enOs3
|
||||
```
|
||||
|
||||
#### Assigning IP address to an interface
|
||||
|
||||
To assign network information to an interface, i.e. IP address, netmask & broadcast address, the syntax is
|
||||
ifconfig enOs3 IP_ADDRESS netmask SUBNET broadcast BROADCAST_ADDRESS
|
||||
Here, we need to pass the information as per our network configuration. An example would be:
|
||||
|
||||
```
|
||||
$ ifconfig enOs3 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255
|
||||
```
|
||||
|
||||
|
||||
This will assign the IP 192.168.1.100 to our network interface enOs3. We can also modify just the IP, subnet, or broadcast address by running the above command with only that parameter, like:
|
||||
|
||||
```
|
||||
$ ifconfig enOs3 192.168.1.100
|
||||
$ ifconfig enOs3 netmask 255.255.255.0
|
||||
$ ifconfig enOs3 broadcast 192.168.1.255
|
||||
```
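The broadcast address passed to ifconfig above can be derived from the IP address and netmask. As a hedged sketch (the `broadcast` helper function is illustrative, not part of ifconfig), it can be computed like this:

```shell
# Illustrative helper: derive the broadcast address from an IP and netmask
# by OR-ing each IP octet with the inverted netmask octet.
broadcast() {
  IFS=. read -r i1 i2 i3 i4 <<EOF
$1
EOF
  IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
  echo "$((i1 | (255 - m1))).$((i2 | (255 - m2))).$((i3 | (255 - m3))).$((i4 | (255 - m4)))"
}

broadcast 192.168.1.100 255.255.255.0   # prints 192.168.1.255
```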
|
||||
|
||||
|
||||
#### Enabling or disabling a network interface
|
||||
|
||||
To enable a network interface, run
|
||||
|
||||
```
|
||||
$ ifconfig enOs3 up
|
||||
```
|
||||
|
||||
|
||||
To disable a network interface, run
|
||||
|
||||
```
|
||||
$ ifconfig enOs3 down
|
||||
```
|
||||
|
||||
|
||||
( **Recommended read** :- [**Assigning multiple IP addresses to a single NIC**][2])
|
||||
|
||||
**Note:-** When using ifconfig, the gateway address must be set in the /etc/network file, or you can use the following ‘route’ command to add a default gateway:
|
||||
|
||||
```
|
||||
$ route add default gw 192.168.1.1 enOs3
|
||||
```
|
||||
|
||||
|
||||
To add a DNS server, make an entry in /etc/resolv.conf.
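For example, a minimal /etc/resolv.conf might look like this (the nameserver addresses are illustrative — use your own DNS servers):

```
# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```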
|
||||
|
||||
### Using NMCLI
|
||||
|
||||
NetworkManager is the default networking service on RHEL/CentOS 7. It is a very powerful and useful utility for configuring and maintaining network connections, and we can control the NetworkManager daemon with ‘nmcli’.
|
||||
|
||||
The **syntax** for using nmcli is:
|
||||
```
|
||||
$ nmcli [ OPTIONS ] OBJECT { COMMAND | help }
|
||||
```
|
||||
|
||||
#### Viewing current network settings
|
||||
|
||||
To display the status of NetworkManager, run
|
||||
|
||||
```
|
||||
$ nmcli general status
|
||||
```
|
||||
|
||||
|
||||
To display only the active connections, run
|
||||
|
||||
```
|
||||
$ nmcli connection show -a
|
||||
```
|
||||
|
||||
|
||||
To display all active and inactive connections, run
|
||||
|
||||
```
|
||||
$ nmcli connection show
|
||||
```
|
||||
|
||||
|
||||
To display a list of devices recognized by NetworkManager and their current status, run
|
||||
|
||||
```
|
||||
$ nmcli device status
|
||||
```
|
||||
|
||||
|
||||
#### Assigning IP address to an interface
|
||||
|
||||
To assign an IP address and default gateway to a network interface, the command syntax is as follows:
|
||||
|
||||
```
|
||||
$ nmcli connection add type ethernet con-name CONNECTION_name ifname INTERFACE_name ip4 IP_address gw4 GATEWAY_address
|
||||
```
|
||||
|
||||
|
||||
Change the fields as per your network information. An example would be:
|
||||
|
||||
```
|
||||
$ nmcli connection add type ethernet con-name office ifname enOs3 ip4 192.168.1.100 gw4 192.168.1.1
|
||||
```
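A typo in the address silently creates a broken connection profile, so a small wrapper that sanity-checks the IPv4 address before calling nmcli can help. This is an illustrative sketch, not part of nmcli itself (replace the final `echo` with the real nmcli call):

```shell
# Illustrative check: accept only a dotted quad with each octet in 0-255.
valid_ipv4() {
  echo "$1" | awk -F. 'NF == 4 &&
    $0 ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ &&
    $1 <= 255 && $2 <= 255 && $3 <= 255 && $4 <= 255 { ok = 1 }
    END { exit !ok }'
}

if valid_ipv4 192.168.1.100; then
  # In real use, run nmcli here instead of echoing the command.
  echo "nmcli connection add type ethernet con-name office ifname enOs3 ip4 192.168.1.100 gw4 192.168.1.1"
else
  echo "invalid address" >&2
fi
```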
|
||||
|
||||
|
||||
Unlike ifconfig, nmcli can also set up a DNS address. To assign a DNS server to a connection, run
|
||||
|
||||
```
|
||||
$ nmcli connection modify office ipv4.dns "8.8.8.8"
|
||||
```
|
||||
|
||||
|
||||
Lastly, we will bring up the newly added connection,
|
||||
|
||||
```
|
||||
$ nmcli connection up office ifname enOs3
|
||||
```
|
||||
|
||||
|
||||
#### Enabling or disabling a network interface
|
||||
|
||||
To enable an interface using nmcli, run
|
||||
|
||||
```
|
||||
$ nmcli device connect enOs3
|
||||
```
|
||||
|
||||
|
||||
To disable an interface, run
|
||||
|
||||
```
|
||||
$ nmcli device disconnect enOs3
|
||||
```
|
||||
|
||||
|
||||
That’s it. There are many other uses for both of these commands, but the examples mentioned here should get you started. If you have any issues or queries, please mention them in the comment box below.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/managing-network-using-ifconfig-nmcli-commands/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/configuring-ip-address-rhel-centos/
|
||||
[2]:http://linuxtechlab.com/ip-aliasing-multiple-ip-single-nic/
|
@ -1,59 +0,0 @@
|
||||
Which Linux Kernel Version Is ‘Stable’?
|
||||
============================================================
|
||||
|
||||
|
||||
![Linux kernel ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/apple1.jpg?itok=PGRxOQz_ "Linux kernel")
|
||||
Konstantin Ryabitsev explains which Linux kernel versions are considered "stable" and how to choose what's right for you.[Creative Commons Zero][1]
|
||||
|
||||
Almost every time Linus Torvalds releases [a new mainline Linux kernel][4], there's inevitable confusion about which kernel is the "stable" one now. Is it the brand new X.Y one, or the previous X.Y-1.Z one? Is the brand new kernel too new? Should you stick to the previous release?
|
||||
|
||||
The [kernel.org][5] page doesn't really help clear up this confusion. Currently, right at the top of the page, we see that 4.15 is the latest stable kernel -- but then in the table below, 4.14.16 is listed as "stable," and 4.15 as "mainline." Frustrating, eh?
|
||||
|
||||
Unfortunately, there are no easy answers. We use the word "stable" for two different things here: as the name of the Git tree where the release originated, and as an indicator of whether the kernel should be considered “stable” as in “production-ready.”
|
||||
|
||||
Due to the distributed nature of Git, Linux development happens in a number of [various forked repositories][6]. All bug fixes and new features are first collected and prepared by subsystem maintainers and then submitted to Linus Torvalds for inclusion into [his own Linux tree][7], which is considered the “master” Git repository. We call this the “mainline” Linux tree.
|
||||
|
||||
### Release Candidates
|
||||
|
||||
Before each new kernel version is released, it goes through several “release candidate” cycles, which are used by developers to test and polish all the cool new features. Based on the feedback he receives during this cycle, Linus decides whether the final version is ready to go yet or not. Usually, there are 7 weekly pre-releases, but that number routinely goes up to -rc8, and sometimes even up to -rc9 and above. When Linus is convinced that the new kernel is ready to go, he makes the final release, and we call this release “stable” to indicate that it’s not a “release candidate.”
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
Like any complex software written by imperfect human beings, each new version of the Linux kernel contains bugs, and those bugs require fixing. The rule for bug fixes in the Linux kernel is very straightforward: all fixes must first go into Linus’s tree. Once the bug is fixed in the mainline repository, it may then be applied to previously released kernels that are still maintained by the kernel development community. All fixes backported to stable releases must meet a [set of important criteria][8] before they are considered -- and one of them is that they “must already exist in Linus’s tree.” There is a [separate Git repository][9] used for the purpose of maintaining backported bug fixes, and it is called the “stable” tree -- because it is used to track previously released stable kernels. It is maintained and curated by Greg Kroah-Hartman.
|
||||
|
||||
### Latest Stable Kernel
|
||||
|
||||
So, whenever you visit kernel.org looking for the latest stable kernel, you should use the version that is in the Big Yellow Button that says “Latest Stable Kernel.”
|
||||
|
||||
![sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8s](https://lh6.googleusercontent.com/sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8sYqZsmL6e0h8AiyJwqtWYC-MoxWpRWHpdIEpKji0hJ5xxeYshK9QkbTfubFb2TFaMeFNmtJ5ypQNt8lAHC2zniEEe8O4v7MZh)
|
||||
|
||||
Ah, but now you may wonder -- if both 4.15 and 4.14.16 are stable, then which one is more stable? Some people avoid using ".0" releases of kernel because they think a particular version is not stable enough until there is at least a ".1". It's hard to either prove or disprove this, and there are pro and con arguments for both, so it's pretty much up to you to decide which you prefer.
|
||||
|
||||
On the one hand, anything that goes into a stable tree release must first be accepted into the mainline kernel and then backported. This means that mainline kernels will always have fresher bug fixes than what is released in the stable tree, and therefore you should always use mainline “.0” releases if you want the fewest “known bugs.”
|
||||
|
||||
On the other hand, mainline is where all the cool new features are added -- and new features bring with them an unknown quantity of “new bugs” that are not in the older stable releases. Whether new, unknown bugs are more worrisome than older, known, but yet unfixed bugs -- well, that is entirely your call. However, it is worth pointing out that many bug fixes are only thoroughly tested against mainline kernels. When patches are backported into older kernels, chances are they will work just fine, but there are fewer integration tests performed against older stable releases. More often than not, it is assumed that "previous stable" is close enough to current mainline that things will likely "just work." And they usually do, of course, but this yet again shows how hard it is to say "which kernel is actually more stable."
|
||||
|
||||
So, basically, there is no quantitative or qualitative metric we can use to definitively say which kernel is more stable -- 4.15 or 4.14.16. The most we can do is to unhelpfully state that they are "differently stable.”
|
||||
|
||||
_Learn more about Linux through the free ["Introduction to Linux"][3] course from The Linux Foundation and edX._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/2/which-linux-kernel-version-stable
|
||||
|
||||
作者:[KONSTANTIN RYABITSEV ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/mricon
|
||||
[1]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[2]:https://www.linux.com/files/images/apple1jpg
|
||||
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[4]:https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
|
||||
[5]:https://www.kernel.org/
|
||||
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/
|
||||
[7]:https://git.kernel.org/torvalds/c/v4.15
|
||||
[8]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
|
||||
[9]:https://git.kernel.org/stable/linux-stable/c/v4.14.16
|
@ -0,0 +1,39 @@
|
||||
3 Ways to Extend the Power of Kubernetes
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/chen-goldberg-kubecon.png?itok=WR_4i31u)
|
||||
|
||||
The ability to extend Kubernetes is its secret superpower, said Chen Goldberg, Director of Engineering at Google, speaking at the recent [KubeCon + CloudNativeCon][1] in Austin.
|
||||
|
||||
In the race to build tools that help engineers become more productive, Goldberg talked about how she once led a team that developed a platform that did just that. Although the platform initially worked, it was not extensible, and it was also difficult to modify.
|
||||
|
||||
Fortunately, said Goldberg, Kubernetes suffers from neither of these problems. To begin with, Kubernetes is a self-healing system, as it uses controllers that implement what is called a " _Reconciliation Loop._ " In a reconciliation loop, a controller observes the current state of the system and compares it to its desired state. Once it has established the difference between these two states, it works towards achieving the desired state. This makes Kubernetes well-adapted to dynamic environments.
|
||||
|
||||
### 3 Ways to Extend Kubernetes
|
||||
|
||||
Goldberg then explained that to build the controllers, you need resources, that is, you need to extend Kubernetes. There are three ways to do that and, from the most flexible (but also the most difficult) to the easiest, they are: using a Kube aggregator, using an API server builder, or creating a Custom Resource Definition (or CRD).
|
||||
|
||||
The latter allows you to extend Kubernetes' functionality with minimal coding. To demonstrate how it is done, Google Software Engineer Anthony Yeh came on stage and showcased adding a stateful set to Kubernetes. (Stateful set objects are used to manage stateful applications, that is, applications that need to store state, keeping track of, for example, a user's identity and personal settings.) Using _catset_ , a CRD implemented in 100 lines of JavaScript in one single file, Yeh showed how you can add a stateful set to a Kubernetes deployment. A prior extension that was not a CRD required 24 files and over 3,000 lines of code.
|
||||
|
||||
Addressing the issue of reliability of CRDs, Goldberg said Kubernetes had started a certification program that allows companies to register and certify their extensions for the Kubernetes community. Within one month, over 30 companies had signed up for the program.
|
||||
|
||||
Goldberg went on to explain how the extensibility of Kubernetes was a hot topic at this year's KubeCon: Google and IBM are building a platform to manage and secure microservices using CRDs, some developers are bringing machine learning to Kubernetes, and others demonstrated the open service broker and the consumption of services in hybrid settings.
|
||||
|
||||
In conclusion, Goldberg said, extensibility is about empowerment. The extensibility of Kubernetes makes it a general-purpose, easy-to-use platform for developers, allowing them to run any application.
|
||||
|
||||
You can watch the entire video below:
|
||||
|
||||
https://www.youtube.com/embed/1kjgwXP_N7A?enablejsapi=1
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/event/kubecon/2018/2/3-ways-extend-power-kubernetes
|
||||
|
||||
作者:[PAUL BROWN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/bro66
|
||||
[1]:http://events17.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america
|
@ -1,3 +1,5 @@
|
||||
translated by cyleft
|
||||
|
||||
How to print filename with awk on Linux / Unix
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,93 @@
|
||||
translating---geekpi
|
||||
|
||||
|
||||
How to Check if Your Computer Uses UEFI or BIOS
|
||||
======
|
||||
**Brief: A quick tutorial to tell you if your system uses the modern UEFI or the legacy BIOS. Instructions for both Windows and Linux have been provided.**
|
||||
|
||||
When you are trying to [dual boot Linux with Windows][1], you would want to know if you have UEFI or BIOS boot mode on your system. It helps you decide how to partition the disk for installing Linux.
|
||||
|
||||
I am not going to discuss [what is BIOS][2] here. However, I would like to tell you a few advantages of [UEFI][3] over BIOS.
|
||||
|
||||
UEFI, or Unified Extensible Firmware Interface, was designed to overcome some of the limitations of BIOS. It added the ability to use disks larger than 2 TB and has a CPU-independent architecture and drivers. With a modular design, it supports remote diagnostics and repair even with no operating system installed, and a flexible OS-free environment including networking capability.
|
||||
|
||||
### Advantage of UEFI over BIOS
|
||||
|
||||
* UEFI is faster in initializing your hardware.
|
||||
  * It offers Secure Boot, which means everything loaded before the OS has to be signed. This gives your system an added layer of protection against malware.
|
||||
  * BIOS does not support partitions larger than 2 TB.
|
||||
  * Most importantly, if you are dual booting, it’s always advisable to install both operating systems in the same boot mode.
|
||||
|
||||
|
||||
|
||||
![How to check if system has UEFI or BIOS][4]
|
||||
|
||||
If you are trying to find out whether your system runs UEFI or BIOS, it’s not that difficult. Let me start with Windows first and afterward, we’ll see how to check UEFI or BIOS on Linux systems.
|
||||
|
||||
### Check if you are using UEFI or BIOS on Windows
|
||||
|
||||
On Windows, open “System Information” from the Start menu and look under BIOS Mode to find the boot mode. If it says Legacy, your system has BIOS. If it says UEFI, well, it’s UEFI.
|
||||
|
||||
![][5]
|
||||
|
||||
**Alternative** : If you are using Windows 10, you can check whether you are using UEFI or BIOS by opening File Explorer and navigating to C:\Windows\Panther. Open the file setupact.log and search for the string below.
|
||||
```
|
||||
Detected boot environment
|
||||
|
||||
```
|
||||
|
||||
I would advise opening this file in Notepad++, since it’s a huge text file and Notepad may hang (at least it did for me with 6GB RAM).
|
||||
|
||||
You will find a couple of lines which will give you the information.
|
||||
```
|
||||
2017-11-27 09:11:31, Info IBS Callback_BootEnvironmentDetect:FirmwareType 1.
|
||||
2017-11-27 09:11:31, Info IBS Callback_BootEnvironmentDetect: Detected boot environment: BIOS
|
||||
|
||||
```
|
||||
|
||||
### Check if you are using UEFI or BIOS on Linux
|
||||
|
||||
The easiest way to find out if you are running UEFI or BIOS is to look for the folder /sys/firmware/efi. The folder will be missing if your system uses BIOS.
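This check can be put into a short script. A hedged sketch (the variable name is illustrative):

```shell
# Report the firmware type by testing for the efi directory exposed in sysfs.
if [ -d /sys/firmware/efi ]; then
  boot_mode="UEFI"
else
  boot_mode="BIOS"
fi
echo "Boot mode: $boot_mode"
```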
|
||||
|
||||
![Find if system uses UEFI or BIOS on Ubuntu Linux][6]
|
||||
|
||||
**Alternative** : The other method is to install a package called efibootmgr.
|
||||
|
||||
On Debian and Ubuntu based distributions, you can install the efibootmgr package using the command below:
|
||||
```
|
||||
sudo apt install efibootmgr
|
||||
|
||||
```
|
||||
|
||||
Once done, type the below command:
|
||||
```
|
||||
sudo efibootmgr
|
||||
|
||||
```
|
||||
|
||||
If your system supports UEFI, it will output different variables. If not you will see a message saying EFI variables are not supported.
|
||||
|
||||
![][7]
|
||||
|
||||
### Final Words
|
||||
|
||||
Finding whether your system uses UEFI or BIOS is easy. While features like faster boot and Secure Boot give UEFI the upper hand, there is not much to worry about if you are using BIOS – unless you plan to boot from a disk larger than 2 TB.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/check-uefi-or-bios/
|
||||
|
||||
作者:[Ambarish Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/ambarish/
|
||||
[1]:https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
|
||||
[2]:https://www.lifewire.com/bios-basic-input-output-system-2625820
|
||||
[3]:https://www.howtogeek.com/56958/htg-explains-how-uefi-will-replace-the-bios/
|
||||
[4]:https://itsfoss.com/wp-content/uploads/2018/02/uefi-or-bios-800x450.png
|
||||
[5]:https://itsfoss.com/wp-content/uploads/2018/01/BIOS-800x491.png
|
||||
[6]:https://itsfoss.com/wp-content/uploads/2018/02/uefi-bios.png
|
||||
[7]:https://itsfoss.com/wp-content/uploads/2018/01/bootmanager.jpg
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
Python Hello World and String Manipulation
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,98 @@
|
||||
A File Transfer Utility To Download Only The New Parts Of A File
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/02/Linux-1-720x340.png)
|
||||
|
||||
Just because Internet plans are getting cheaper every day doesn’t mean you should waste your data by repeatedly downloading the same stuff. One fine example is downloading development versions of Ubuntu or other Linux images. As you may know, Ubuntu developers release daily builds and alpha and beta ISO images every few months for testing. In the past, I used to download those images whenever they were available to test and review each edition. Not anymore! Thanks to the **Zsync** file transfer program, it is now possible to download only the new parts of the ISO image. This will save you a lot of time and Internet bandwidth. Not just time and bandwidth – it also saves resources on both the server and client side.
|
||||
|
||||
Zsync uses the same algorithm as **Rsync** , but it only downloads the new parts of a file when you already have a copy of an older version on your computer. Rsync is mainly for synchronizing data between computers, whereas Zsync is for distributing data. To put it simply, a single file in a central location can be distributed to thousands of downloaders using Zsync. It is completely free and open source, released under the Artistic License V2.
|
||||
|
||||
### Installing Zsync
|
||||
|
||||
Zsync is available in the default repositories of most Linux distributions.
|
||||
|
||||
On **Arch Linux** and derivatives, install it using command:
|
||||
```
|
||||
$ sudo pacman -S zsync
|
||||
|
||||
```
|
||||
|
||||
On **Fedora** :
|
||||
|
||||
Enable Zsync repository:
|
||||
```
|
||||
$ sudo dnf copr enable ngompa/zsync
|
||||
|
||||
```
|
||||
|
||||
And install it using command:
|
||||
```
|
||||
$ sudo dnf install zsync
|
||||
|
||||
```
|
||||
|
||||
On **Debian, Ubuntu, Linux Mint** :
|
||||
```
|
||||
$ sudo apt-get install zsync
|
||||
|
||||
```
|
||||
|
||||
For other distributions, you can download the source tarball from the [**Zsync download page**][1] and manually compile and install it as shown below.
|
||||
```
|
||||
$ wget http://zsync.moria.org.uk/download/zsync-0.6.2.tar.bz2
|
||||
$ tar xjf zsync-0.6.2.tar.bz2
|
||||
$ cd zsync-0.6.2/
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
Please be mindful that **zsync is only useful if people offer zsync downloads**. Currently, Debian and Ubuntu (all flavours) ISO images are available as .zsync downloads. For example, visit the following link.
|
||||
|
||||
As you may have noticed, the Ubuntu 18.04 LTS daily build is available both as a direct ISO and as a .zsync file. If you download the .iso file, you have to download the full ISO whenever it gets updated. But if you download the .zsync file, Zsync will only download the new changes in the future. You don’t need to download the whole ISO image each time.
|
||||
|
||||
A .zsync file contains the metadata needed by the zsync program. It contains the pre-calculated checksums for the rsync algorithm; it is generated on the server once and is then used by any number of downloaders. To download a file using the Zsync client program, all you have to do is run:
|
||||
```
|
||||
$ zsync <.zsync-file-URL>
|
||||
|
||||
```
|
||||
|
||||
Example:
|
||||
```
|
||||
$ zsync http://cdimage.ubuntu.com/ubuntu/daily-live/current/bionic-desktop-amd64.iso.zsync
|
||||
|
||||
```
|
||||
|
||||
If you already have an old version of the image on your system, Zsync will calculate the difference between the old file and the new file on the remote server and download only the new parts. You will see the calculation process as a series of dots or stars in your Terminal.
|
||||
|
||||
If an old version of the file you’re downloading is available in the current working directory, Zsync will download only the new parts. Once the download is finished, you will have two images: the one you just downloaded, and the old image with a **.iso.zs-old** extension added to its filename.
|
||||
|
||||
If there is no relevant local data found, Zsync will download the whole file.
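Once you have verified the freshly downloaded image (for example by checking its SHA256 sum), the leftover **.iso.zs-old** copies can be cleaned up. A hedged sketch with illustrative file names:

```shell
# Create a throwaway directory mimicking a finished zsync run,
# then delete the leftover .zs-old copy while keeping the new image.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch bionic-desktop-amd64.iso bionic-desktop-amd64.iso.zs-old

find . -name '*.zs-old' -delete
ls
```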
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/02/Zsync-1.png)
|
||||
|
||||
You can cancel the download process at any time by pressing **CTRL-C**.
|
||||
|
||||
Just imagine: if you use the direct .iso file or a torrent, you will spend around 1.4 GB of bandwidth whenever you download a new image. Instead of downloading the entire alpha, beta, or daily build image, Zsync downloads only the parts of the ISO file that differ from the older copy already on your system.
|
||||
|
||||
And, that’s all for today. Hope this helps. I will be soon here with another useful guide. Until then stay tuned with OSTechNix!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/zsync-file-transfer-utility-download-new-parts-file/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:http://zsync.moria.org.uk/downloads
|
@ -0,0 +1,217 @@
|
||||
translating by wenwensnow
|
||||
Getting Started with the openbox windows manager in Fedora
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/10/openbox.png-945x400.jpg)
|
||||
|
||||
Openbox is [a lightweight, next generation window manager][1] for users who want a minimal environment for their [Fedora][2] desktop. It’s well known for its minimalistic appearance, low resource usage and the ability to run applications the way they were designed to work. Openbox is highly configurable. It allows you to change almost every aspect of how you interact with your desktop. This article covers a basic setup of Openbox on Fedora.
|
||||
|
||||
### Installing Openbox in Fedora
|
||||
|
||||
This tutorial assumes you’re already working in a traditional desktop environment like [GNOME][3] or [Plasma][4] over the [Wayland][5] compositor. First, open a terminal and run the following command [using sudo][6].
|
||||
```
|
||||
sudo dnf install openbox xbacklight feh conky xorg-x11-drv-libinput tint2 volumeicon xorg-x11-server-utils network-manager-applet
|
||||
|
||||
```
|
||||
|
||||
Curious about the packages this command installs? Here is the package-by-package breakdown.
|
||||
|
||||
* **openbox** is the main window manager package
|
||||
* **xbacklight** is a utility to set laptop screen brightness
|
||||
* **feh** is a utility to set a wallpaper for the desktop
|
||||
* **conky** is a utility to display system information
|
||||
* **tint2** is a system panel/taskbar
|
||||
* **xorg-x11-drv-libinput** is a driver that lets the system activate clicks on tap in a laptop touchpad
|
||||
* **volumeicon** is a volume control for the system tray
|
||||
* **xorg-x11-server-utils** provides the xinput tool
|
||||
* **network-manager-applet** provides the nm-applet tool for the system tray
|
||||
|
||||
|
||||
|
||||
Once you install these packages, restart your computer. After the system restarts, choose your user name to login. Before you enter your password, click the gear icon to select the Openbox session. Then enter your password to start Openbox.
|
||||
|
||||
If you ever want to switch back, simply use this gear icon to return to the selection for your desired desktop session.
|
||||
|
||||
### Using Openbox
|
||||
|
||||
The first time you login to your Openbox session, a mouse pointer appears over a black desktop. Don’t worry, this is the default look and feel of the desktop. First, right click your mouse to access a handy menu to launch your apps. You can use the shortcut **Ctrl + Alt + LeftArrow / RightArrow** to switch between four virtual screens.
|
||||
|
||||
![][7]
|
||||
|
||||
If your laptop has a touchpad, you may want to configure tap to click for an improved experience. Fedora features libinput to handle input from the touchpad. First, get a list of input devices in your computer:
|
||||
```
|
||||
$ xinput list
|
||||
⎡ Virtual core pointer id=2 [master pointer (3)]
|
||||
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
|
||||
⎜ ↳ ETPS/2 Elantech Touchpad id=11 [slave pointer (2)]
|
||||
⎣ Virtual core keyboard id=3 [master keyboard (2)]
|
||||
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
|
||||
↳ Power Button id=6 [slave keyboard (3)]
|
||||
↳ Video Bus id=7 [slave keyboard (3)]
|
||||
↳ Power Button id=8 [slave keyboard (3)]
|
||||
↳ WebCam SC-13HDL11939N: WebCam S id=9 [slave keyboard (3)]
|
||||
↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)]
|
||||
|
||||
```
|
||||
|
||||
In the example laptop, the touchpad is the device with ID 11. With this info you can list your trackpad properties:
|
||||
```
|
||||
$ xinput list-props 11
|
||||
Device 'ETPS/2 Elantech Touchpad':
|
||||
Device Enabled (141): 1
|
||||
Coordinate Transformation Matrix (143): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
|
||||
libinput Tapping Enabled (278): 0
|
||||
libinput Tapping Enabled Default (279): 0
|
||||
        ...
|
||||
|
||||
```
|
||||
|
||||
In this example, the touchpad has the Tapping Enabled property set to false (0).
|
||||
|
||||
Now you know your trackpad device ID (11) and the property to configure (278). This means you can enable tapping with the command:
|
||||
```
|
||||
xinput set-prop <device> <property> <value>
|
||||
|
||||
```
|
||||
|
||||
For the example above:
|
||||
```
|
||||
xinput set-prop 11 278 1
|
||||
|
||||
```
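Rather than reading the id off the listing by hand, it can be extracted with awk. A hedged sketch — it assumes the device name contains the word "Touchpad", as in the listing above; in real use the sample line would be replaced by the output of `xinput list`:

```shell
# Sample line in the format printed by `xinput list` (see above).
sample='⎜   ↳ ETPS/2 Elantech Touchpad        id=11   [slave  pointer  (2)]'

# Pull the number following "id=" from any line mentioning "Touchpad".
touchpad_id=$(printf '%s\n' "$sample" \
  | awk '/Touchpad/ { for (i = 1; i <= NF; i++)
                        if ($i ~ /^id=/) { sub(/^id=/, "", $i); print $i } }')
echo "$touchpad_id"
```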
|
||||
|
||||
You should now be able to click with a tap on your touchpad. Next, configure this option to apply at Openbox session start. First, create the config file with an editor:
|
||||
```
|
||||
vi ~/.config/openbox/autostart
|
||||
|
||||
```
|
||||
|
||||
This example uses the vi text editor, but you can use any editor you want, like gedit or kwrite. In this file add the following lines:
|
||||
```
|
||||
# Set tapping on touchpad on:
|
||||
xinput set-prop 11 278 1 &
|
||||
|
||||
```
|
||||
|
||||
Save the file, logout of the current session, and login again to verify your touchpad works.
|
||||
|
||||
### Configuring the session
|
||||
|
||||
Here are some examples of how you can configure your Openbox session to your preferences. To use feh to set the desktop wallpaper at startup, just add these lines to your ~/.config/openbox/autostart file:
|
||||
```
|
||||
# Set desktop wallpaper:
|
||||
feh --bg-scale ~/path/to/wallpaper.png &
|
||||
|
||||
```
|
||||
|
||||
To use tint2 to show a task bar in the desktop, add these lines to the autostart file:
|
||||
```
|
||||
# Show system tray
|
||||
tint2 &
|
||||
|
||||
```
|
||||
|
||||
Add these lines to the autostart file to start conky when you login:
|
||||
```
|
||||
# Show system info
|
||||
conky &
|
||||
|
||||
```
|
||||
|
||||
Now you can add your own services to your Openbox session. Just add entries to your autostart file. For instance, add the NetworkManager applet and volume control with these lines:
|
||||
```
|
||||
#NetworkManager
|
||||
nm-applet &
|
||||
|
||||
#Volume control in system tray
|
||||
volumeicon &
|
||||
|
||||
```
|
||||
|
||||
The conky configuration file used in this post is available [here][8]. You can copy and paste the configuration into a file called .conkyrc in your home directory.
|
||||
|
||||
The conky utility is a highly configurable way to show system information. You can set up a preferred profile of settings in a ~/.conkyrc file. Here’s [an example conkyrc file][9]. You can find many more on the web.
|
||||
|
||||
You are now able to customize your Openbox installation in exciting ways. Here’s a screenshot of the author’s Openbox desktop:
|
||||
|
||||
![][10]
|
||||
|
||||
### Configuring tint2
|
||||
|
||||
You can also configure the look and feel of the panel with tint2. The configuration file is available in ~/.config/tint2/tint2rc. Use your favorite editor to open this file:
|
||||
```
|
||||
vi ~/.config/tint2/tint2rc
|
||||
|
||||
```
|
||||
|
||||
Look for these lines first:
|
||||
```
|
||||
#-------------------------------------
|
||||
#Panel
|
||||
panel_items = LTSCB
|
||||
|
||||
```
|
||||
|
||||
These are the elements that will be included in the bar, where:
|
||||
|
||||
* **L** = Launchers
|
||||
* **T** = Task bar
|
||||
* **S** = Systray
|
||||
* **C** = Clock
|
||||
* **B** = Battery
|
||||
|
||||
|
||||
|
||||
Then look for these lines to configure the launchers in the taskbar:
|
||||
```
|
||||
#-------------------------------------
|
||||
#Launcher
|
||||
launcher_padding = 2 4 2
|
||||
launcher_background_id = 0
|
||||
launcher_icon_background_id = 0
|
||||
launcher_icon_size = 24
|
||||
launcher_icon_asb = 100 0 0
|
||||
launcher_icon_theme_override = 0
|
||||
startup_notifications = 1
|
||||
launcher_tooltip = 1
|
||||
launcher_item_app = /usr/share/applications/tint2conf.desktop
|
||||
launcher_item_app = /usr/local/share/applications/tint2conf.desktop
|
||||
launcher_item_app = /usr/share/applications/firefox.desktop
|
||||
launcher_item_app = /usr/share/applications/iceweasel.desktop
|
||||
launcher_item_app = /usr/share/applications/chromium-browser.desktop
|
||||
launcher_item_app = /usr/share/applications/google-chrome.desktop
|
||||
|
||||
```
|
||||
|
||||
Here you can add your favorite shortcuts as launcher_item_app elements. This item accepts .desktop files, not executables. You can get a list of your system-wide desktop files with this command:
|
||||
```
|
||||
ls /usr/share/applications/
|
||||
|
||||
```
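
If you want to generate candidate `launcher_item_app` lines from those files, here is a small sketch (the directory is the standard system-wide location mentioned above; the resulting list will of course differ per machine and may be empty):

```python
# A sketch: build launcher_item_app lines for tint2 from the system-wide
# .desktop files under /usr/share/applications/.
import glob

desktop_files = sorted(glob.glob("/usr/share/applications/*.desktop"))
launcher_lines = [f"launcher_item_app = {path}" for path in desktop_files]

for line in launcher_lines[:5]:  # show the first few
    print(line)
```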
|
||||
|
||||
As an exercise for the reader, see if you can find and install a theme for either the Openbox [window manager][11] or [tint2][12]. Enjoy getting started with Openbox as a Fedora desktop.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/openbox-fedora/
|
||||
|
||||
作者:[William Moreno][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://williamjmorenor.id.fedoraproject.org/
|
||||
[1]:http://openbox.org/wiki/Main_Page
|
||||
[2]:https://getfedora.org/
|
||||
[3]:https://getfedora.org/es/workstation/
|
||||
[4]:https://spins.fedoraproject.org/kde/
|
||||
[5]:https://wayland.freedesktop.org/
|
||||
[6]:https://fedoramagazine.org/howto-use-sudo/
|
||||
[7]:https://fedoramagazine.org/wp-content/uploads/2017/10/openbox-01-300x169.png
|
||||
[8]:https://gist.github.com/williamjmorenor/96399defad35e24a8f1843e2c256b4a4
|
||||
[9]:https://github.com/zenzire/conkyrc/blob/master/conkyrc
|
||||
[10]:https://fedoramagazine.org/wp-content/uploads/2017/10/openbox-02-300x169.png
|
||||
[11]:https://www.deviantart.com/customization/skins/linuxutil/winmanagers/openbox/whats-hot/?order=9&offset=0
|
||||
[12]:https://github.com/addy-dclxvi/Tint2-Theme-Collections
|
@ -0,0 +1,130 @@
|
||||
How to Create, Revert and Delete KVM Virtual machine snapshot with virsh command
|
||||
======
|
||||
[![KVM-VirtualMachine-Snapshot][1]![KVM-VirtualMachine-Snapshot][2]][2]
|
||||
|
||||
When working on a virtualization platform, system administrators usually take a snapshot of a virtual machine before any major activity, such as deploying the latest patches or code.
|
||||
|
||||
A virtual machine **snapshot** is a copy of the virtual machine’s disk at a specific point in time. In other words, a snapshot preserves the state and data of a virtual machine at a given point in time.
|
||||
|
||||
### Where can we use VM snapshots?
|
||||
|
||||
If you are working on **KVM**-based **hypervisors**, you can take snapshots of virtual machines (domains) using the virsh command. Snapshots become very helpful when you have installed or applied the latest patches on a VM, but for some reason the application hosted in the VM becomes unstable and the application team wants to revert all the changes or patches. If you took a snapshot of the VM before applying the patches, you can restore or revert the VM to its previous state using that snapshot.
|
||||
|
||||
**Note:** We can only take snapshots of VMs whose disk format is **qcow2**; the raw disk format is not supported by the virsh command. Use the command below to convert a raw disk image to qcow2:
|
||||
```
|
||||
# qemu-img convert -f raw -O qcow2 image-name.img image-name.qcow2
|
||||
|
||||
```
|
||||
|
||||
### Create KVM Virtual Machine (domain) Snapshot
|
||||
|
||||
I am assuming the KVM hypervisor is already configured on a CentOS 7 / RHEL 7 box and VMs are running on it. We can list all the VMs on the hypervisor using the virsh command below:
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh list --all
|
||||
Id Name State
|
||||
----------------------------------------------------
|
||||
94 centos7.0 running
|
||||
101 overcloud-controller running
|
||||
102 overcloud-compute2 running
|
||||
103 overcloud-compute1 running
|
||||
114 webserver running
|
||||
115 Test-MTN running
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
Let’s suppose we want to create a snapshot of the ‘**webserver**’ VM. Run the command below.
|
||||
|
||||
**Syntax :**
|
||||
|
||||
```
|
||||
# virsh snapshot-create-as --domain {vm_name} --name {snapshot_name} --description "enter description here"
|
||||
```
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-create-as --domain webserver --name webserver_snap --description "snap before patch on 4Feb2018"
|
||||
Domain snapshot webserver_snap created
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
Once the snapshot is created, we can list the snapshots related to the VM using the command below:
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-list webserver
|
||||
Name Creation Time State
|
||||
------------------------------------------------------------
|
||||
webserver_snap 2018-02-04 15:05:05 +0530 running
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
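
For scripting, the snapshot names can be parsed out of that listing. A sketch, with the sample output inlined from above so it runs without a hypervisor (in practice you would capture `virsh snapshot-list webserver` with `subprocess`):

```python
# A sketch: parse `virsh snapshot-list` output into a list of snapshot names.
virsh_output = """\
 Name                 Creation Time             State
------------------------------------------------------------
 webserver_snap       2018-02-04 15:05:05 +0530 running
"""

# Skip the two header lines, then take the first column of each row.
snapshots = [line.split()[0]
             for line in virsh_output.splitlines()[2:]
             if line.strip()]
print(snapshots)  # → ['webserver_snap']
```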
|
||||
|
||||
To list detailed info about a VM’s snapshot, run the virsh command below:
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-info --domain webserver --snapshotname webserver_snap
|
||||
Name: webserver_snap
|
||||
Domain: webserver
|
||||
Current: yes
|
||||
State: running
|
||||
Location: internal
|
||||
Parent: -
|
||||
Children: 0
|
||||
Descendants: 0
|
||||
Metadata: yes
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
We can view the size of the snapshot using the qemu-img command below:
|
||||
```
|
||||
[root@kvm-hypervisor ~]# qemu-img info /var/lib/libvirt/images/snaptestvm.img
|
||||
|
||||
```
|
||||
|
||||
[![qemu-img-command-output-kvm][1]![qemu-img-command-output-kvm][3]][3]
|
||||
|
||||
### Revert / Restore KVM virtual Machine to Snapshot
|
||||
|
||||
Let’s assume we want to revert or restore the webserver VM to the snapshot we created in the step above. Use the virsh command below to restore the webserver VM to its snapshot “**webserver_snap**”.
|
||||
|
||||
**Syntax :**
|
||||
|
||||
```
|
||||
# virsh snapshot-revert {vm_name} {snapshot_name}
|
||||
```
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-revert webserver webserver_snap
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
### Delete KVM virtual Machine Snapshots
|
||||
|
||||
To delete KVM virtual machine snapshots, first get the VM’s snapshot details using the “**virsh snapshot-list**” command, and then delete the snapshot with the “**virsh snapshot-delete**” command. An example is shown below:
|
||||
```
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-list --domain webserver
|
||||
Name Creation Time State
|
||||
------------------------------------------------------------
|
||||
webserver_snap 2018-02-04 15:05:05 +0530 running
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
[root@kvm-hypervisor ~]# virsh snapshot-delete --domain webserver --snapshotname webserver_snap
|
||||
Domain snapshot webserver_snap deleted
|
||||
[root@kvm-hypervisor ~]#
|
||||
|
||||
```
|
||||
|
||||
That’s all for this article. I hope you now have an idea of how to manage KVM virtual machine snapshots using the virsh command. Please share your feedback, and don’t hesitate to share this article with your technical friends 🙂
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/create-revert-delete-kvm-virtual-machine-snapshot-virsh-command/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linuxtechi.com/author/pradeep/
|
||||
[1]:https://www.linuxtechi.com/wp-content/plugins/lazy-load/images/1x1.trans.gif
|
||||
[2]:https://www.linuxtechi.com/wp-content/uploads/2018/02/KVM-VirtualMachine-Snapshot.jpg
|
||||
[3]:https://www.linuxtechi.com/wp-content/uploads/2018/02/qemu-img-command-output-kvm.jpg
|
@ -0,0 +1,51 @@
|
||||
Translating by Torival

Linear Regression Classifier from scratch using Numpy and Stochastic gradient descent as an optimization technique
|
||||
======
|
||||
|
||||
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/cKkX2ryQteXTdZYSR6t7)
|
||||
|
||||
In statistics, linear regression is a linear approach for modelling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.
|
||||
|
||||
As you may know, the equation of a line with slope **m** and intercept **c** is given by **y=mx+c**. Now, in our dataset, **x** is a feature and **y** is the label, i.e. the output.
|
||||
|
||||
We will start with some random values of m and c, and using our classifier we will adjust them until we obtain the line of best fit.
|
||||
|
||||
Suppose we have a dataset with a single feature given by **X=[1,2,3,4,5,6,7,8,9,10]** and the label/output being **Y=[1,4,9,16,25,36,49,64,81,100]**. We start with random values: **m=1** and **c=0**. Now, starting with the first data point, **x=1**, we calculate its corresponding output as **y=m*x+c** → **y=1*1+0** → **y=1**.
|
||||
|
||||
This is our guess for the given input. Next we subtract the actual output, **y(original)=1**, from the calculated y (our guess) to get the error, **y(guess)-y(original)**. The square of its mean is our cost function, and our aim is to minimize this cost.
|
||||
|
||||
After each iteration through the data points, we change our values of **m** and **c** so that the resulting m and c give the line of best fit. Now, how can we do this?
|
||||
|
||||
The answer is using **Gradient Descent Technique**.
|
||||
|
||||
![Gd_demystified.png][1]
|
||||
|
||||
In gradient descent we aim to minimize the cost function, and to do that we need to minimize the error, which is given by **error=y(guess)-y(original)**.
|
||||
|
||||
|
||||
The error depends on two values, **m** and **c**. If we take the partial derivative of the error with respect to **m** and **c**, we learn the orientation, i.e. whether we need to increase or decrease the values of m and c to obtain the line of best fit.
|
||||
|
||||
Taking the partial derivative of the error with respect to **m** gives **x**, and taking the partial derivative with respect to **c** gives a constant.
|
||||
|
||||
So if we apply the two updates **m=m-error*x** and **c=c-error*1** after every iteration, we can adjust the values of m and c to obtain the line of best fit.
|
||||
|
||||
The error can be negative as well as positive. When the error is negative, our **m** and **c** are smaller than the actual **m** and **c**, so we need to increase their values; when the error is positive, we need to decrease them, which is exactly what these updates do.
|
||||
|
||||
But wait: we also need a constant called the learning_rate so that we don’t increase or decrease the values of **m** and **c** too steeply. So we multiply, using **m=m-error * x * learning_rate** and **c=c-error * 1 * learning_rate**, to keep the process smooth.
|
||||
|
||||
So we update **m** to **m=m-error * x * learning_rate** and **c** to **c=c-error * 1 * learning_rate** to obtain the line of best fit. This is our linear regression model using stochastic gradient descent, “stochastic” meaning that we update the values of m and c on every iteration, i.e. per data point.
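
The update rule above can be sketched in a few lines of plain Python. Note one assumption: instead of the squares from the example (which a straight line cannot fit), a truly linear target y=2x+1 is used here so the loop actually converges:

```python
# Stochastic gradient descent for y = m*x + c, following the updates above:
#   m = m - error * x * learning_rate
#   c = c - error * 1 * learning_rate
X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Y = [2 * x + 1 for x in X]   # assumed linear target, not the squares above
m, c = 1.0, 0.0              # starting guesses
learning_rate = 0.01

for _ in range(2000):                     # epochs
    for x, y in zip(X, Y):
        error = (m * x + c) - y           # y(guess) - y(original)
        m -= error * x * learning_rate
        c -= error * 1 * learning_rate

print(round(m, 2), round(c, 2))  # → 2.0 1.0
```

Because the data is exactly linear, the error shrinks toward zero and m and c settle on the true slope and intercept.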
|
||||
|
||||
You can check the full code in Python: [https://github.com/assassinsurvivor/MachineLearning/blob/master/Regression.py][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.codementor.io/prakharthapak/linear-regression-classifier-from-scratch-using-numpy-and-stochastic-gradient-descent-as-an-optimization-technique-gf5gm9yti
|
||||
|
||||
作者:[Prakhar Thapak][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.codementor.io/prakharthapak
|
||||
[1]:https://process.filestackapi.com/cache=expiry:max/5TXRH28rSo27kTNZLgdN
|
||||
[2]:https://www.codementor.io/prakharthapak/here
|
@ -0,0 +1,191 @@
|
||||
Linux md5sum Command Explained For Beginners (5 Examples)
|
||||
======
|
||||
|
||||
When downloading files, particularly installation files from websites, it is a good idea to verify that the download is valid. A website will often display a hash value for each file so that you can make sure the download completed correctly. In this article, we will be discussing the md5sum tool that you can use to validate the download. Two other utilities, sha256sum and sha512sum, work the same way as md5sum.
|
||||
|
||||
### Linux md5sum command
|
||||
|
||||
The md5sum command prints a 32-character (128-bit) checksum of the given file, using the MD5 algorithm. Following is the command syntax of this command line tool:
|
||||
|
||||
```
|
||||
md5sum [OPTION]... [FILE]...
|
||||
```
|
||||
|
||||
And here's how md5sum's man page explains it:
|
||||
```
|
||||
Print or check MD5 (128-bit) checksums.
|
||||
|
||||
```
|
||||
|
||||
The following Q&A-styled examples will give you an even better idea of the basic usage of md5sum.
|
||||
|
||||
Note: We'll be using three files named file1.txt, file2.txt, and file3.txt as the input files in our examples. The text in each file is listed below.
|
||||
|
||||
file1.txt:
|
||||
```
|
||||
hi
|
||||
hello
|
||||
how are you
|
||||
thanks.
|
||||
|
||||
```
|
||||
|
||||
file2.txt:
|
||||
```
|
||||
hi
|
||||
hello to you
|
||||
I am fine
|
||||
Your welcome!
|
||||
|
||||
```
|
||||
|
||||
file3.txt:
|
||||
```
|
||||
hallo
|
||||
Guten Tag
|
||||
Wie geht es dir
|
||||
Danke.
|
||||
|
||||
```
|
||||
|
||||
### Q1. How to display the hash value?
|
||||
|
||||
Use the command without any options to display the hash value and the filename.
|
||||
|
||||
```
|
||||
md5sum file1.txt
|
||||
```
|
||||
|
||||
Here's the output this command produced on our system:
|
||||
```
|
||||
[Documents]$ md5sum file1.txt
|
||||
1ff38cc592c4c5d0c8e3ca38be8f1eb1 file1.txt
|
||||
[Documents]$
|
||||
|
||||
```
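
The same 128-bit digest can be computed programmatically; here is a sketch using Python's hashlib. The bytes below are assumed to be the exact contents of file1.txt as listed above — the digest depends on the exact bytes, including the trailing newline:

```python
# A sketch: compute an MD5 hex digest like md5sum does, using hashlib.
import hashlib

data = b"hi\nhello\nhow are you\nthanks.\n"  # assumed exact bytes of file1.txt
digest = hashlib.md5(data).hexdigest()
print(digest)  # a 32-character hex string
```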
|
||||
|
||||
The output can also be displayed in a BSD-style format using the --tag option.
|
||||
|
||||
`md5sum --tag file1.txt`
|
||||
```
|
||||
[Documents]$ md5sum --tag file1.txt
|
||||
MD5 (file1.txt) = 1ff38cc592c4c5d0c8e3ca38be8f1eb1
|
||||
[Documents]$
|
||||
|
||||
```
|
||||
### Q2. How to validate multiple files?
|
||||
|
||||
The md5sum command can validate multiple files at one time. We will add file2.txt and file3.txt to demonstrate the capabilities.
|
||||
|
||||
If you write the hashes to a file, you can use that file to check whether any of the files have changed. Here we are writing the hashes of the files to the file hashes, and then using that to validate that none of the files have changed.
|
||||
|
||||
```
|
||||
md5sum file1.txt file2.txt file3.txt > hashes
|
||||
md5sum --check hashes
|
||||
```
|
||||
|
||||
```
|
||||
[Documents]$ md5sum file1.txt file2.txt file3.txt > hashes
|
||||
[Documents]$ md5sum --check hashes
|
||||
file1.txt: OK
|
||||
file2.txt: OK
|
||||
file3.txt: OK
|
||||
[Documents]$
|
||||
|
||||
```
|
||||
|
||||
Now we will change file3.txt, adding a single exclamation mark to the end of the file, and rerun the command.
|
||||
|
||||
```
|
||||
echo "!" >> file3.txt
|
||||
md5sum --check hashes
|
||||
```
|
||||
|
||||
```
|
||||
[Documents]$ md5sum --check hashes
|
||||
file1.txt: OK
|
||||
file2.txt: OK
|
||||
file3.txt: FAILED
|
||||
md5sum: WARNING: 1 computed checksum did NOT match
|
||||
[Documents]$
|
||||
|
||||
```
|
||||
|
||||
You can see that file3.txt has changed.
|
||||
|
||||
### Q3. How to display only modified files?
|
||||
|
||||
If you have many files to check, you may want to display only the files that have changed. Using the "--quiet" option, md5sum will only list the files that have changed.
|
||||
|
||||
```
|
||||
md5sum --quiet --check hashes
|
||||
```
|
||||
|
||||
```
|
||||
[Documents]$ md5sum --quiet --check hashes
|
||||
file3.txt: FAILED
|
||||
md5sum: WARNING: 1 computed checksum did NOT match
|
||||
[Documents]$
|
||||
|
||||
```
|
||||
|
||||
### Q4. How to detect changes in a script?
|
||||
|
||||
You may want to use md5sum in a script. Using the "--status" option, md5sum won't print any output; instead, its exit status is 0 if there are no changes and 1 if the files don't match. The following script, hashes.sh, will return an exit status of 1, because the files have changed. The script file and its output are below:
|
||||
|
||||
```
|
||||
sh hashes.sh
|
||||
```
|
||||
|
||||
```
|
||||
hashes.sh:
|
||||
#!/bin/bash
|
||||
md5sum --status --check hashes
|
||||
Result=$?
|
||||
echo "File check status is: $Result"
|
||||
exit $Result
|
||||
|
||||
[Documents]$ sh hashes.sh
|
||||
File check status is: 1
|
||||
[Documents]$
|
||||
|
||||
```
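
The same store-then-recheck flow can be sketched in pure Python, which is handy inside a larger program where shelling out to md5sum is awkward (the file names and contents here are illustrative):

```python
# A sketch: record hashes, change one file, and report which entries no
# longer match -- the equivalent of `md5sum --check` exiting with status 1.
import hashlib

def md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

files = {"file1.txt": b"hi\n", "file3.txt": b"hallo\n"}
hashes = {name: md5(data) for name, data in files.items()}

files["file3.txt"] += b"!\n"  # simulate appending an exclamation mark
changed = [name for name, data in files.items() if md5(data) != hashes[name]]
exit_status = 1 if changed else 0

print(changed, exit_status)  # → ['file3.txt'] 1
```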
|
||||
|
||||
### Q5. How to identify invalid hash values?
|
||||
|
||||
md5sum can let you know if you have invalid hashes when you compare files. To warn you if any hash values are incorrect, you can use the --warn option. For this last example, we will use sed to insert an extra character at the beginning of the third line. This will change the hash value in the file hashes, making it invalid.
|
||||
|
||||
```
|
||||
sed -i '3s/.*/a&/' hashes
|
||||
md5sum --warn --check hashes
|
||||
```
|
||||
|
||||
This shows that the third line has an invalid hash.
|
||||
```
|
||||
[Documents]$ sed -i '3s/.*/a&/' hashes
|
||||
[Documents]$ md5sum --warn --check hashes
|
||||
file1.txt: OK
|
||||
file2.txt: OK
|
||||
md5sum: hashes: 3: improperly formatted MD5 checksum line
|
||||
md5sum: WARNING: 1 line is improperly formatted
|
||||
[Documents]$
|
||||
|
||||
```
|
||||
|
||||
### Conclusion
|
||||
|
||||
md5sum is a simple command that can quickly validate one or multiple files to determine whether any of them have changed from the original. For more information on md5sum, see its [man page][1].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-md5sum-command/
|
||||
|
||||
作者:[David Paige][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com/
|
||||
[1]:https://linux.die.net/man/1/md5sum
|
262
sources/tech/20180205 Locust.io- Load-testing using vagrant.md
Normal file
@ -0,0 +1,262 @@
|
||||
Locust.io: Load-testing using vagrant
|
||||
======
|
||||
|
||||
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/Rm2HlpyYQc6ma5BnUGRO)
|
||||
|
||||
What could possibly go wrong when you release an application to the public domain without testing? You could either wait to find out or you can just find out before releasing the product.
|
||||
|
||||
In this tutorial, we will be considering the art of load-testing, one of the several types of [non-functional test][1] required for a system.
|
||||
|
||||
According to Wikipedia:
|
||||
|
||||
> [Load testing][2] is the process of putting demand on a software system or computing device and measuring its response. Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation.
|
||||
|
||||
### What the heck is locust.io?
|
||||
[Locust][3] is an open source load-testing tool that can be used to simulate millions of simultaneous users. It has other cool features that allow you to visualize the data generated from the test, and it has been proven and battle-tested ![😃][4]
|
||||
|
||||
### Why Vagrant?
|
||||
Because [vagrant][5] allows us to build and maintain a near-replica production environment with the right parameters for memory, CPU, storage, and disk I/O.
|
||||
|
||||
### Why VirtualBox?
|
||||
VirtualBox here will act as our hypervisor, the computer software that will create and run our virtual machine(s).
|
||||
|
||||
### So, what is the plan here?
|
||||
|
||||
* Download [Vagrant][6] and [VirtualBox][7]
|
||||
  * Set up a near-production replica environment using **vagrant** and **virtualbox** [SOURCE_CODE_APPLICATION][8]
|
||||
* Set up locust to run our load test [SOURCE_CODE_LOCUST][9]
|
||||
* Execute test against our production replica environment and check performance
|
||||
|
||||
|
||||
|
||||
### Some context
|
||||
Vagrant uses "Provisioners" and "Providers" as building blocks to manage the development environments.
|
||||
|
||||
> Provisioners are tools that allow users to customize the configuration of virtual environments. Puppet and Chef are the two most widely used provisioners in the Vagrant ecosystem.
|
||||
> Providers are the services that Vagrant uses to set up and create virtual environments.
|
||||
|
||||
Reference can be found [here][10]
|
||||
|
||||
That said for our vagrant configuration we will be making use of the Vagrant Shell provisioner and VirtualBox for our provider, just a simple setup for now ![😉][11]
|
||||
|
||||
One more thing: the machine and software requirements are written in a file called "Vagrantfile", which executes the steps necessary to create a development-ready box. So let's get down to business.
|
||||
|
||||
### A near production environment using Vagrant and Virtualbox
|
||||
I used a past project of mine, a very minimal Python/Django application I called Bookshelf, to create a near-production environment. Here is the link to the [repository][8].
|
||||
|
||||
Let's create our environment using a Vagrantfile.
|
||||
Use the command `vagrant init --minimal hashicorp/precise64` to create a vagrant file, where `hashicorp` is the username and `precise64` is the box name.
|
||||
|
||||
More about getting started with vagrant can be found [here][12]
|
||||
```
|
||||
# vagrant file
|
||||
|
||||
# set our environment to use our host private and public key to access the VM
|
||||
# as vagrant project provides an insecure key pair for SSH Public Key # Authentication so that vagrant ssh works
|
||||
# https://stackoverflow.com/questions/14715678/vagrant-insecure-by-default
|
||||
|
||||
private_key_path = File.join(Dir.home, ".ssh", "id_rsa")
|
||||
public_key_path = File.join(Dir.home, ".ssh", "id_rsa.pub")
|
||||
insecure_key_path = File.join(Dir.home, ".vagrant.d", "insecure_private_key")
|
||||
|
||||
private_key = IO.read(private_key_path)
|
||||
public_key = IO.read(public_key_path)
|
||||
|
||||
# Set the environment details here
|
||||
Vagrant.configure("2") do |config|
|
||||
config.vm.box = "hashicorp/precise64"
|
||||
config.vm.hostname = "bookshelf-dev"
|
||||
# using a private network here, so don't forget to update your /etc/host file.
|
||||
# 192.168.50.4 bookshelf.example
|
||||
config.vm.network "private_network", ip: "192.168.50.4"
|
||||
|
||||
config.ssh.insert_key = false
|
||||
config.ssh.private_key_path = [
|
||||
private_key_path,
|
||||
insecure_key_path # to provision the first time
|
||||
]
|
||||
|
||||
# reference: https://github.com/hashicorp/vagrant/issues/992 @dwickern
|
||||
# use host/personal public and private key for security reasons
|
||||
config.vm.provision :shell, :inline => <<-SCRIPT
|
||||
set -e
|
||||
mkdir -p /vagrant/.ssh/
|
||||
|
||||
echo '#{private_key}' > /vagrant/.ssh/id_rsa
|
||||
chmod 600 /vagrant/.ssh/id_rsa
|
||||
|
||||
echo '#{public_key}' > /vagrant/.ssh/authorized_keys
|
||||
chmod 600 /vagrant/.ssh/authorized_keys
|
||||
SCRIPT
|
||||
|
||||
# Use a shell provisioner here
|
||||
config.vm.provision "shell" do |s|
|
||||
s.path = ".provision/setup_env.sh"
|
||||
s.args = ["set_up_python"]
|
||||
end
|
||||
|
||||
|
||||
config.vm.provision "shell" do |s|
|
||||
s.path = ".provision/setup_nginx.sh"
|
||||
s.args = ["set_up_nginx"]
|
||||
end
|
||||
|
||||
if Vagrant.has_plugin?("vagrant-vbguest")
|
||||
config.vbguest.auto_update = false
|
||||
end
|
||||
|
||||
# set your environment parameters here
|
||||
config.vm.provider 'virtualbox' do |v|
|
||||
v.memory = 2048
|
||||
v.cpus = 2
|
||||
end
|
||||
|
||||
config.vm.post_up_message = "At this point use `vagrant ssh` to ssh into the development environment"
|
||||
end
|
||||
|
||||
```
|
||||
|
||||
Something to note here: in the line `config.vm.network "private_network", ip: "192.168.50.4"` I configured the virtual machine's network to use the private address 192.168.50.4, and I edited my `/etc/hosts` file to map that IP address to the application's fully qualified domain name (FQDN), `bookshelf.example`. So don't forget to edit your `/etc/hosts` file as well; it should look like this:
|
||||
```
|
||||
##
|
||||
# /etc/host
|
||||
# Host Database
|
||||
#
|
||||
# localhost is used to configure the loopback interface
|
||||
# when the system is booting. Do not change this entry.
|
||||
##
|
||||
127.0.0.1 localhost
|
||||
255.255.255.255 broadcasthost
|
||||
::1 localhost
|
||||
192.168.50.4 bookshelf.example
|
||||
|
||||
```
|
||||
|
||||
The provision scripts can be found in the `.provision` [folder][13] of the repository
|
||||
![provision_sd.png][14]
|
||||
|
||||
There you will see all the scripts used in the setup; `start_app.sh` is used to run the application once you are in the virtual machine via ssh.
|
||||
|
||||
To start the process, run `vagrant up && vagrant ssh`. This will bring up the VM and take you into it via ssh; inside the VM, navigate to the `/vagrant/` folder and start the app with the command `./start_app.sh`.
|
||||
|
||||
With our application up and running, next would be to create a load testing script to run against our setup.
|
||||
|
||||
**NB:** The current application setup makes use of sqlite3 for the database config; you can change that to Postgres by uncommenting it in the settings file. Also, `setup_env.sh` provisions the environment to use Postgres.
|
||||
|
||||
To set up a more comprehensive and robust production replica environment I would suggest you reference the docs [here][15], you can also check out [vagrant][5] to understand and play with vagrant.
|
||||
|
||||
### Set up locust for load-testing
|
||||
In order to perform load testing, we are going to make use of locust. The source code can be found [here][9].
|
||||
|
||||
First, we create our locust file
|
||||
```
|
||||
# locustfile.py
|
||||
|
||||
# script used against vagrant set up on bookshelf git repo
|
||||
# url to repo: https://github.com/andela-sjames/bookshelf
|
||||
|
||||
from locust import HttpLocust, TaskSet, task
|
||||
|
||||
class SampleTrafficTask(TaskSet):
|
||||
|
||||
@task(2)
|
||||
def index(self):
|
||||
self.client.get("/")
|
||||
|
||||
@task(1)
|
||||
def search_for_book_that_contains_string_space(self):
|
||||
self.client.get("/?q=space")
|
||||
|
||||
@task(1)
|
||||
def search_for_book_that_contains_string_man(self):
|
||||
self.client.get("/?q=man")
|
||||
|
||||
class WebsiteUser(HttpLocust):
|
||||
host = "http://bookshelf.example"
|
||||
task_set = SampleTrafficTask
|
||||
min_wait = 5000
|
||||
max_wait = 9000
|
||||
|
||||
```
|
||||
|
||||
Here is a simple locust file called `locustfile.py`, where we define a number of locust tasks grouped under the `TaskSet` class. Then we have the `HttpLocust` class, which represents a user; on it we define how long a simulated user should wait between executing tasks, as well as which `TaskSet` class defines the user’s “behavior”.
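
The @task weights deserve a note: index is declared with @task(2) and each search task with @task(1), so index should be chosen about half the time. A quick sketch of that selection behavior (a simulation of the weighting, not Locust's actual internals):

```python
# A sketch: simulate weighted task picking as the @task(2)/@task(1)
# decorators imply -- index should account for roughly 50% of picks.
import random

weighted_tasks = ["index"] * 2 + ["search_space"] * 1 + ["search_man"] * 1
random.seed(42)
picks = [random.choice(weighted_tasks) for _ in range(100_000)]
index_share = picks.count("index") / len(picks)
print(round(index_share, 1))  # → 0.5
```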
|
||||
|
||||
Using the filename `locustfile.py` allows us to start the process by simply running the command `locust`. If you give your file a different name, you just need to reference its path with `locust -f /path/to/the/locust/file` to start the script.
|
||||
|
||||
If you're getting excited and want to know more, the [quick start][16] guide will get you up to speed.
|
||||
|
||||
### Execute the test and check performance
|
||||
|
||||
It's time to see some action ![😮][17]
|
||||
|
||||
Bookshelf app:
|
||||
Run the application via `vagrant up && vagrant ssh`, navigate to `/vagrant`, and run `./start_app.sh`.
|
||||
|
||||
Vagrant allows you to shut down the running machine using `vagrant halt` and to destroy the machine and all the resources that were created with it using `vagrant destroy`. Use this [link][18] to know more about the vagrant command line.
|
||||
|
||||
![bookshelf_str.png][14]
|
||||
|
||||
Go to your browser and use the private IP address `192.168.50.4`, or preferably `http://bookshelf.example`, which we set in the system's `/etc/hosts` file:
|
||||
`192.168.50.4 bookshelf.example`
|
||||
|
||||
![bookshelf_app_web.png][14]
|
||||
|
||||
Locust Swarm:
|
||||
Within your load-testing folder, activate your `virtualenv`, get your dependencies down via `pip install -r requirements.txt` and run `locust`
|
||||
|
||||
![locust_str.png][14]
|
||||
|
||||
We're almost done:
|
||||
Now go to `http://127.0.0.1:8089/` in your browser.
|
||||
![locust_rate.png][14]
|
||||
|
||||
Enter the number of users you want to simulate and the hatch rate (i.e. how many users should be spawned per second), then start swarming your development environment.
|
||||
|
||||
**NB: You can also run locust against a development environment hosted via a cloud service if that is your use case. You don't have to confine yourself to vagrant.**
|
||||
|
||||
With the report and metrics generated by the process, you should be able to make a well-informed decision regarding your system architecture, or at least know the limits of your system and prepare for an anticipated event.
|
||||
|
||||
![locust_1.png][14]
|
||||
|
||||
![locust_a.png][14]
|
||||
|
||||
![locust_b.png][14]
|
||||
|
||||
![locust_error.png][14]
|
||||
|
||||
### Conclusion
|
||||
Congrats if you made it to the end! As a recap, we talked about what load testing is, why you would want to perform a load test on your application, and how to do it using Locust and Vagrant with a VirtualBox provider and a shell provisioner. We also looked at the metrics and data generated from the test.
|
||||
|
||||
**NB: If you want a more concise vagrant production environment you can reference the docs [here][15].**
|
||||
|
||||
Thanks for reading and feel free to like/share this post.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.codementor.io/samueljames/locust-io-load-testing-using-vagrant-ffwnjger9
|
||||
|
||||
作者:[Samuel James][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[1]:https://en.wikipedia.org/wiki/Non-functional_testing
|
||||
[2]:https://en.wikipedia.org/wiki/Load_testing
|
||||
[3]:https://locust.io/
|
||||
[4]:https://twemoji.maxcdn.com/2/72x72/1f603.png
|
||||
[5]:https://www.vagrantup.com/intro/index.html
|
||||
[6]:https://www.vagrantup.com/downloads.html
|
||||
[7]:https://www.virtualbox.org/wiki/Downloads
|
||||
[8]:https://github.com/andela-sjames/bookshelf
|
||||
[9]:https://github.com/andela-sjames/load-testing
|
||||
[10]:https://en.wikipedia.org/wiki/Vagrant_(software)
|
||||
[11]:https://twemoji.maxcdn.com/2/72x72/1f609.png
|
||||
[12]:https://www.vagrantup.com/intro/getting-started/install.html
|
||||
[13]:https://github.com/andela-sjames/bookshelf/tree/master/.provision
|
||||
[14]:data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==
|
||||
[15]:http://vagrant-django.readthedocs.io/en/latest/intro.html
|
||||
[16]:https://docs.locust.io/en/latest/quickstart.html
|
||||
[17]:https://twemoji.maxcdn.com/2/72x72/1f62e.png
|
||||
[18]:https://www.vagrantup.com/docs/cli/
|
@ -0,0 +1,97 @@
|
||||
New Linux User? Try These 8 Great Essential Linux Apps
|
||||
======
|
||||
|
||||
![](https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-00-Featured.png)
|
||||
|
||||
When you are new to Linux, even if you are not new to computers in general, one of the problems you will face is which apps to use. With thousands of Linux apps available, the choice is certainly not easy. Below you will find eight essential Linux apps to get you settled in quickly.
|
||||
|
||||
Most of these apps are not exclusive to Linux. If you have used Windows/Mac before, chances are you are familiar with some of them. Depending on what your needs and interests are, you might not need all these apps, but in my opinion, most or all of the apps on this list are useful for newbies who are just starting out on Linux.
|
||||
|
||||
**Related** : [11 Portable Apps Every Linux User Should Use][1]
|
||||
|
||||
### 1. Chromium Web Browser
|
||||
|
||||
![linux-apps-01-chromium][2]
|
||||
|
||||
There is hardly a user who doesn’t need a web browser. While you can find good old Firefox for almost any Linux distro, and there are also a bunch of other [Linux browsers][3], one browser you should definitely try is [Chromium][4]. It’s the open source counterpart of Google’s Chrome browser. The main advantages of Chromium are that it is secure and fast. There are also tons of add-ons for it.
|
||||
|
||||
### 2. LibreOffice
|
||||
|
||||
![linux-apps-02-libreoffice][5]
|
||||
|
||||
[LibreOffice][6] is an open source office suite that comes with a word processor (Writer), a spreadsheet (Calc), a presentation tool (Impress), a database (Base), a formula editor (Math), and a vector graphics and flowcharts application (Draw). It’s compatible with Microsoft Office documents, and there are even [LibreOffice extensions][7] if the default functionality isn’t enough for you.
|
||||
|
||||
LibreOffice is definitely one essential Linux app that you should have on your Linux computer.
|
||||
|
||||
### 3. GIMP
|
||||
|
||||
![linux-apps-03-gimp][8]
|
||||
|
||||
[GIMP][9] is a very powerful open-source image editor. It’s similar to Photoshop. With GIMP you can edit photos and create and edit raster images for the Web and print. It’s true there are simpler image editors for Linux, so if you have no idea about image processing at all, GIMP might look too complicated to you. GIMP goes way beyond simple image crop and resize – it offers layers, filters, masks, paths, etc.
|
||||
|
||||
### 4. VLC Media Player
|
||||
|
||||
![linux-apps-04-vlc][10]
|
||||
|
||||
[VLC][11] is probably the best movie player. It’s cross-platform, so you might know it from Windows. What’s really special about VLC is that it comes with lots of codecs (not all of which are open source, though), so it will play (almost) any music or video file.
|
||||
|
||||
### 5. Jitsi
|
||||
|
||||
![linux-apps-05-jitsi][12]
|
||||
|
||||
[Jitsi][13] is all about communication. You can use it for Google Talk, Facebook chat, Yahoo, ICQ and XMPP. It’s a multi-user tool for audio and video calls (including conference calls), as well as desktop streaming and group chats. Conversations are encrypted. With Jitsi you can also transfer files and record your calls.
|
||||
|
||||
### 6. Synaptic
|
||||
|
||||
![linux-apps-06-synaptic][14]
|
||||
|
||||
[Synaptic][15] is an alternative app installer for Debian-based distros. It comes with some distros but not all, so if you are using a Debian-based Linux that doesn’t include Synaptic, you might want to give it a try. Synaptic is a GUI tool for adding and removing apps from your system, and veteran Linux users typically favor it over the [Software Center package manager][16] that comes with many distros as a default.
|
||||
|
||||
**Related** : [10 Free Linux Productivity Apps You Haven’t Heard Of][17]
|
||||
|
||||
### 7. VirtualBox
|
||||
|
||||
![linux-apps-07-virtualbox][18]
|
||||
|
||||
[VirtualBox][19] allows you to run a virtual machine on your computer. A virtual machine comes in handy when you want to install another Linux distro or operating system from within your current Linux distro. You can use it to run Windows apps as well. Performance will be slower, but if you have a powerful computer, it won’t be that bad.
|
||||
|
||||
### 8. AisleRiot Solitaire
|
||||
|
||||
![linux-apps-08-aisleriot][20]
|
||||
|
||||
A solitaire pack is hardly an absolute necessity for a new Linux user, but it’s so much fun that it earns a place on this list. If you are into solitaire games, this is a great pack. [AisleRiot][21] is one of the emblematic Linux apps, and for a good reason – it comes with more than eighty solitaire games, including the popular Klondike, Bakers Dozen, and Camelot. Just be warned – it’s addictive, and you might end up spending long hours playing!
|
||||
|
||||
Depending on the distro you are using, the way to install these apps is not the same. However, most, if not all, of these apps will be available for install with a package manager for your distro, or even come pre-installed with your distro. The best thing is, you can install and try them out and easily remove them if you don’t like them.
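On a Debian-based distro, for example, something like `sudo apt install chromium-browser libreoffice gimp vlc synaptic aisleriot` would typically cover most of the list, though the exact package names are an assumption and vary by distro. A quick, distro-agnostic way to see which of these apps are already on your `PATH` (the command names checked here are assumptions too – some distros use different binary names):

```shell
# Check which of these essential apps are already installed by probing PATH.
for app in chromium firefox libreoffice gimp vlc synaptic; do
  if command -v "$app" >/dev/null 2>&1; then
    echo "$app: installed"
  else
    echo "$app: not installed"
  fi
done
```

Each line reports one app, so you can see at a glance what still needs installing.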
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/essential-linux-apps/
|
||||
|
||||
作者:[Ada Ivanova][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/adaivanoff/
|
||||
[1]:https://www.maketecheasier.com/portable-apps-for-linux/ (11 Portable Apps Every Linux User Should Use)
|
||||
[2]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-01-Chromium.jpg (linux-apps-01-chromium)
|
||||
[3]:https://www.maketecheasier.com/linux-browsers-you-probably-havent-heard-of/
|
||||
[4]:http://www.chromium.org/
|
||||
[5]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-02-LibreOffice.jpg (linux-apps-02-libreoffice)
|
||||
[6]:https://www.libreoffice.org/
|
||||
[7]:https://www.maketecheasier.com/best-libreoffice-extensions/
|
||||
[8]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-03-GIMP.jpg (linux-apps-03-gimp)
|
||||
[9]:https://www.gimp.org/
|
||||
[10]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-04-VLC.jpg (linux-apps-04-vlc)
|
||||
[11]:http://www.videolan.org/
|
||||
[12]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-05-Jitsi.jpg (linux-apps-05-jitsi)
|
||||
[13]:https://jitsi.org/
|
||||
[14]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-06-Synaptic.jpg (linux-apps-06-synaptic)
|
||||
[15]:http://www.nongnu.org/synaptic/
|
||||
[16]:https://www.maketecheasier.com/are-linux-gui-software-centers-any-good/
|
||||
[17]:https://www.maketecheasier.com/free-linux-productivity-apps-you-havent-heard-of/ (10 Free Linux Productivity Apps You Haven’t Heard Of)
|
||||
[18]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-07-VirtualBox.jpg (linux-apps-07-virtualbox)
|
||||
[19]:https://www.virtualbox.org/
|
||||
[20]:https://www.maketecheasier.com/assets/uploads/2018/01/Linux-apps-08-AisleRiot.jpg (linux-apps-08-aisleriot)
|
||||
[21]:https://wiki.gnome.org/Aisleriot
|
@ -0,0 +1,221 @@
|
||||
Rancher - Container Management Application
|
||||
======
|
||||
Docker is cutting-edge containerization software, used in many IT companies to reduce infrastructure costs.
|
||||
|
||||
By default, Docker comes without any GUI. That is easy enough for a Linux administrator to manage, but it is very difficult for developers. And when it comes to production at scale, it becomes difficult for the Linux admin too. So, what would be the best solution to manage Docker without any trouble?
|
||||
|
||||
The answer is a GUI. The Docker API allows third-party applications to interface with Docker, and there are many Docker GUI applications available. We have already written an article about Portainer; today we are going to discuss Rancher.
|
||||
|
||||
Containers make software development easier, enabling you to write code faster and run it better. However, running containers in production can be hard.
|
||||
|
||||
**Suggested Read :** [Portainer – A Simple Docker Management GUI][1]
|
||||
|
||||
### What is Rancher
|
||||
|
||||
[Rancher][2] is a complete container management platform that makes it easy to deploy and run containers in production on any infrastructure. It provides infrastructure services such as multi-host networking, global and local load balancing, and volume snapshots. It integrates native Docker management capabilities such as Docker Machine and Docker Swarm. It offers a rich user experience that enables devops admins to operate Docker in production at large scale.
|
||||
|
||||
Navigate to the following articles for Docker installation on Linux.
|
||||
|
||||
**Suggested Read :**
|
||||
**(#)** [How to install Docker in Linux][3]
|
||||
**(#)** [How to play with Docker images on Linux][4]
|
||||
**(#)** [How to play with Docker containers on Linux][5]
|
||||
**(#)** [How to Install, Run Applications inside Docker Containers][6]
|
||||
|
||||
### Rancher Features
|
||||
|
||||
* Set up Kubernetes in two minutes
|
||||
* Launch apps with a single click (nearly 90 popular Docker applications)
|
||||
* Deploy and manage Docker easily
|
||||
* Complete container management platform for production environments
|
||||
* Quickly deploy containers in production
|
||||
* Automate container deployment and operations with a robust technology
|
||||
* Modular infrastructure services
|
||||
* Rich set of orchestration tools
|
||||
* Rancher supports multiple authentication mechanisms
|
||||
|
||||
|
||||
|
||||
### How to install Rancher
|
||||
|
||||
Rancher installation is very simple, since it is deployed as a set of lightweight Docker containers. Running Rancher is as simple as launching two containers: one container as the management server and another container on a node as an agent. Simply run the following commands to deploy Rancher on Linux.
|
||||
|
||||
Rancher server offers two different package tags, `stable` and `latest`. The commands below will pull the appropriate Rancher server image and install it on your system. It will only take a couple of minutes for the Rancher server to start up.
|
||||
|
||||
  * `latest` : This tag points to the latest development builds. These builds have been validated through Rancher's CI automation framework, but they are not advisable for deployment in production.
|
||||
  * `stable` : This is the latest stable release, which is recommended for production environments.
|
||||
|
||||
|
||||
|
||||
Rancher can be installed in several ways. In this tutorial we are going to discuss two variants.
|
||||
|
||||
  * Install Rancher server in a single container (built-in Rancher database)
|
||||
  * Install Rancher server in a single container (external database)
|
||||
|
||||
|
||||
|
||||
### Method-1
|
||||
|
||||
Run one of the following commands to install Rancher server in a single container (built-in Rancher database).
|
||||
```
|
||||
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
|
||||
|
||||
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:latest
|
||||
|
||||
```
|
||||
|
||||
### Method-2
|
||||
|
||||
Instead of using the internal database that comes with Rancher server, you can start Rancher server pointing to an external database. First, create the required database and a database user for it.
|
||||
```
|
||||
> CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';
|
||||
> GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'cattle';
|
||||
> GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'cattle';
|
||||
|
||||
```
|
||||
|
||||
Run the following command to start Rancher connecting to an external database.
|
||||
```
|
||||
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server \
|
||||
--db-host myhost.example.com --db-port 3306 --db-user username --db-pass password --db-name cattle
|
||||
|
||||
```
|
||||
|
||||
If you want to test Rancher 2.0, use the following command to start it.
|
||||
```
|
||||
$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/server:preview
|
||||
|
||||
```
|
||||
|
||||
### Access & Setup Rancher Through GUI
|
||||
|
||||
Navigate to `http://hostname:8080` or `http://server_ip:8080` to access the Rancher GUI.
|
||||
[![][7]![][7]][8]
|
||||
|
||||
### How To Register the Host
|
||||
|
||||
Register your host URL, which allows hosts to connect to the Rancher API. This is a one-time setup.
|
||||
|
||||
To do so, click the “Add a Host” link under the main menu, or go to Infrastructure >> Add Hosts, then hit the `Save` button.
|
||||
[![][7]![][7]][9]
|
||||
|
||||
By default, access control authentication is disabled in Rancher, so first we have to enable access control through one of the available methods; otherwise anyone can access the GUI.
|
||||
|
||||
Go to Admin >> Access Control, input the following values, and finally hit the `Enable Authentication` button to enable it. In my case, I’m enabling `local` authentication.
|
||||
|
||||
  * **`Login UserName`** Input your desired login username
|
||||
  * **`Full Name`** Input your full name
|
||||
  * **`Password`** Input your desired password
|
||||
  * **`Confirm Password`** Confirm the password once again
|
||||
|
||||
|
||||
|
||||
[![][7]![][7]][10]
|
||||
|
||||
Log out and log back in with your new login credentials.
|
||||
[![][7]![][7]][11]
|
||||
|
||||
Now I can see that local authentication is enabled.
|
||||
[![][7]![][7]][12]
|
||||
|
||||
### How To Add Hosts
|
||||
|
||||
After registering your host, you will be taken to the next page, where you can choose Linux machines from various cloud providers. We are going to add the host that is running the Rancher server, so select the `Custom` option and input the required information.
|
||||
|
||||
Enter your server’s public IP address in the 4th step, run the command displayed in the 5th step in your terminal, then finally hit the `Close` button.
|
||||
```
|
||||
$ sudo docker run -e CATTLE_AGENT_IP="192.168.1.112" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://192.168.1.112:8080/v1/scripts/3F8217A1DCF01A7B7F8A:1514678400000:D7WeLUcEUnqZOt8rWjrvoaUE
|
||||
|
||||
INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.1.112:8080/v1
|
||||
INFO: Attempting to connect to: http://66.70.189.137:8080/v1
|
||||
INFO: http://192.168.1.112:8080/v1 is accessible
|
||||
INFO: Inspecting host capabilities
|
||||
INFO: Boot2Docker: false
|
||||
INFO: Host writable: true
|
||||
INFO: Token: xxxxxxxx
|
||||
INFO: Running registration
|
||||
INFO: Printing Environment
|
||||
INFO: ENV: CATTLE_ACCESS_KEY=A35151AB87C15633DFB4
|
||||
INFO: ENV: CATTLE_AGENT_IP=192.168.1.112
|
||||
INFO: ENV: CATTLE_HOME=/var/lib/cattle
|
||||
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
|
||||
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
|
||||
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
|
||||
INFO: ENV: CATTLE_URL=http://192.168.1.112:8080/v1
|
||||
INFO: ENV: DETECTED_CATTLE_AGENT_IP=172.17.0.1
|
||||
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
|
||||
INFO: Deleting container rancher-agent
|
||||
INFO: Launched Rancher Agent: 3415a1fd101f3c57d9cff6aef373c0ce66a3e20772122d2ca832039dcefd92fd
|
||||
|
||||
```
|
||||
|
||||
[![][7]![][7]][13]
|
||||
|
||||
Wait a few seconds and the newly added host will be visible. To view it, go to the Infrastructure >> Hosts page.
|
||||
[![][7]![][7]][14]
|
||||
|
||||
### How To View Containers
|
||||
|
||||
Just navigate to the following location to view a list of running containers: Go to >> Infrastructure >> Containers.
|
||||
[![][7]![][7]][15]
|
||||
|
||||
### How To Create Container
|
||||
|
||||
It’s very simple, just navigate the following location to create a container.
|
||||
|
||||
Go to >> Infrastructure >> Containers >> “Add Container” and input the required information as per your requirements. To test this, I’m going to create a CentOS container with the latest OS.
|
||||
[![][7]![][7]][16]
|
||||
|
||||
The same container is now listed under Infrastructure >> Containers.
|
||||
[![][7]![][7]][17]
|
||||
|
||||
Click the container name to view container performance information such as CPU, memory, network, and storage.
|
||||
[![][7]![][7]][18]
|
||||
|
||||
To manage a container (stop, start, clone, restart, etc.), choose the particular container, then hit the `three dots` button on the left side of the container, or the `Actions` button, to perform the action.
|
||||
[![][7]![][7]][19]
|
||||
|
||||
If you want console access to the container, just hit the `Execute Shell` option in the Actions menu.
|
||||
[![][7]![][7]][20]
|
||||
|
||||
### How To Deploy Container From Application Catalog
|
||||
|
||||
Rancher provides a catalog of application templates that make it easy to deploy applications in a single click. It maintains nearly 90 popular applications contributed by the Rancher community.
|
||||
[![][7]![][7]][21]
|
||||
|
||||
Go to >> Catalog >> All >> Choose the required application >> Finally hit “Launch” button to deploy.
|
||||
[![][7]![][7]][22]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/magesh/
|
||||
[1]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/
|
||||
[2]:http://rancher.com/
|
||||
[3]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/
|
||||
[4]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/
|
||||
[5]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/
|
||||
[6]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/
|
||||
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[8]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-1.png
|
||||
[9]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-2.png
|
||||
[10]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3.png
|
||||
[11]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3a.png
|
||||
[12]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-4.png
|
||||
[13]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-5.png
|
||||
[14]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-6.png
|
||||
[15]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-7.png
|
||||
[16]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-8.png
|
||||
[17]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-9.png
|
||||
[18]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-10.png
|
||||
[19]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-11.png
|
||||
[20]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-12.png
|
||||
[21]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-13.png
|
||||
[22]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-14.png
|
535
sources/tech/20180206 Manage printers and printing.md
Normal file
@ -0,0 +1,535 @@
|
||||
Manage printers and printing
|
||||
======
|
||||
|
||||
|
||||
### Printing in Linux
|
||||
|
||||
Although much of our communication today is electronic and paperless, we still have considerable need to print material from our computers. Bank statements, utility bills, financial and other reports, and benefits statements are just some of the items that we still print. This tutorial introduces you to printing in Linux using CUPS.
|
||||
|
||||
CUPS, formerly an acronym for Common UNIX Printing System, is the printer and print job manager for Linux. Early computer printers typically printed lines of text in a particular character set and font size. Today's graphical printers are capable of printing both graphics and text in a variety of sizes and fonts. Nevertheless, some of the commands you use today have their history in the older line printer daemon (LPD) technology.
|
||||
|
||||
This tutorial helps you prepare for Objective 108.4 in Topic 108 of the Linux Server Professional (LPIC-1) exam 102. The objective has a weight of 2.
|
||||
|
||||
#### Prerequisites
|
||||
|
||||
To get the most from the tutorials in this series, you need a basic knowledge of Linux and a working Linux system on which you can practice the commands covered in this tutorial. You should be familiar with GNU and UNIX® commands. Sometimes different versions of a program format output differently, so your results might not always look exactly like the listings shown here.
|
||||
|
||||
In this tutorial, I use Fedora 27 for examples.
|
||||
|
||||
### Some printing history
|
||||
|
||||
This small history is not part of the LPI objectives but may help you with context for this objective.
|
||||
|
||||
Early computers mostly used line printers. These were impact printers that printed a line of text at a time using fixed-pitch characters and a single font. To speed up overall system performance, early mainframe computers interleaved work for slow peripherals such as card readers, card punches, and line printers with other work. Thus was born Simultaneous Peripheral Operation On Line or spooling, a term that is still commonly used when talking about computer printing.
|
||||
|
||||
In UNIX and Linux systems, printing initially used the Berkeley Software Distribution (BSD) printing subsystem, consisting of a line printer daemon (lpd) running as a server, and client commands such as `lpr` to submit jobs for printing. This protocol was later standardized by the IETF as RFC 1179, **Line Printer Daemon Protocol**.
|
||||
|
||||
System V also had a printing daemon. It was functionally similar to the Berkeley LPD, but had a different command set. You will frequently see two commands with different options that accomplish the same task. For example, `lpr` from the Berkeley implementation and `lp` from the System V implementation each print files.
|
||||
|
||||
Advances in printer technology made it possible to mix different fonts on a page and to print images as well as words. Variable pitch fonts, and more advanced printing techniques such as kerning and ligatures, are now standard. Several improvements to the basic lpd/lpr approach to printing were devised, such as LPRng, the next generation LPR, and CUPS.
|
||||
|
||||
Many printers capable of graphical printing initially used the Adobe PostScript language. A PostScript printer has an engine that interprets the commands in a print job and produces finished pages from these commands. PostScript is often used as an intermediate form between an original file, such as a text or an image file, and a final form suitable for a particular printer that does not have PostScript capability. Conversion of a print job, such as an ASCII text file or a JPEG image to PostScript, and conversion from PostScript to the final raster form required for a non-PostScript printer is done using filters.
|
||||
|
||||
Today, Portable Document Format (PDF), which is based on PostScript, has largely replaced raw PostScript. PDF is designed to be independent of hardware and software and to encapsulate a full description of the pages to be printed. You can view PDF files as well as print them.
|
||||
|
||||
### Manage print queues
|
||||
|
||||
Users direct print jobs to a logical entity called a print queue. In single-user systems, a print queue and a printer are usually equivalent. However, CUPS allows a system without an attached printer to queue print jobs for eventual printing on a remote system, and, through the use of classes, allows a print job directed to a class to be printed on the first available printer of that class.
|
||||
|
||||
You can inspect and manipulate print queues. Some of the commands to do so are new for CUPS. Others are compatibility commands that have their roots in LPD commands, although the current options are usually a limited subset of the original LPD printing system options.
|
||||
|
||||
You can check the queues known to the system using the CUPS `lpstat` command. Some common options are shown in Table 1.
|
||||
|
||||
###### Table 1. Options for lpstat
|
||||
| Option | Purpose |
|
||||
| -a | Display accepting status of printers. |
|
||||
| -c | Display print classes. |
|
||||
| -p | Display print status: enabled or disabled. |
|
||||
| -s | Display a status summary: default destination, classes, and devices. Equivalent to -d -c -v. Note that options must be specified separately, since many of them can take values. |
|
||||
| -v | Display printers and their devices. |
|
||||
|
||||
|
||||
You may also use the LPD `lpc` command, found in /usr/sbin, with the `status` option. If you do not specify a printer name, all queues are listed. Listing 1 shows some examples of both commands.
|
||||
|
||||
###### Listing 1. Displaying available print queues
|
||||
```
|
||||
[ian@atticf27 ~]$ lpstat -d
|
||||
system default destination: HL-2280DW
|
||||
[ian@atticf27 ~]$ lpstat -v HL-2280DW
|
||||
device for HL-2280DW: dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
|
||||
[ian@atticf27 ~]$ lpstat -s
|
||||
system default destination: HL-2280DW
|
||||
members of class anyprint:
|
||||
HL-2280DW
|
||||
XP-610
|
||||
device for anyprint: ///dev/null
|
||||
device for HL-2280DW: dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
|
||||
device for XP-610: dnssd://EPSON%20XP-610%20Series._ipp._tcp.local/?uuid=cfe92100-67c4-11d4-a45f-ac18266c48aa
|
||||
[ian@atticf27 ~]$ lpstat -a XP-610
|
||||
XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT
|
||||
[ian@atticf27 ~]$ /usr/sbin/lpc status HL-2280DW
|
||||
HL-2280DW:
|
||||
printer is on device 'dnssd' speed -1
|
||||
queuing is disabled
|
||||
printing is enabled
|
||||
no entries
|
||||
daemon present
|
||||
|
||||
```
|
||||
|
||||
This example shows two printers, HL-2280DW and XP-610, and a class, `anyprint`, which allows print jobs to be directed to the first available of these two printers.
|
||||
|
||||
In this example, queuing of print jobs to HL-2280DW is currently disabled, although printing is enabled, as might be done in order to drain the queue before taking the printer offline for maintenance. Whether queuing is enabled or disabled is controlled by the `cupsaccept` and `cupsreject` commands. Formerly, these were `accept` and `reject`, but you will probably find these commands in /usr/sbin are now just links to the newer commands. Similarly, whether printing is enabled or disabled is controlled by the `cupsenable` and `cupsdisable` commands. In earlier versions of CUPS, these were called `enable` and `disable`, which allowed confusion with the builtin bash shell `enable`. Listing 2 shows how to enable queuing on printer HL-2280DW while disabling printing. Several of the CUPS commands support a `-r` option to give a reason for the action. This reason is displayed when you use `lpstat`, but not if you use `lpc`.
|
||||
|
||||
###### Listing 2. Enabling queuing and disabling printing
|
||||
```
|
||||
[ian@atticf27 ~]$ lpstat -a -p HL-2280DW
|
||||
anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
|
||||
HL-2280DW not accepting requests since Thu 27 Apr 2017 05:52:27 PM EDT -
|
||||
Maintenance scheduled
|
||||
XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT
|
||||
printer HL-2280DW is idle. enabled since Thu 27 Apr 2017 05:52:27 PM EDT
|
||||
Maintenance scheduled
|
||||
[ian@atticf27 ~]$ accept HL-2280DW
|
||||
[ian@atticf27 ~]$ cupsdisable -r "waiting for toner delivery" HL-2280DW
|
||||
[ian@atticf27 ~]$ lpstat -p -a
|
||||
printer anyprint is idle. enabled since Mon 29 Jan 2018 01:17:09 PM EST
|
||||
printer HL-2280DW disabled since Mon 29 Jan 2018 04:03:50 PM EST -
|
||||
waiting for toner delivery
|
||||
printer XP-610 is idle. enabled since Thu 27 Apr 2017 05:53:59 PM EDT
|
||||
anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
|
||||
HL-2280DW accepting requests since Mon 29 Jan 2018 04:03:50 PM EST
|
||||
XP-610 accepting requests since Thu 27 Apr 2017 05:53:59 PM EDT
|
||||
|
||||
```
|
||||
|
||||
Note that an authorized user must perform these tasks. This may be root or another authorized user. See the SystemGroup entry in /etc/cups/cups-files.conf and the man page for cups-files.conf for more information on authorizing user groups.
|
||||
|
||||
### Manage user print jobs
|
||||
|
||||
Now that you have seen a little of how to check on print queues and classes, I will show you how to manage jobs on printer queues. The first thing you might want to do is find out whether any jobs are queued for a particular printer or for all printers. You do this with the `lpq` command. If no option is specified, `lpq` displays the queue for the default printer. Use the `-P` option with a printer name to specify a particular printer or the `-a` option to specify all printers, as shown in Listing 3.
|
||||
|
||||
###### Listing 3. Checking print queues with lpq
|
||||
```
|
||||
[pat@atticf27 ~]$ # As user pat (non-administrator)
|
||||
[pat@atticf27 ~]$ lpq
|
||||
HL-2280DW is not ready
|
||||
Rank Owner Job File(s) Total Size
|
||||
1st unknown 4 unknown 6144 bytes
|
||||
2nd pat 6 bitlib.h 6144 bytes
|
||||
3rd pat 7 bitlib.C 6144 bytes
|
||||
4th unknown 8 unknown 1024 bytes
|
||||
5th unknown 9 unknown 1024 bytes
|
||||
|
||||
[ian@atticf27 ~]$ # As user ian (administrator)
|
||||
[ian@atticf27 ~]$ lpq -P xp-610
|
||||
xp-610 is ready
|
||||
no entries
|
||||
[ian@atticf27 ~]$ lpq -a
|
||||
Rank Owner Job File(s) Total Size
|
||||
1st ian 4 permutation.C 6144 bytes
|
||||
2nd pat 6 bitlib.h 6144 bytes
|
||||
3rd pat 7 bitlib.C 6144 bytes
|
||||
4th ian 8 .bashrc 1024 bytes
|
||||
5th ian 9 .bashrc 1024 bytes
|
||||
|
||||
```
|
||||
|
||||
In this example, five jobs, 4, 6, 7, 8, and 9, are queued for the printer named HL-2280DW and none for XP-610. Using the `-P` option in this case simply shows that the printer is ready but has no queued jobs. Note that CUPS printer names are not case-sensitive. Note also that user ian submitted a job twice, a common user action when a job does not print the first time.
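Output like this is easy to summarize with a short pipeline. The sketch below tallies queued jobs per owner; the sample data is copied from Listing 3, and on a live system you would pipe `lpq -a` directly into `awk` instead.

```shell
# Count queued jobs per owner; sample data mirrors Listing 3.
lpq_output='Rank    Owner   Job     File(s)                         Total Size
1st     ian     4       permutation.C                   6144 bytes
2nd     pat     6       bitlib.h                        6144 bytes
3rd     pat     7       bitlib.C                        6144 bytes
4th     ian     8       .bashrc                         1024 bytes
5th     ian     9       .bashrc                         1024 bytes'

# Skip the header line, count field 2 (Owner), print one total per owner.
per_owner=$(printf '%s\n' "$lpq_output" |
    awk 'NR > 1 { count[$2]++ } END { for (o in count) print o, count[o] }' |
    sort)
printf '%s\n' "$per_owner"
```

On a real system, replace the here-string with `lpq -a | awk ...`.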
|
||||
|
||||
In general, you can view or manipulate your own print jobs, but root or another authorized user is usually required to manipulate the jobs of others. Most CUPS commands also support a `-E` option to force encrypted communication between the CUPS client command and the CUPS server.
|
||||
|
||||
Use the `lprm` command to remove one of the .bashrc jobs from the queue. With no options, the current job is removed. With the `-` option, all jobs are removed. Otherwise, specify a list of jobs to be removed as shown in Listing 4.
|
||||
|
||||
###### Listing 4. Deleting print jobs with lprm
|
||||
```
|
||||
[pat@atticf27 ~]$ # As user pat (non-administrator)
|
||||
[pat@atticf27 ~]$ lprm
|
||||
lprm: Forbidden
|
||||
|
||||
[ian@atticf27 ~]$ # As user ian (administrator)
|
||||
[ian@atticf27 ~]$ lprm 8
|
||||
[ian@atticf27 ~]$ lpq
|
||||
HL-2280DW is not ready
|
||||
Rank Owner Job File(s) Total Size
|
||||
1st ian 4 permutation.C 6144 bytes
|
||||
2nd pat 6 bitlib.h 6144 bytes
|
||||
3rd pat 7 bitlib.C 6144 bytes
|
||||
4th ian 9 .bashrc 1024 bytes
|
||||
|
||||
```
|
||||
|
||||
Note that user pat was not able to remove the first job on the queue, because it was for user ian. However, ian was able to remove his own job number 8.
|
||||
|
||||
Another command that will help you manipulate jobs on print queues is the `lp` command. Use it to alter attributes of jobs, such as priority or number of copies. Let us assume user ian wants his job 9 to print before those of user pat, and he really did want two copies of it. The job priority ranges from a lowest priority of 1 to a highest priority of 100 with a default of 50. User ian could use the `-i`, `-n`, and `-q` options to specify a job to alter and a new number of copies and priority as shown in Listing 5. Note the use of the `-l` option of the `lpq` command, which provides more verbose output.
|
||||
|
||||
###### Listing 5. Changing the number of copies and priority with lp
|
||||
```
|
||||
[ian@atticf27 ~]$ lpq
|
||||
HL-2280DW is not ready
|
||||
Rank Owner Job File(s) Total Size
|
||||
1st ian 4 permutation.C 6144 bytes
|
||||
2nd pat 6 bitlib.h 6144 bytes
|
||||
3rd pat 7 bitlib.C 6144 bytes
|
||||
4th ian 9 .bashrc 1024 bytes
|
||||
[ian@atticf27 ~]$ lp -i 9 -q 60 -n 2
|
||||
[ian@atticf27 ~]$ lpq
|
||||
HL-2280DW is not ready
|
||||
Rank Owner Job File(s) Total Size
|
||||
1st ian 9 .bashrc 1024 bytes
|
||||
2nd ian 4 permutation.C 6144 bytes
|
||||
3rd pat 6 bitlib.h 6144 bytes
|
||||
4th pat 7 bitlib.C 6144 bytes
|
||||
|
||||
```
|
||||
|
||||
Finally, the `lpmove` command allows jobs to be moved from one queue to another. For example, we might want to do this because printer HL-2280DW is not currently printing. You can specify just a job number, such as 9, or you can qualify it with the queue name and a hyphen, such as HL-2280DW-9. The `lpmove` command requires an authorized user. Listing 6 shows how to move these jobs to another queue, specifying first by printer and job ID, then all jobs for a given printer. By the time we check the queues again, one of the jobs is already printing.
|
||||
|
||||
###### Listing 6. Moving jobs to another print queue with lpmove
|
||||
```
|
||||
[ian@atticf27 ~]$ lpmove HL-2280DW-9 anyprint
|
||||
[ian@atticf27 ~]$ lpmove HL-2280DW xp-610
|
||||
[ian@atticf27 ~]$ lpq -a
|
||||
Rank Owner Job File(s) Total Size
|
||||
active ian 9 .bashrc 1024 bytes
|
||||
1st ian 4 permutation.C 6144 bytes
|
||||
2nd pat 6 bitlib.h 6144 bytes
|
||||
3rd pat 7 bitlib.C 6144 bytes
|
||||
[ian@atticf27 ~]$ # A few minutes later
|
||||
[ian@atticf27 ~]$ lpq -a
|
||||
Rank Owner Job File(s) Total Size
|
||||
active pat 6 bitlib.h 6144 bytes
|
||||
1st pat 7 bitlib.C 6144 bytes
|
||||
|
||||
```
|
||||
|
||||
If you happen to use a print server that is not CUPS, such as LPD or LPRng, many of the queue administration functions are handled as subcommands of the `lpc` command. For example, you might use `lpc topq` to move a job to the top of a queue. Other `lpc` subcommands include `disable`, `down`, `enable`, `hold`, `move`, `redirect`, `release`, and `start`. These subcommands are not implemented in the CUPS `lpc` compatibility command.
|
||||
|
||||
#### Printing files
|
||||
|
||||
How are print jobs created? Many graphical programs provide a method of printing, usually under the **File** menu option. These programs provide graphical tools for choosing a printer, margin sizes, color or black-and-white printing, number of copies, selecting 2-up printing (which is 2 pages per sheet, often used for handouts), and so on. Here I show you the command-line tools for controlling such features, and then a graphical implementation for comparison.
|
||||
|
||||
The simplest way to print any file is to use the `lpr` command and provide the file name. This prints the file on the default printer. The `lp` command can print files as well as modify print jobs. Listing 7 shows a simple example using both commands. Note that `lpr` quietly spools the job, but `lp` displays the job number of the spooled job.
|
||||
|
||||
###### Listing 7. Printing with lpr and lp
|
||||
```
|
||||
[ian@atticf27 ~]$ echo "Print this text" > printexample.txt
|
||||
[ian@atticf27 ~]$ lpr printexample.txt
|
||||
[ian@atticf27 ~]$ lp printexample.txt
|
||||
request id is HL-2280DW-12 (1 file(s))
|
||||
|
||||
```
|
||||
|
||||
Table 2 shows some options that you may use with `lpr`. Note that `lp` has similar options to `lpr`, but names may differ; for example, `-#` on `lpr` is equivalent to `-n` on `lp`. Check the man pages for more information.
|
||||
|
||||
###### Table 2. Options for lpr
|
||||
|
||||
| Option | Purpose |
|--------|---------|
|
||||
| -C, -J, or -T | Set a job name. |
|
||||
| -P | Select a particular printer. |
|
||||
| -# | Specify number of copies. Note this is different from the -n option you saw with the lp command. |
|
||||
| -m | Send email upon job completion. |
|
||||
| -l | Indicate that the print file is already formatted for printing. Equivalent to -o raw. |
|
||||
| -o | Set a job option. |
|
||||
| -p | Format a text file with a shaded header. Equivalent to -o prettyprint. |
|
||||
| -q | Hold (or queue) the job for later printing. |
|
||||
| -r | Remove the file after it has been spooled for printing. |
|
||||
|
||||
Listing 8 shows some of these options in action. I request two copies of a named job, an email confirmation after printing, that the job be held, and that the file be deleted after it has been spooled.
|
||||
|
||||
###### Listing 8. Printing with lpr
|
||||
```
|
||||
[ian@atticf27 ~]$ lpr -P HL-2280DW -J "Ian's text file" -#2 -m -p -q -r printexample.txt
|
||||
[ian@atticf27 ~]$ lpq -l
|
||||
HL-2280DW is ready
|
||||
|
||||
|
||||
ian: 1st [job 13 localhost]
|
||||
2 copies of Ian's text file 1024 bytes
|
||||
[ian@atticf27 ~]$ ls printexample.txt
|
||||
ls: cannot access 'printexample.txt': No such file or directory
|
||||
|
||||
```
|
||||
|
||||
I now have a held job in the HL-2280DW print queue. What to do? The `lp` command has options to hold and release jobs, using various values with the `-H` option. Listing 9 shows how to release the held job. Check the `lp` man page for information on other options.
|
||||
|
||||
###### Listing 9. Resuming printing of a held print job
|
||||
```
|
||||
[ian@atticf27 ~]$ lp -i 13 -H resume
|
||||
|
||||
```
|
||||
|
||||
Not all of the vast array of available printers support the same set of options. Use the `lpoptions` command to see the general options that are set for a printer. Add the `-l` option to display printer-specific options. Listing 10 shows two examples. Many common options relate to portrait/landscape printing, page dimensions, and placement of the output on the pages. See the man pages for details.
|
||||
|
||||
###### Listing 10. Checking printer options
|
||||
```
|
||||
[ian@atticf27 ~]$ lpoptions -p HL-2280DW
|
||||
copies=1 device-uri=dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
|
||||
finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50
|
||||
job-sheets=none,none marker-change-time=1517325288 marker-colors=#000000,#000000
|
||||
marker-levels=-1,92 marker-names='Black\ Toner\ Cartridge,Drum\ Unit'
|
||||
marker-types=toner,opc number-up=1 printer-commands=none
|
||||
printer-info='Brother HL-2280DW' printer-is-accepting-jobs=true
|
||||
printer-is-shared=true printer-is-temporary=false printer-location
|
||||
printer-make-and-model='Brother HL-2250DN - CUPS+Gutenprint v5.2.13 Simplified'
|
||||
printer-state=3 printer-state-change-time=1517325288 printer-state-reasons=none
|
||||
printer-type=135188 printer-uri-supported=ipp://localhost/printers/HL-2280DW
|
||||
sides=one-sided
|
||||
|
||||
[ian@atticf27 ~]$ lpoptions -l -p xp-610
|
||||
PageSize/Media Size: *Letter Legal Executive Statement A4
|
||||
ColorModel/Color Model: *Gray Black
|
||||
InputSlot/Media Source: *Standard ManualAdj Manual MultiPurposeAdj MultiPurpose
|
||||
UpperAdj Upper LowerAdj Lower LargeCapacityAdj LargeCapacity
|
||||
StpQuality/Print Quality: None Draft *Standard High
|
||||
Resolution/Resolution: *301x300dpi 150dpi 300dpi 600dpi
|
||||
Duplex/2-Sided Printing: *None DuplexNoTumble DuplexTumble
|
||||
StpiShrinkOutput/Shrink Page If Necessary to Fit Borders: *Shrink Crop Expand
|
||||
StpColorCorrection/Color Correction: *None Accurate Bright Hue Uncorrected
|
||||
Desaturated Threshold Density Raw Predithered
|
||||
StpBrightness/Brightness: 0 100 200 300 400 500 600 700 800 900 *None 1100
|
||||
1200 1300 1400 1500 1600 1700 1800 1900 2000 Custom.REAL
|
||||
StpContrast/Contrast: 0 100 200 300 400 500 600 700 800 900 *None 1100 1200
|
||||
1300 1400 1500 1600 1700 1800 1900 2000 2100 2200 2300 2400 2500 2600 2700
|
||||
2800 2900 3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000 Custom.REAL
|
||||
StpImageType/Image Type: None Text Graphics *TextGraphics Photo LineArt
|
||||
|
||||
```
|
||||
|
||||
Most GUI applications have a print dialog, often using the **File > Print** menu choice. Figure 1 shows an example in GIMP, an image manipulation program.
|
||||
|
||||
###### Figure 1. Printing from the GIMP
|
||||
|
||||
![Printing from the GIMP][3]
|
||||
|
||||
So far, all our commands have been implicitly directed to the local CUPS print server. You can also direct most commands to the server on another system by specifying the `-h` option, along with a port number if it is not the CUPS default of 631.
|
||||
|
||||
### CUPS and the CUPS server
|
||||
|
||||
At the heart of the CUPS printing system is the `cupsd` print server, which runs as a daemon process. The CUPS configuration file is normally located in /etc/cups/cupsd.conf. The /etc/cups directory also contains other configuration files related to CUPS. CUPS is usually started during system initialization, but may be controlled by the CUPS script located in /etc/rc.d/init.d or /etc/init.d, according to your distribution. For newer systems using systemd initialization, the CUPS service script is likely in /usr/lib/systemd/system/cups.service. As with most such scripts, you can stop, start, or restart the daemon. See our tutorial [Learn Linux, 101: Runlevels, boot targets, shutdown, and reboot][4] for more information on using initialization scripts.
|
||||
|
||||
The configuration file, /etc/cups/cupsd.conf, contains parameters that control things such as access to the printing system, whether remote printing is allowed, the location of spool files, and so on. On some systems, a second part describes individual print queues and is usually generated automatically by configuration tools. Listing 11 shows some entries for a default cupsd.conf file. Note that comments start with a # character. Defaults are usually shown as comments and entries that are changed from the default have the leading # character removed.
|
||||
|
||||
###### Listing 11. Parts of a default /etc/cups/cupsd.conf file
|
||||
```
|
||||
# Only listen for connections from the local machine.
|
||||
Listen localhost:631
|
||||
Listen /var/run/cups/cups.sock
|
||||
|
||||
# Show shared printers on the local network.
|
||||
Browsing On
|
||||
BrowseLocalProtocols dnssd
|
||||
|
||||
# Default authentication type, when authentication is required...
|
||||
DefaultAuthType Basic
|
||||
|
||||
# Web interface setting...
|
||||
WebInterface Yes
|
||||
|
||||
# Set the default printer/job policies...
|
||||
<Policy default>
|
||||
# Job/subscription privacy...
|
||||
JobPrivateAccess default
|
||||
JobPrivateValues default
|
||||
SubscriptionPrivateAccess default
|
||||
SubscriptionPrivateValues default
|
||||
|
||||
# Job-related operations must be done by the owner or an administrator...
|
||||
<Limit Create-Job Print-Job Print-URI Validate-Job>
|
||||
Order deny,allow
|
||||
</Limit>
|
||||
|
||||
```
|
||||
|
||||
File, directory, and user configuration directives that used to be allowed in cupsd.conf are now stored in cups-files.conf instead. This is to prevent certain types of privilege escalation attacks. Listing 12 shows some entries from cups-files.conf. Note that spool files are stored by default in the /var/spool file system as you would expect from the Filesystem Hierarchy Standard (FHS). See the man pages for cupsd.conf and cups-files.conf for more details on these configuration files.
|
||||
|
||||
###### Listing 12. Parts of a default /etc/cups/cups-files.conf
|
||||
```
|
||||
# Location of the file listing all of the local printers...
|
||||
#Printcap /etc/printcap
|
||||
|
||||
# Format of the Printcap file...
|
||||
#PrintcapFormat bsd
|
||||
#PrintcapFormat plist
|
||||
#PrintcapFormat solaris
|
||||
|
||||
# Location of all spool files...
|
||||
#RequestRoot /var/spool/cups
|
||||
|
||||
# Location of helper programs...
|
||||
#ServerBin /usr/lib/cups
|
||||
|
||||
# SSL/TLS keychain for the scheduler...
|
||||
#ServerKeychain ssl
|
||||
|
||||
# Location of other configuration files...
|
||||
#ServerRoot /etc/cups
|
||||
|
||||
```
|
||||
|
||||
Listing 12 refers to the /etc/printcap file. This was the name of the configuration file for LPD print servers, and some applications still use it to determine available printers and their properties. It is usually generated automatically in a CUPS system, so you will probably not modify it yourself. However, you may need to check it if you are diagnosing user printing problems. Listing 13 shows an example.
|
||||
|
||||
###### Listing 13. Automatically generated /etc/printcap
|
||||
```
|
||||
# This file was automatically generated by cupsd(8) from the
|
||||
# /etc/cups/printers.conf file. All changes to this file
|
||||
# will be lost.
|
||||
HL-2280DW|Brother HL-2280DW:rm=atticf27:rp=HL-2280DW:
|
||||
anyprint|Any available printer:rm=atticf27:rp=anyprint:
|
||||
XP-610|EPSON XP-610 Series:rm=atticf27:rp=XP-610:
|
||||
|
||||
```
|
||||
|
||||
Each line here has a printer name and printer description as well as the name of the remote machine (rm) and remote printer (rp) on that machine. Older /etc/printcap files also described the printer capabilities.
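This entry layout is simple enough to pull apart with `awk`. The sketch below extracts the queue name and remote machine from each entry; the sample data is copied from Listing 13, and a real script would read /etc/printcap instead of a here-string.

```shell
# Parse printcap-style entries: name|description:rm=host:rp=queue:
printcap='HL-2280DW|Brother HL-2280DW:rm=atticf27:rp=HL-2280DW:
anyprint|Any available printer:rm=atticf27:rp=anyprint:
XP-610|EPSON XP-610 Series:rm=atticf27:rp=XP-610:'

queues=$(printf '%s\n' "$printcap" | awk -F: '
    /^#/ { next }                 # comment lines start with #
    {
        split($1, names, "|")     # first field is name|description
        host = ""
        for (i = 2; i <= NF; i++)
            if ($i ~ /^rm=/) host = substr($i, 4)
        print names[1], host
    }')
printf '%s\n' "$queues"
```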
|
||||
|
||||
#### File conversion filters
|
||||
|
||||
You can print many types of files using CUPS, including plain text, PDF, PostScript, and a variety of image formats without needing to tell the `lpr` or `lp` command anything more than the file name. This magic feat is accomplished through the use of filters. Indeed, a popular filter for many years was named magicfilter.
|
||||
|
||||
CUPS uses Multipurpose Internet Mail Extensions (MIME) types to determine the appropriate conversion filter when printing a file. Other printing packages might use the magic number mechanism as used by the `file` command. See the man pages for `file` or `magic` for more details.
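The magic-number mechanism is easy to see in action with the `file` command. This sketch creates two small sample files (the names and contents are arbitrary) and asks `file` for their MIME types, which it determines from the file contents rather than the names.

```shell
# Identify file types from content (magic numbers), not file names.
workdir=$(mktemp -d)
printf 'Hello, printer\n' > "$workdir/sample.txt"
printf '%%!PS-Adobe-3.0\n' > "$workdir/sample.ps"   # PostScript files start with %!

txt_type=$(file --brief --mime-type "$workdir/sample.txt")
ps_type=$(file --brief --mime-type "$workdir/sample.ps")
echo "$txt_type"
echo "$ps_type"
rm -r "$workdir"
```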
|
||||
|
||||
Input files are converted to an intermediate raster or PostScript format using filters. Job information such as number of copies is added. The data is finally sent through a backend to the destination printer. There are some filters (such as `a2ps` or `dvips`) that you can use to manually filter input. You might do this to obtain special formatting results, or to handle a file format that CUPS does not support natively.
|
||||
|
||||
#### Adding printers
|
||||
|
||||
CUPS supports a variety of printers, including:
|
||||
|
||||
* Locally attached parallel and USB printers
|
||||
* Internet Printing Protocol (IPP) printers
|
||||
* Remote LPD printers
|
||||
* Microsoft® Windows® printers using SAMBA
|
||||
* Novell printers using NCP
|
||||
* HP Jetdirect attached printers
|
||||
|
||||
|
||||
|
||||
Most systems today attempt to autodetect and autoconfigure local hardware when the system starts or when the device is attached. Similarly, many network printers can be autodetected. Use the CUPS web administration tool (<http://localhost:631> or <http://127.0.0.1:631>) to search for or add printers. Many distributions include their own configuration tools, for example YaST on SUSE systems. Figure 2 shows the CUPS interface using localhost:631 and Figure 3 shows the GNOME printer settings dialog on Fedora 27.
|
||||
|
||||
###### Figure 2. Using the CUPS web interface
|
||||
|
||||
|
||||
![Using the CUPS web interface][5]
|
||||
|
||||
###### Figure 3. Using printer settings on Fedora 27
|
||||
|
||||
|
||||
![Using printer settings on Fedora 27][6]
|
||||
|
||||
You can also configure printers from a command line. Before you configure a printer, you need some basic information about the printer and about how it is connected. If a remote system needs a user ID or password, you will also need that information.
|
||||
|
||||
You need to know what driver to use for your printer. Not all printers are fully supported on Linux and some may not work at all, or only with limitations. Check at OpenPrinting.org (see Related topics) to see if there is a driver for your particular printer. The `lpinfo` command can also help you identify the available device types and drivers. Use the `-v` option to list supported devices and the `-m` option to list drivers, as shown in Listing 14.
|
||||
|
||||
###### Listing 14. Available printer drivers
|
||||
```
|
||||
[ian@atticf27 ~]$ lpinfo -m | grep -i xp-610
|
||||
lsb/usr/Epson/epson-inkjet-printer-escpr/Epson-XP-610_Series-epson-escpr-en.ppd.gz
|
||||
EPSON XP-610 Series, Epson Inkjet Printer Driver (ESC/P-R) for Linux
|
||||
[ian@atticf27 ~]$ locate "Epson-XP-610_Series-epson-escpr-en.ppd.gz"
|
||||
/usr/share/ppd/Epson/epson-inkjet-printer-escpr/Epson-XP-610_Series-epson-escpr-en.ppd.gz
|
||||
[ian@atticf27 ~]$ lpinfo -v
|
||||
network socket
|
||||
network ipps
|
||||
network lpd
|
||||
network beh
|
||||
network ipp
|
||||
network http
|
||||
network https
|
||||
direct hp
|
||||
serial serial:/dev/ttyS0?baud=115200
|
||||
direct parallel:/dev/lp0
|
||||
network smb
|
||||
direct hpfax
|
||||
network dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/
|
||||
network dnssd://EPSON%20XP-610%20Series._ipp._tcp.local/?uuid=cfe92100-67c4-11d4-a45f-ac18266c48aa
|
||||
network lpd://BRN001BA98A1891/BINARY_P1
|
||||
network lpd://192.168.1.38:515/PASSTHRU
|
||||
|
||||
```
|
||||
|
||||
The Epson-XP-610_Series-epson-escpr-en.ppd.gz driver is located in the /usr/share/ppd/Epson/epson-inkjet-printer-escpr/ directory on my system.
|
||||
|
||||
If you don't find a driver, check the printer manufacturer's website in case a proprietary driver is available. For example, at the time of writing Brother has a driver for my HL-2280DW printer, but this driver is not listed at OpenPrinting.org.
|
||||
|
||||
Once you have the basic information, you can configure a printer using the `lpadmin` command as shown in Listing 15. For this purpose, I will create another instance of my HL-2280DW printer for duplex printing.
|
||||
|
||||
###### Listing 15. Configuring a printer
|
||||
```
|
||||
[ian@atticf27 ~]$ lpinfo -m | grep -i "hl.*2280"
|
||||
HL2280DW.ppd Brother HL2280DW for CUPS
|
||||
lsb/usr/HL2280DW.ppd Brother HL2280DW for CUPS
|
||||
[ian@atticf27 ~]$ lpadmin -p HL-2280DW-duplex -E -m HL2280DW.ppd \
|
||||
> -v dnssd://Brother%20HL-2280DW._pdl-datastream._tcp.local/ \
|
||||
> -D "Brother 1" -o sides=two-sided-long-edge
|
||||
[ian@atticf27 ~]$ lpstat -a
|
||||
anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
|
||||
HL-2280DW accepting requests since Tue 30 Jan 2018 10:56:10 AM EST
|
||||
HL-2280DW-duplex accepting requests since Wed 31 Jan 2018 11:41:16 AM EST
|
||||
XP-610 accepting requests since Mon 29 Jan 2018 10:34:49 PM EST
|
||||
|
||||
```
|
||||
|
||||
Rather than creating a copy of the printer for duplex printing, you can just create a new class for duplex printing using `lpadmin` with the `-c` option.
|
||||
|
||||
If you need to remove a printer, use `lpadmin` with the `-x` option.
|
||||
|
||||
Listing 16 shows how to remove the printer and create a class instead.
|
||||
|
||||
###### Listing 16. Removing a printer and creating a class
|
||||
```
|
||||
[ian@atticf27 ~]$ lpadmin -x HL-2280DW-duplex
|
||||
[ian@atticf27 ~]$ lpadmin -p HL-2280DW -c duplex -E -D "Duplex printing" -o sides=two-sided-long-edge
|
||||
[ian@atticf27 ~]$ cupsenable duplex
|
||||
[ian@atticf27 ~]$ cupsaccept duplex
|
||||
[ian@atticf27 ~]$ lpstat -a
|
||||
anyprint accepting requests since Mon 29 Jan 2018 01:17:09 PM EST
|
||||
duplex accepting requests since Wed 31 Jan 2018 12:12:05 PM EST
|
||||
HL-2280DW accepting requests since Wed 31 Jan 2018 11:51:16 AM EST
|
||||
XP-610 accepting requests since Mon 29 Jan 2018 10:34:49 PM EST
|
||||
|
||||
```
|
||||
|
||||
You can also set various printer options using the `lpadmin` or `lpoptions` commands. See the man pages for more details.
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
If you are having trouble printing, try these tips:
|
||||
|
||||
* Ensure that the CUPS server is running. You can use the `lpstat` command, which will report an error if it is unable to connect to the cupsd daemon. Alternatively, you might use the `ps -ef` command and check for cupsd in the output.
|
||||
  * If you try to queue a job for printing and get an error message indicating that the printer is not accepting jobs, use `lpstat -a` or `lpc status` to check that the printer is accepting jobs.
|
||||
  * If a queued job does not print, use `lpstat -p` or `lpc status` to check that printing is enabled on the printer. You may need to move the job to another printer as discussed earlier.
|
||||
* If the printer is remote, check that it still exists on the remote system and that it is operational.
|
||||
* Check the configuration file to ensure that a particular user or remote system is allowed to print on the printer.
|
||||
* Ensure that your firewall allows remote printing requests, either from another system to your system, or from your system to another, as appropriate.
|
||||
* Verify that you have the right driver.
|
||||
|
||||
|
||||
|
||||
As you can see, printing involves the correct functioning of several components of your system and possibly network. In a tutorial of this length, we can only give you starting points for diagnosis. Most CUPS systems also have a graphical interface to the command-line functions that we discuss here. Generally, this interface is accessible from the local host using a browser pointed to port 631 (<http://localhost:631> or <http://127.0.0.1:631>), as shown earlier in Figure 2.
|
||||
|
||||
You can debug CUPS by running it in the foreground rather than as a daemon process. You can also test alternate configuration files if necessary. Run `cupsd -h` for more information, or see the man pages.
|
||||
|
||||
CUPS also maintains an access log and an error log. You can change the level of logging using the LogLevel statement in cupsd.conf. By default, logs are stored in the /var/log/cups directory. They may be viewed from the **Administration** tab on the browser interface (<http://localhost:631>). Use the `cupsctl` command without any options to display logging options. Either edit cupsd.conf, or use `cupsctl` to adjust various logging parameters. See the `cupsctl` man page for more details.
|
||||
|
||||
The Ubuntu Wiki also has a good page on [Debugging Printing Problems][7].
|
||||
|
||||
This concludes your introduction to printing and CUPS.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ibm.com/developerworks/library/l-lpic1-108-4/index.html
|
||||
|
||||
作者:[Ian Shields][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ibm.com
|
||||
[1]:http://www.lpi.org
|
||||
[2]:https://www.ibm.com/developerworks/library/l-lpic1-map/
|
||||
[3]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/gimp-print.jpg
|
||||
[4]:https://www.ibm.com/developerworks/library/l-lpic1-101-3/
|
||||
[5]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/fig-cups-web.jpg
|
||||
[6]:https://www.ibm.com/developerworks/library/l-lpic1-108-4/fig-settings.jpg
|
||||
[7]:https://wiki.ubuntu.com/DebuggingPrintingProblems
|
@ -0,0 +1,506 @@
|
||||
Translating by Flowsnow
|
||||
|
||||
Modern Web Automation With Python and Selenium – Real Python
|
||||
======
|
||||
|
||||
In this tutorial you’ll learn advanced Python web automation techniques: Using Selenium with a “headless” browser, exporting the scraped data to CSV files, and wrapping your scraping code in a Python class.
|
||||
|
||||
### Motivation: Tracking Listening Habits
|
||||
|
||||
Suppose that you have been listening to music on [bandcamp][4] for a while now, and you find yourself wishing you could remember a song you heard a few months back.
|
||||
|
||||
Sure you could dig through your browser history and check each song, but that might be a pain… All you remember is that you heard the song a few months ago and that it was in the electronic genre.
|
||||
|
||||
“Wouldn’t it be great,” you think to yourself, “if I had a record of my listening history? I could just look up the electronic songs from two months ago and I’d surely find it.”
|
||||
|
||||
**Today, you will build a basic Python class, called `BandLeader`, that connects to [bandcamp.com][4], streams music from the “discovery” section of the front page, and keeps track of your listening history.**
|
||||
|
||||
The listening history will be saved to disk in a [CSV][5] file. You can then explore that CSV file in your favorite spreadsheet application or even with Python.
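The exact CSV layout is not fixed at this point in the tutorial, so here is a minimal sketch of the idea using the standard-library `csv` module; the (timestamp, artist, track) column names and the sample values are illustrative assumptions, not the article's final format.

```python
import csv
import io

# Sketch: write a header and one listening-history row, then read them back.
# Column names and sample values are assumptions for illustration only.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["timestamp", "artist", "track"])
writer.writerow(["2018-03-01T12:00:00", "Example Artist", "Example Track"])

buffer.seek(0)
rows = list(csv.reader(buffer))
print(rows[1][2])  # Example Track
```

For a file on disk you would open a real file with `open(path, "a", newline="")` in place of the in-memory buffer.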
|
||||
|
||||
If you have had some experience with [web scraping in Python][6], you are familiar with making HTTP requests and using Pythonic APIs to navigate the DOM. You will do more of the same today, except with one difference.
|
||||
|
||||
**Today you will use a full-fledged browser running in headless mode to do the HTTP requests for you.**
|
||||
|
||||
A [headless browser][7] is just a regular web browser, except that it contains no visible UI element. Just like you’d expect, it can do more than make requests: it can also render HTML (though you cannot see it), keep session information, and even perform asynchronous network communications by running JavaScript code.
|
||||
|
||||
If you want to automate the modern web, headless browsers are essential.
|
||||
|
||||
**Free Bonus:** [Click here to download a "Python + Selenium" project skeleton with full source code][1] that you can use as a foundation for your own Python web scraping and automation apps.
|
||||
|
||||
### Setup
|
||||
|
||||
Your first step, before writing a single line of Python, is to install a [Selenium][8] supported [WebDriver][9] for your favorite web browser. In what follows, you will be working with [Firefox][10], but [Chrome][11] could easily work too.
|
||||
|
||||
So, assuming that the path `~/.local/bin` is in your execution `PATH`, here’s how you would install the Firefox webdriver, called `geckodriver`, on a Linux machine:
|
||||
```
|
||||
$ wget https://github.com/mozilla/geckodriver/releases/download/v0.19.1/geckodriver-v0.19.1-linux64.tar.gz
|
||||
$ tar xvfz geckodriver-v0.19.1-linux64.tar.gz
|
||||
$ mv geckodriver ~/.local/bin
|
||||
|
||||
```
|
||||
|
||||
Next, you install the [selenium][12] package, using `pip` or however else you like. If you made a [virtual environment][13] for this project, you just type:
|
||||
```
|
||||
$ pip install selenium
|
||||
|
||||
```
|
||||
|
||||
[ If you ever feel lost during the course of this tutorial, the full code demo can be found [on GitHub][14]. ]
|
||||
|
||||
Now it’s time for a test drive:
|
||||
|
||||
### Test Driving a Headless Browser

To test that everything is working, you decide to try out a basic web search via [DuckDuckGo][15]. You fire up your preferred Python interpreter and type:

```
>>> from selenium.webdriver import Firefox
>>> from selenium.webdriver.firefox.options import Options
>>> opts = Options()
>>> opts.set_headless()
>>> assert opts.headless  # operating in headless mode
>>> browser = Firefox(options=opts)
>>> browser.get('https://duckduckgo.com')
```

So far you have created a headless Firefox browser and navigated to `https://duckduckgo.com`. You made an `Options` instance and used it to activate headless mode when you passed it to the `Firefox` constructor. This is akin to typing `firefox -headless` at the command line.

![](https://files.realpython.com/media/web-scraping-duckduckgo.f7bc7a5e2918.jpg)

Now that a page is loaded, you can query the DOM using methods defined on your newly minted `browser` object. But how do you know what to query? The best way is to open your web browser and use its developer tools to inspect the contents of the page. Right now you want to get hold of the search form so you can submit a query. By inspecting DuckDuckGo’s home page, you find that the search form `<input>` element has an `id` attribute of `"search_form_input_homepage"`. That’s just what you needed:

```
>>> search_form = browser.find_element_by_id('search_form_input_homepage')
>>> search_form.send_keys('real python')
>>> search_form.submit()
```

You found the search form, used the `send_keys` method to fill it out, and then the `submit` method to perform your search for `"Real Python"`. You can check out the top result:

```
>>> results = browser.find_elements_by_class_name('result')
>>> print(results[0].text)

Real Python - Real Python
Get Real Python and get your hands dirty quickly so you spend more time making real applications. Real Python teaches Python and web development from the ground up ...
https://realpython.com
```

Everything seems to be working. In order to prevent invisible headless browser instances from piling up on your machine, you close the browser object before exiting your Python session:

```
>>> browser.close()
>>> quit()
```
### Groovin on Tunes

You’ve tested that you can drive a headless browser using Python. Now it’s time to put it to use:

1. You want to play music.
2. You want to browse and explore music.
3. You want information about what music is playing.

To start, you navigate to <https://bandcamp.com> and start to poke around in your browser’s developer tools. You discover a big shiny play button towards the bottom of the screen with a `class` attribute that contains the value `"playbutton"`. You check that it works:

![](https://files.realpython.com/media/web-scraping-bandcamp-discovery-section.84a10034f564.jpg)

```
>>> opts = Options()
>>> opts.set_headless()
>>> browser = Firefox(options=opts)
>>> browser.get('https://bandcamp.com')
>>> browser.find_element_by_class_name('playbutton').click()
```

You should hear music! Leave it playing and move back to your web browser. Just to the side of the play button is the discovery section. Again, you inspect this section and find that each of the currently visible tracks has a `class` value of `"discover-item"`, and that each item seems to be clickable. In Python, you check this out:

```
>>> tracks = browser.find_elements_by_class_name('discover-item')
>>> len(tracks)  # 8
>>> tracks[3].click()
```

A new track should be playing! This is the first step to exploring bandcamp using Python! You spend a few minutes clicking on different tracks in your Python environment but soon grow tired of the meagre library of 8 songs.
### Exploring the Catalogue

Looking back at your browser, you see the buttons for exploring all of the tracks featured in bandcamp’s music discovery section. By now this feels familiar: each button has a `class` value of `"item-page"`. The very last button is the “next” button that will display the next eight tracks in the catalogue. You go to work:

```
>>> next_button = [e for e in browser.find_elements_by_class_name('item-page')
                   if e.text.lower().find('next') > -1]
>>> next_button[0].click()
```

Note that the list comprehension returns a list, so you click the first (and only) match. Great! Now you want to look at the new tracks, so you think, “I’ll just repopulate my `tracks` variable like I did a few minutes ago.” But this is where things start to get tricky.

First, bandcamp designed their site for humans to enjoy using, not for Python scripts to access programmatically. When you click the “next” button, the real web browser responds by executing some JavaScript code. If you try it out in your browser, you see that some time elapses as the catalogue of songs scrolls with a smooth animation effect. If you try to repopulate your `tracks` variable before the animation finishes, you may not get all the tracks, and you may get some that you don’t want.

The solution? You can just sleep for a second, or, if you are just running all this in a Python shell, you probably won’t even notice; after all, it takes time for you to type, too.
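If fixed sleeps feel fragile, a common alternative is to poll until a condition holds; Selenium also ships its own `WebDriverWait` helper for this. The sketch below is not part of the article’s code: it shows the polling idea with a stand-in predicate so it can run without a browser. With Selenium you would instead pass something like `lambda: len(browser.find_elements_by_class_name('discover-item')) == 8`.

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.1):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses.

    Returns the predicate's truthy value, or None on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = predicate()
        if value:
            return value
        time.sleep(interval)
    return None

# Stand-in predicate for demonstration: becomes truthy on the third call.
calls = {'n': 0}
def ready():
    calls['n'] += 1
    return calls['n'] >= 3

print(wait_until(ready, timeout=1.0, interval=0.01))  # True
```

This avoids guessing how long an animation takes: you wait only as long as needed, up to a hard timeout.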
Another slight kink is something that can only be discovered through experimentation. You try to run the same code again:

```
>>> tracks = browser.find_elements_by_class_name('discover-item')
>>> assert(len(tracks) == 8)
AssertionError
...
```

But you notice something strange. `len(tracks)` is not equal to `8`, even though only the next batch of `8` should be displayed. Digging a little further, you find that your list contains some tracks that were displayed before. To get only the tracks that are actually visible in the browser, you need to filter the results a little.

After trying a few things, you decide to keep a track only if its `x` coordinate on the page falls within the bounding box of the containing element. The catalogue’s container has a `class` value of `"discover-results"`. Here’s how you proceed:

```
>>> discover_section = browser.find_element_by_class_name('discover-results')
>>> left_x = discover_section.location['x']
>>> right_x = left_x + discover_section.size['width']
>>> discover_items = browser.find_elements_by_class_name('discover-item')
>>> tracks = [t for t in discover_items
              if t.location['x'] >= left_x and t.location['x'] < right_x]
>>> assert len(tracks) == 8
```
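The geometry test itself is plain Python, so it can be checked without a browser. Here is the same filter run against stand-in elements: dicts shaped like the `.location` and `.size` data Selenium returns, with made-up coordinates.

```python
# Stand-in "elements": dicts mimicking Selenium's .location / .size data.
container = {'location': {'x': 100}, 'size': {'width': 400}}
items = [
    {'location': {'x': 90}},   # left of the container: filtered out
    {'location': {'x': 120}},  # inside: kept
    {'location': {'x': 480}},  # inside: kept
    {'location': {'x': 510}},  # right of the container: filtered out
]

left_x = container['location']['x']
right_x = left_x + container['size']['width']

# Keep only items whose x coordinate falls inside [left_x, right_x).
visible = [t for t in items
           if left_x <= t['location']['x'] < right_x]

print(len(visible))  # 2
```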
### Building a Class

If you are growing weary of retyping the same commands over and over again in your Python environment, you should dump some of it into a module. A basic class for your bandcamp manipulation should do the following:

1. Initialize a headless browser and navigate to bandcamp
2. Keep a list of available tracks
3. Support finding more tracks
4. Play, pause, and skip tracks

All in one go, here’s the basic code:

```
from selenium.webdriver import Firefox
from selenium.webdriver.firefox.options import Options
from time import sleep, ctime
from collections import namedtuple
from threading import Thread
from os.path import isfile
import csv


BANDCAMP_FRONTPAGE = 'https://bandcamp.com/'

class BandLeader():
    def __init__(self):
        # create a headless browser
        opts = Options()
        opts.set_headless()
        self.browser = Firefox(options=opts)
        self.browser.get(BANDCAMP_FRONTPAGE)

        # track list related state
        self._current_track_number = 1
        self.track_list = []
        self.tracks()

    def tracks(self):
        '''
        query the page to populate a list of available tracks
        '''

        # sleep to give the browser time to render and finish any animations
        sleep(1)

        # get the container for the visible track list
        discover_section = self.browser.find_element_by_class_name('discover-results')
        left_x = discover_section.location['x']
        right_x = left_x + discover_section.size['width']

        # filter the items in the list to include only those we can click
        discover_items = self.browser.find_elements_by_class_name('discover-item')
        self.track_list = [t for t in discover_items
                           if t.location['x'] >= left_x and t.location['x'] < right_x]

        # print the available tracks to the screen
        for (i, track) in enumerate(self.track_list):
            print('[{}]'.format(i + 1))
            lines = track.text.split('\n')
            print('Album : {}'.format(lines[0]))
            print('Artist : {}'.format(lines[1]))
            if len(lines) > 2:
                print('Genre : {}'.format(lines[2]))

    def catalogue_pages(self):
        '''
        print the available pages in the catalogue that are presently
        accessible
        '''
        print('PAGES')
        for e in self.browser.find_elements_by_class_name('item-page'):
            print(e.text)
        print('')

    def more_tracks(self, page='next'):
        '''
        advances the catalogue and repopulates the track list; we can pass in
        a number to advance to any of the available pages
        '''

        next_btn = [e for e in self.browser.find_elements_by_class_name('item-page')
                    if e.text.lower().strip() == str(page)]

        if next_btn:
            next_btn[0].click()
            self.tracks()

    def play(self, track=None):
        '''
        play a track. If no track number is supplied, the presently selected
        track will play
        '''

        if track is None:
            self.browser.find_element_by_class_name('playbutton').click()
        elif type(track) is int and track <= len(self.track_list) and track >= 1:
            self._current_track_number = track
            self.track_list[self._current_track_number - 1].click()

    def play_next(self):
        '''
        plays the next available track
        '''
        if self._current_track_number < len(self.track_list):
            self.play(self._current_track_number + 1)
        else:
            self.more_tracks()
            self.play(1)

    def pause(self):
        '''
        pauses the playback
        '''
        self.play()
```
Pretty neat. You can import this into your Python environment and run bandcamp programmatically! But wait, didn’t you start this whole thing because you wanted to keep track of information about your listening history?

### Collecting Structured Data

Your final task is to keep track of the songs that you actually listened to. How might you do this? What does it mean to actually listen to something, anyway? If you are perusing the catalogue, stopping for a few seconds on each song, do each of those songs count? Probably not. You are going to allow some ‘exploration’ time to factor into your data collection.

Your goals are now to:

1. Collect structured information about the currently playing track
2. Keep a “database” of tracks
3. Save and restore that “database” to and from disk

You decide to use a [namedtuple][16] to store the information that you track. Named tuples are good for representing bundles of attributes with no functionality tied to them, a bit like a database record.

```
TrackRec = namedtuple('TrackRec', [
    'title',
    'artist',
    'artist_url',
    'album',
    'album_url',
    'timestamp'  # When you played it
])
```
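Before wiring `TrackRec` into the class, it is worth a quick look at how little ceremony a named tuple needs. The values below are invented purely for illustration:

```python
from collections import namedtuple

TrackRec = namedtuple('TrackRec', [
    'title', 'artist', 'artist_url', 'album', 'album_url', 'timestamp'])

# Hypothetical values, just to exercise the record type.
rec = TrackRec('Some Song', 'Some Artist', 'https://example.com/artist',
               'Some Album', 'https://example.com/album', 'Mon Jan  1 00:00:00 2018')

print(rec.title)    # attribute access by name
print(rec[1])       # ...or by position, like any tuple
print(rec._fields)  # the field names, as a tuple of strings
```

You get readable attribute access and plain-tuple behavior at the same time, which is exactly what a lightweight record wants.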
In order to collect this information, you add a method to the `BandLeader` class. Checking back in with the browser’s developer tools, you find the right HTML elements and attributes to select all the information you need. Also, you only want to get information about the currently playing track if music is actually playing at the time. Luckily, the page player adds a `"playing"` class to the play button whenever music is playing and removes it when the music stops. With these considerations in mind, you write a couple of methods:

```
def is_playing(self):
    '''
    returns `True` if a track is presently playing
    '''
    playbtn = self.browser.find_element_by_class_name('playbutton')
    return playbtn.get_attribute('class').find('playing') > -1


def currently_playing(self):
    '''
    returns the record for the currently playing track,
    or None if nothing is playing
    '''
    try:
        if self.is_playing():
            title = self.browser.find_element_by_class_name('title').text
            album_detail = self.browser.find_element_by_css_selector('.detail-album > a')
            album_title = album_detail.text
            album_url = album_detail.get_attribute('href').split('?')[0]
            artist_detail = self.browser.find_element_by_css_selector('.detail-artist > a')
            artist = artist_detail.text
            artist_url = artist_detail.get_attribute('href').split('?')[0]
            return TrackRec(title, artist, artist_url, album_title, album_url, ctime())

    except Exception as e:
        print('there was an error: {}'.format(e))

    return None
```
For good measure, you also modify the `play` method to keep track of the currently playing track:

```
def play(self, track=None):
    '''
    play a track. If no track number is supplied, the presently selected
    track will play
    '''

    if track is None:
        self.browser.find_element_by_class_name('playbutton').click()
    elif type(track) is int and track <= len(self.track_list) and track >= 1:
        self._current_track_number = track
        self.track_list[self._current_track_number - 1].click()

    sleep(0.5)
    if self.is_playing():
        self._current_track_record = self.currently_playing()
```
Next, you’ve got to keep a database of some kind. Though it may not scale well in the long run, you can go far with a simple list. You add `self.database = []` to `BandLeader`’s `__init__` method. Because you want to allow for time to pass before entering a `TrackRec` object into the database, you decide to use Python’s [threading tools][17] to run a separate thread that maintains the database in the background.

You’ll supply a `_maintain()` method to `BandLeader` instances that will run in a separate thread. The new method will periodically check the value of `self._current_track_record` and add it to the database if it is new.

You will start the thread when the class is instantiated by adding some code to `__init__`:
```
# the new init
def __init__(self):
    # create a headless browser
    opts = Options()
    opts.set_headless()
    self.browser = Firefox(options=opts)
    self.browser.get(BANDCAMP_FRONTPAGE)

    # track list related state
    self._current_track_number = 1
    self.track_list = []

    # state for the database
    self.database = []
    self._current_track_record = None

    # the database maintenance thread
    self.thread = Thread(target=self._maintain)
    self.thread.daemon = True  # kills the thread when the main process dies
    self.thread.start()

    self.tracks()


def _maintain(self):
    while True:
        self._update_db()
        sleep(20)  # check every 20 seconds


def _update_db(self):
    try:
        check = (self._current_track_record is not None
                 and (len(self.database) == 0
                      or self.database[-1] != self._current_track_record)
                 and self.is_playing())
        if check:
            self.database.append(self._current_track_record)

    except Exception as e:
        print('error while updating the db: {}'.format(e))
```
If you’ve never worked with multithreaded programming in Python, [you should read up on it!][18] For your present purpose, you can think of the thread as a loop that runs in the background of the main Python process (the one you interact with directly). Every twenty seconds, the loop checks a few things to see if the database needs to be updated, and if it does, appends a new record. Pretty cool.
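As a self-contained illustration of that pattern, here is a toy version with an in-memory “database”, a stop flag added so the demo can end cleanly, and a much shorter polling interval than the real 20 seconds. None of these names come from the article; they are stand-ins:

```python
from threading import Thread, Event
from time import sleep

database = []
current_record = None
stop = Event()

def maintain():
    # Background loop: append the current record only if it's new.
    while not stop.is_set():
        if current_record is not None and (not database or database[-1] != current_record):
            database.append(current_record)
        sleep(0.01)  # the real class checks every 20 seconds

thread = Thread(target=maintain)
thread.daemon = True  # dies along with the main process
thread.start()

current_record = 'track-1'
sleep(0.1)            # give the background loop time to notice
current_record = 'track-2'
sleep(0.1)

stop.set()
thread.join()
print(database)  # ['track-1', 'track-2']
```

The duplicate check (`database[-1] != current_record`) is what keeps the loop from appending the same record on every pass.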
The very last step is saving the database and restoring it from saved states. Using the [csv][19] package, you can ensure your database resides in a highly portable format and remains usable even if you abandon your wonderful `BandLeader` class ;)

The `__init__` method should be yet again altered, this time to accept a file path where you’d like to save the database. You’d like to load this database if it is available, and you’d like to save it periodically, whenever it is updated. The updates look like so:

```
def __init__(self, csvpath=None):
    self.database_path = csvpath
    self.database = []

    # load database from disk if possible
    if self.database_path and isfile(self.database_path):
        with open(self.database_path, newline='') as dbfile:
            dbreader = csv.reader(dbfile)
            next(dbreader)  # to ignore the header line
            self.database = [TrackRec._make(rec) for rec in dbreader]

    # .... the rest of the __init__ method is unchanged ....


# a new save_db method
def save_db(self):
    with open(self.database_path, 'w', newline='') as dbfile:
        dbwriter = csv.writer(dbfile)
        dbwriter.writerow(list(TrackRec._fields))
        for entry in self.database:
            dbwriter.writerow(list(entry))


# finally, add a call to save_db to your database maintenance method
def _update_db(self):
    try:
        check = (self._current_track_record is not None
                 and (len(self.database) == 0
                      or self.database[-1] != self._current_track_record)
                 and self.is_playing())
        if check:
            self.database.append(self._current_track_record)
            self.save_db()

    except Exception as e:
        print('error while updating the db: {}'.format(e))
```
And voilà! You can listen to music and keep a record of what you hear! Amazing.

Something interesting about the above is that [using a `namedtuple`][16] really begins to pay off. When converting to and from CSV format, you take advantage of the ordering of the columns in the CSV file to fill in the fields of the `TrackRec` objects. Likewise, you can create the header row of the CSV file by referencing the `TrackRec._fields` attribute. This is one of the reasons using a tuple ends up making sense for columnar data.
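To see that payoff concretely, here is the same save-and-restore round trip performed in memory, using `io.StringIO` in place of a file. The record values are invented for the demonstration:

```python
import csv
import io
from collections import namedtuple

TrackRec = namedtuple('TrackRec', [
    'title', 'artist', 'artist_url', 'album', 'album_url', 'timestamp'])

records = [
    TrackRec('Song A', 'Artist A', 'https://example.com/a',
             'Album A', 'https://example.com/album-a', 'Mon Jan  1 00:00:00 2018'),
]

# Save: the header row comes straight from TrackRec._fields.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(list(TrackRec._fields))
for entry in records:
    writer.writerow(list(entry))

# Restore: skip the header, then let TrackRec._make rebuild each record.
buf.seek(0)
reader = csv.reader(buf)
next(reader)  # ignore the header line
restored = [TrackRec._make(row) for row in reader]

assert restored == records
```

Because every field here is a string, the round trip is lossless; if you ever store numbers or timestamps as richer types, you would need to convert them back after `_make`.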
### What’s Next and What Have You Learned?

From here you could do loads more! Here are a few quick ideas that would leverage the mild superpower that is Python + Selenium:

  * You could extend the `BandLeader` class to navigate to album pages and play the tracks you find there
  * You might decide to create playlists based on your favorite or most frequently heard tracks
  * Perhaps you want to add an autoplay feature
  * Maybe you’d like to query songs by date or title or artist and build playlists that way

**Free Bonus:** [Click here to download a "Python + Selenium" project skeleton with full source code][1] that you can use as a foundation for your own Python web scraping and automation apps.

You have learned that Python can do everything that a web browser can do, and a bit more. You could easily write scripts to control virtual browser instances that run in the cloud, create bots that interact with real users, or bots that mindlessly fill out forms! Go forth, and automate!

--------------------------------------------------------------------------------

via: https://realpython.com/blog/python/modern-web-automation-with-python-and-selenium/

作者:[Colin OKeefe][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://realpython.com/team/cokeefe/
[1]:https://realpython.com/blog/python/modern-web-automation-with-python-and-selenium/#
[4]:https://bandcamp.com
[5]:https://en.wikipedia.org/wiki/Comma-separated_values
[6]:https://realpython.com/blog/python/python-web-scraping-practical-introduction/
[7]:https://en.wikipedia.org/wiki/Headless_browser
[8]:http://www.seleniumhq.org/docs/
[9]:https://en.wikipedia.org/wiki/Selenium_(software)#Selenium_WebDriver
[10]:https://www.mozilla.org/en-US/firefox/new/
[11]:https://www.google.com/chrome/index.html
[12]:http://seleniumhq.github.io/selenium/docs/api/py/
[13]:https://realpython.com/blog/python/python-virtual-environments-a-primer/
[14]:https://github.com/realpython/python-web-scraping-examples
[15]:https://duckduckgo.com
[16]:https://dbader.org/blog/writing-clean-python-with-namedtuples
[17]:https://docs.python.org/3.6/library/threading.html#threading.Thread
[18]:https://dbader.org/blog/python-parallel-computing-in-60-seconds
[19]:https://docs.python.org/3.6/library/csv.html
156
sources/tech/20180206 Power(Shell) to the people.md
Normal file
@ -0,0 +1,156 @@
Power(Shell) to the people
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightbulbs.png?itok=pwp22hTw)

Earlier this year, [PowerShell Core][1] [became generally available][2] under an open source ([MIT][3]) license. PowerShell is hardly a new technology. From its first release for Windows in 2006, PowerShell's creators [sought][4] to incorporate the power and flexibility of Unix shells while remedying their perceived deficiencies, particularly the need for text manipulation to derive value from combining commands.

Five major releases later, PowerShell Core allows the same innovative shell and command environment to run natively on all major operating systems, including OS X and Linux. Some (read: almost everyone) may still scoff at the audacity and/or the temerity of this Windows-born interloper to offer itself to platforms that have had strong shell environments since time immemorial (at least as defined by a millennial). In this post, I hope to make the case that PowerShell can provide advantages to even seasoned users.

### Consistency across platforms

If you plan to port your scripts from one execution environment to another, you need to make sure you use only the commands and syntaxes that work. For example, on GNU systems, you would obtain yesterday's date as follows:

```
date --date="1 day ago"
```

On BSD systems (such as OS X), the above syntax wouldn't work, as the BSD date utility requires the following syntax:

```
date -v -1d
```

Because PowerShell is licensed under a permissive license and built for all platforms, you can ship it with your application. Thus, when your scripts run in the target environment, they'll be running on the same shell using the same command implementations as the environment in which you tested your scripts.
### Objects and structured data

*nix commands and utilities rely on your ability to consume and manipulate unstructured data. Those who have lived for years with `sed`, `grep`, and `awk` may be unbothered by this statement, but there is a better way.

Let's redo the yesterday's-date example in PowerShell. To get the current date, run the `Get-Date` cmdlet (pronounced "commandlet"):

```
> Get-Date

Sunday, January 21, 2018 8:12:41 PM
```

The output you see isn't really a string of text. Rather, it is a string representation of a .NET Core object. Just like any other object in any other OOP environment, it has a type and, most often, methods you can call.

Let's prove this:

```
> $(Get-Date).GetType().FullName

System.DateTime
```

The `$(...)` syntax behaves exactly as you'd expect from POSIX shells: the result of the evaluation of the command in parentheses is substituted for the entire expression. In PowerShell, however, the `$` is strictly optional in such expressions. And, most importantly, the result is a .NET object, not text. So we can call the `GetType()` method on that object to get its type object (similar to a `Class` object in Java) and the `FullName` [property][5] to get the full name of the type.

So, how does this object-orientedness make your life easier?

First, you can pipe any object to the `Get-Member` cmdlet to see all the methods and properties it has to offer.
```
> (Get-Date) | Get-Member

   TypeName: System.DateTime

Name            MemberType Definition
----            ---------- ----------
Add             Method     datetime Add(timespan value)
AddDays         Method     datetime AddDays(double value)
AddHours        Method     datetime AddHours(double value)
AddMilliseconds Method     datetime AddMilliseconds(double value)
AddMinutes      Method     datetime AddMinutes(double value)
AddMonths       Method     datetime AddMonths(int months)
AddSeconds      Method     datetime AddSeconds(double value)
AddTicks        Method     datetime AddTicks(long value)
AddYears        Method     datetime AddYears(int value)
CompareTo       Method     int CompareTo(System.Object value), int ...
```

You can quickly see that the `DateTime` object has an `AddDays` method that you can use to get yesterday's date:

```
> (Get-Date).AddDays(-1)

Saturday, January 20, 2018 8:24:42 PM
```
To do something slightly more exciting, let's call Yahoo's weather service (because it doesn't require an API token) and get your local weather:

```
$city="Boston"
$state="MA"
$url="https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20weather.forecast%20where%20woeid%20in%20(select%20woeid%20from%20geo.places(1)%20where%20text%3D%22${city}%2C%20${state}%22)&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys"
```

Now, we could do things the old-fashioned way and just run `curl $url` to get a giant blob of JSON, or...

```
$weather=(Invoke-RestMethod $url)
```

If you look at the type of `$weather` (by running `echo $weather.GetType().FullName`), you will see that it's a `PSCustomObject`. It's a dynamic object that reflects the structure of the JSON.

And PowerShell will be thrilled to help you navigate through it with its tab completion. Just type `$weather.` (making sure to include the "."), and press Tab. You will see all the root-level JSON keys. Type one, followed by a "`.`", press Tab again, and you'll see its children (if any).

Thus, you can easily navigate to the data you want:

```
> echo $weather.query.results.channel.atmosphere.pressure
1019.0

> echo $weather.query.results.channel.wind.chill
41
```

And if you have JSON or CSV lying around (or returned by an outside command) as unstructured data, just pipe it into the `ConvertFrom-Json` or `ConvertFrom-Csv` cmdlet, respectively, and you can have your data in nice clean objects.
### Computing vs. automation

We use shells for two purposes. One is computing: to run individual commands and to manually respond to their output. The other is automation: to write scripts that execute multiple commands and respond to their output programmatically.

A problem that most of us have learned to overlook is that these two purposes place different and conflicting requirements on the shell. Computing requires the shell to be laconic. The fewer keystrokes a user can get away with, the better. It's unimportant if what the user has typed is barely legible to another human being. Scripts, on the other hand, are code. Readability and maintainability are key. And here, POSIX utilities often fail us. While some commands do offer both laconic and readable syntaxes (e.g., `-f` and `--force`) for some of their parameters, the command names themselves err on the side of brevity, not readability.

PowerShell includes several mechanisms to eliminate that Faustian tradeoff.

First, tab completion eliminates typing of argument names. For instance, type `Get-Random -Mi`, press Tab, and PowerShell will complete the argument for you: `Get-Random -Minimum`. But if you really want to be laconic, you don't even need to press Tab. For instance, PowerShell will understand

```
Get-Random -Mi 1 -Ma 10
```

because `Mi` and `Ma` each have unique completions.

You may have noticed that all PowerShell cmdlet names have a verb-noun structure. This can help script readability, but you probably don't want to keep typing `Get-` over and over on the command line. So don't! If you type a noun without a verb, PowerShell will look for a `Get-` command with that noun.

Caution: although PowerShell is not case-sensitive, it's a good practice to capitalize the first letter of the noun when you intend to use a PowerShell command. For example, typing `date` will call your system's `date` utility. Typing `Date` will call PowerShell's `Get-Date` cmdlet.

And if that's not enough, PowerShell has aliases to create simple names. For example, if you type `alias -name cd`, you will discover that the `cd` command in PowerShell is itself an alias for the `Set-Location` command.

So, to review: you get tab completion, aliases, and noun completions to keep your command names short, plus automatic and consistent parameter-name truncation, all while still enjoying a rich, readable syntax for scripting.

### So... friends?

These are just some of the advantages of PowerShell. There are more features and cmdlets I haven't discussed (check out [Where-Object][6] or its alias `?` if you want to make `grep` cry). And hey, if you really feel homesick, PowerShell will be happy to launch your old native utilities for you. But give yourself enough time to get acclimated in PowerShell's object-oriented cmdlet world, and you may find yourself choosing to forget the way back.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/powershell-people

作者:[Yev Bronshteyn][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/yevster
[1]:https://github.com/PowerShell/PowerShell/blob/master/README.md
[2]:https://blogs.msdn.microsoft.com/powershell/2018/01/10/powershell-core-6-0-generally-available-ga-and-supported/
[3]:https://spdx.org/licenses/MIT
[4]:http://www.jsnover.com/Docs/MonadManifesto.pdf
[5]:https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/properties
[6]:https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/where-object?view=powershell-6
|
398
sources/tech/20180206 Programming in Color with ncurses.md
Normal file
Programming in Color with ncurses
======

In parts [one][1] and [two][2] of my article series about programming with the ncurses library, I introduced a few curses functions to draw text on the screen, query characters from the screen and read from the keyboard. To demonstrate several of these functions, I created a simple adventure game in curses that drew a game map and player character using simple characters. In this follow-up article, I show how to add color to a curses program.

Drawing on the screen is all very well and good, but if it's all white-on-black text, your program might seem dull. Colors can help convey more information—for example, if your program needs to indicate success or failure. In such a case, you could display text in green or red to help emphasize the outcome. Or, maybe you simply want to use colors to "snazz" up your program to make it look prettier.

In this article, I use a simple example to demonstrate color manipulation via the curses functions. In my previous article, I wrote a basic adventure-style game that lets you move a player character around a crudely drawn map. However, the map was entirely black and white text, relying on shapes to suggest water (~) or mountains (^), so let's update the game to use colors.

### Color Essentials

Before you can use colors, your program needs to know if it can rely on the terminal to display the colors correctly. On modern systems, this should almost always be true. But in the classic days of computing, some terminals were monochromatic, such as the venerable VT52 and VT100 terminals, usually providing white-on-black or green-on-black text.

To query the terminal capability for colors, use the has_colors() function. This will return a true value if the terminal can display color, and a false value if not. It is usually used to start an if block, like this:
```
if (has_colors() == FALSE) {
    endwin();
    printf("Your terminal does not support color\n");
    exit(1);
}
```
Having determined that the terminal can display color, you then can set up curses to use colors with the start_color() function. Now you're ready to define the colors your program will use.

In curses, you define colors in pairs: a foreground color on a background color. This allows curses to set both color attributes at once, which often is what you want to do. To establish a color pair, use init_pair() to define a foreground and background color, and associate it with an index number. The general syntax is:
```
init_pair(index, foreground, background);
```
Consoles support only eight basic colors: black, red, green, yellow, blue, magenta, cyan and white. These colors are defined for you with the following names:

* COLOR_BLACK
* COLOR_RED
* COLOR_GREEN
* COLOR_YELLOW
* COLOR_BLUE
* COLOR_MAGENTA
* COLOR_CYAN
* COLOR_WHITE
### Applying the Colors

In my adventure game, I'd like the grassy areas to be green and the player's "trail" to be a subtle yellow-on-green dotted path. Water should be blue, with the tildes in a similar cyan. I'd like mountains to be gray, but since gray isn't one of the eight basic colors, black text on a white background makes for a reasonable compromise. To make the player's character more visible, I'd like to use a garish red-on-magenta scheme. I can define these color pairs like so:
```
start_color();
init_pair(1, COLOR_YELLOW, COLOR_GREEN);
init_pair(2, COLOR_CYAN, COLOR_BLUE);
init_pair(3, COLOR_BLACK, COLOR_WHITE);
init_pair(4, COLOR_RED, COLOR_MAGENTA);
```
To make my color pairs easy to remember, my program defines a few symbolic constants:
```
#define GRASS_PAIR     1
#define EMPTY_PAIR     1
#define WATER_PAIR     2
#define MOUNTAIN_PAIR  3
#define PLAYER_PAIR    4
```
With these constants, my color definitions become:
```
start_color();
init_pair(GRASS_PAIR, COLOR_YELLOW, COLOR_GREEN);
init_pair(WATER_PAIR, COLOR_CYAN, COLOR_BLUE);
init_pair(MOUNTAIN_PAIR, COLOR_BLACK, COLOR_WHITE);
init_pair(PLAYER_PAIR, COLOR_RED, COLOR_MAGENTA);
```
Whenever you want to display text using a color, you just need to tell curses to set that color attribute. For good programming practice, you also should tell curses to undo the color combination when you're done using the colors. To set the color, use attron() before calling functions like mvaddch(), and then turn off the color attributes with attroff() afterward. For example, when I draw the player's character, I might do this:
```
attron(COLOR_PAIR(PLAYER_PAIR));
mvaddch(y, x, PLAYER);
attroff(COLOR_PAIR(PLAYER_PAIR));
```
Note that applying colors to your programs adds a subtle change to how you query the screen. Normally, the value returned by mvinch() is of type `chtype`. Without color attributes, this is basically an integer and can be used as such. But colors add extra attributes to the characters on the screen, so chtype carries extra color information in an extended bit pattern. If you use mvinch(), the returned value will contain this extra color value. To extract just the "text" value, such as in the is_move_okay() function, you need to apply a bitwise AND (&) with the A_CHARTEXT bit mask:
```
int is_move_okay(int y, int x)
{
    int testch;

    /* return true if the space is okay to move into */

    testch = mvinch(y, x);
    return (((testch & A_CHARTEXT) == GRASS)
            || ((testch & A_CHARTEXT) == EMPTY));
}
```
With these changes, I can update the adventure game to use colors:
```
/* quest.c */

#include <curses.h>
#include <stdlib.h>

#define GRASS    ' '
#define EMPTY    '.'
#define WATER    '~'
#define MOUNTAIN '^'
#define PLAYER   '*'

#define GRASS_PAIR     1
#define EMPTY_PAIR     1
#define WATER_PAIR     2
#define MOUNTAIN_PAIR  3
#define PLAYER_PAIR    4

int is_move_okay(int y, int x);
void draw_map(void);

int main(void)
{
    int y, x;
    int ch;

    /* initialize curses */

    initscr();
    keypad(stdscr, TRUE);
    cbreak();
    noecho();

    /* initialize colors */

    if (has_colors() == FALSE) {
        endwin();
        printf("Your terminal does not support color\n");
        exit(1);
    }

    start_color();
    init_pair(GRASS_PAIR, COLOR_YELLOW, COLOR_GREEN);
    init_pair(WATER_PAIR, COLOR_CYAN, COLOR_BLUE);
    init_pair(MOUNTAIN_PAIR, COLOR_BLACK, COLOR_WHITE);
    init_pair(PLAYER_PAIR, COLOR_RED, COLOR_MAGENTA);

    clear();

    /* initialize the quest map */

    draw_map();

    /* start player at lower-left */

    y = LINES - 1;
    x = 0;

    do {

        /* by default, you get a blinking cursor - use it to
           indicate player * */

        attron(COLOR_PAIR(PLAYER_PAIR));
        mvaddch(y, x, PLAYER);
        attroff(COLOR_PAIR(PLAYER_PAIR));
        move(y, x);
        refresh();

        ch = getch();

        /* test inputted key and determine direction */

        switch (ch) {
        case KEY_UP:
        case 'w':
        case 'W':
            if ((y > 0) && is_move_okay(y - 1, x)) {
                attron(COLOR_PAIR(EMPTY_PAIR));
                mvaddch(y, x, EMPTY);
                attroff(COLOR_PAIR(EMPTY_PAIR));
                y = y - 1;
            }
            break;
        case KEY_DOWN:
        case 's':
        case 'S':
            if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
                attron(COLOR_PAIR(EMPTY_PAIR));
                mvaddch(y, x, EMPTY);
                attroff(COLOR_PAIR(EMPTY_PAIR));
                y = y + 1;
            }
            break;
        case KEY_LEFT:
        case 'a':
        case 'A':
            if ((x > 0) && is_move_okay(y, x - 1)) {
                attron(COLOR_PAIR(EMPTY_PAIR));
                mvaddch(y, x, EMPTY);
                attroff(COLOR_PAIR(EMPTY_PAIR));
                x = x - 1;
            }
            break;
        case KEY_RIGHT:
        case 'd':
        case 'D':
            if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
                attron(COLOR_PAIR(EMPTY_PAIR));
                mvaddch(y, x, EMPTY);
                attroff(COLOR_PAIR(EMPTY_PAIR));
                x = x + 1;
            }
            break;
        }
    }
    while ((ch != 'q') && (ch != 'Q'));

    endwin();

    exit(0);
}

int is_move_okay(int y, int x)
{
    int testch;

    /* return true if the space is okay to move into */

    testch = mvinch(y, x);
    return (((testch & A_CHARTEXT) == GRASS)
            || ((testch & A_CHARTEXT) == EMPTY));
}

void draw_map(void)
{
    int y, x;

    /* draw the quest map */

    /* background */

    attron(COLOR_PAIR(GRASS_PAIR));
    for (y = 0; y < LINES; y++) {
        mvhline(y, 0, GRASS, COLS);
    }
    attroff(COLOR_PAIR(GRASS_PAIR));

    /* mountains, and mountain path */

    attron(COLOR_PAIR(MOUNTAIN_PAIR));
    for (x = COLS / 2; x < COLS * 3 / 4; x++) {
        mvvline(0, x, MOUNTAIN, LINES);
    }
    attroff(COLOR_PAIR(MOUNTAIN_PAIR));

    attron(COLOR_PAIR(GRASS_PAIR));
    mvhline(LINES / 4, 0, GRASS, COLS);
    attroff(COLOR_PAIR(GRASS_PAIR));

    /* lake */

    attron(COLOR_PAIR(WATER_PAIR));
    for (y = 1; y < LINES / 2; y++) {
        mvhline(y, 1, WATER, COLS / 3);
    }
    attroff(COLOR_PAIR(WATER_PAIR));
}
```
Unless you have a keen eye, you may not be able to spot all of the changes necessary to support color in the adventure game. The diff tool shows all the instances where functions were added or code was changed to support colors:
```
$ diff quest-color/quest.c quest/quest.c
12,17d11
< #define GRASS_PAIR 1
< #define EMPTY_PAIR 1
< #define WATER_PAIR 2
< #define MOUNTAIN_PAIR 3
< #define PLAYER_PAIR 4
<
33,46d26
< /* initialize colors */
<
< if (has_colors() == FALSE) {
< endwin();
< printf("Your terminal does not support color\n");
< exit(1);
< }
<
< start_color();
< init_pair(GRASS_PAIR, COLOR_YELLOW, COLOR_GREEN);
< init_pair(WATER_PAIR, COLOR_CYAN, COLOR_BLUE);
< init_pair(MOUNTAIN_PAIR, COLOR_BLACK, COLOR_WHITE);
< init_pair(PLAYER_PAIR, COLOR_RED, COLOR_MAGENTA);
<
61d40
< attron(COLOR_PAIR(PLAYER_PAIR));
63d41
< attroff(COLOR_PAIR(PLAYER_PAIR));
76d53
< attron(COLOR_PAIR(EMPTY_PAIR));
78d54
< attroff(COLOR_PAIR(EMPTY_PAIR));
86d61
< attron(COLOR_PAIR(EMPTY_PAIR));
88d62
< attroff(COLOR_PAIR(EMPTY_PAIR));
96d69
< attron(COLOR_PAIR(EMPTY_PAIR));
98d70
< attroff(COLOR_PAIR(EMPTY_PAIR));
106d77
< attron(COLOR_PAIR(EMPTY_PAIR));
108d78
< attroff(COLOR_PAIR(EMPTY_PAIR));
128,129c98
< return (((testch & A_CHARTEXT) == GRASS)
< || ((testch & A_CHARTEXT) == EMPTY));
---
> return ((testch == GRASS) || (testch == EMPTY));
140d108
< attron(COLOR_PAIR(GRASS_PAIR));
144d111
< attroff(COLOR_PAIR(GRASS_PAIR));
148d114
< attron(COLOR_PAIR(MOUNTAIN_PAIR));
152d117
< attroff(COLOR_PAIR(MOUNTAIN_PAIR));
154d118
< attron(COLOR_PAIR(GRASS_PAIR));
156d119
< attroff(COLOR_PAIR(GRASS_PAIR));
160d122
< attron(COLOR_PAIR(WATER_PAIR));
164d125
< attroff(COLOR_PAIR(WATER_PAIR));
```
### Let's Play—Now in Color

The program now has a more pleasant color scheme, more closely matching the original tabletop gaming map, with green fields, a blue lake and imposing gray mountains. The hero clearly stands out in red and magenta livery.

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-map_0.jpg)

Figure 1. A Simple Tabletop Game Map, with a Lake and Mountains

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-color-start.png)

Figure 2. The player starts the game in the lower-left corner.

![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-color-1.png)

Figure 3. The player can move around the play area, such as around the lake, through the mountain pass and into unknown regions.

With colors, you can represent information more clearly. This simple example uses colors to indicate playable areas (green) versus impassable regions (blue or gray). I hope you will use this example game as a starting point or reference for your own programs. You can do so much more with curses, depending on what you need your program to do.

In a follow-up article, I plan to demonstrate other features of the ncurses library, such as how to create windows and frames. In the meantime, if you are interested in learning more about curses, I encourage you to read Pradeep Padala's [NCURSES Programming HOWTO][3], at the Linux Documentation Project.
--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/programming-color-ncurses

作者:[Jim Hall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/jim-hall
[1]:http://www.linuxjournal.com/content/getting-started-ncurses
[2]:http://www.linuxjournal.com/content/creating-adventure-game-terminal-ncurses
[3]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO
translating by lujun9972
Save Some Battery On Our Linux Machines With TLP
======

![](http://www.linuxandubuntu.com/home/save-some-battery-on-our-linux-machines-with-tlp)

I have always found battery life on Linux to be somewhat shorter than on Windows. Nevertheless, this is [Linux][1] and we always have something up our sleeves.

Enter a small utility called TLP, which can actually save some juice on your device.

**TLP - Linux Advanced Power Management** is a small command-line utility that can genuinely help extend battery life by performing several tweaks on your Linux system. On Ubuntu and other Debian-based distributions, you can install it with:
```
sudo apt install tlp
```
[![install tlp in linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/install-tlp-in-linux.jpeg?1517926012)][2]

For other distributions, you can read the instructions on the [official website][3].

After installation is complete, you will have to run the following command to start TLP, but for the first time only; TLP will start automatically the next time you boot your system.

[![start tlp on linux](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/published/start-tlp-on-linux.jpeg?1517926209)][4]
Now TLP has started, and it has already applied the default configuration needed to save battery. Next, let's look at the configuration file. It is located at **/etc/default/tlp**. We need to edit this file to change various settings.

There are many options in this file; to enable an option, just remove the leading **#** character from its line. Each option comes with instructions about the values you can assign to it. Some of the things you will be able to do are:
* Autosuspend USB devices
* Define wireless devices to enable/disable at startup
* Spin down hard drives
* Switch off wireless devices
* Set CPU for performance or power savings
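As a hypothetical illustration of the uncomment-to-enable pattern, an edited `/etc/default/tlp` might contain lines like these. The option names follow TLP's documented naming scheme, but check the comments in your own file for the exact names and valid values for your TLP version:

```
# Enable TLP (0 = disabled, 1 = enabled)
TLP_ENABLE=1

# Autosuspend idle USB devices
USB_AUTOSUSPEND=1

# CPU scaling governor on AC power and on battery
CPU_SCALING_GOVERNOR_ON_AC=performance
CPU_SCALING_GOVERNOR_ON_BAT=powersave
```

After saving the file, run `sudo tlp start` again to apply the new settings.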
### Conclusion

TLP is an amazing utility that can help save battery life on Linux systems. I have personally seen battery life extended by at least 30-40% when using TLP.

--------------------------------------------------------------------------------

via: http://www.linuxandubuntu.com/home/save-some-battery-on-our-linux-machines-with-tlp

作者:[LinuxAndUbuntu][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/home/category/linux
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/install-tlp-in-linux.jpeg
[3]:http://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/edited/start-tlp-on-linux.jpeg
23 open source audio-visual production tools
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-blue.png?itok=AsIMZ9ga)

Open source is well established in cloud infrastructure, web hosting, embedded devices, and many other areas. Fewer people know that open source is a great option for producing professional-level audio-visual materials.

As a product owner and sometimes marketing support person, I produce a lot of content for end users: documentation, web articles, video tutorials, event booth materials, white papers, interviews, and more. I have found plenty of great open source software that helps me do my job producing audio, video, print, and screen graphics. There are a lot of [reasons][1] that people choose open source over proprietary options, and I've compiled this list of open source audio and video tools for people who:

* want to switch to GNU/Linux, but need to start slowly with cross-platform software on their regular operating system;
* are already open source enthusiasts, but are new to open source A/V software and want to know which options to trust;
* want to discover new tools to fuel their creativity and don't want to use the same approaches or software everyone else uses; or
* have some other reason to use open source A/V solutions (if this is you, share your reason in the comments).
Fortunately, there is a lot of open source software available for A/V creators, as well as hardware that supports those applications. All of the software on this list meets the following criteria:

* cross-platform
* open source (for software and drivers)
* stable
* actively maintained
* well documented and supported

I've divided this list into graphics, audio, video, and animation solutions. Note that the software applications in this article are not exact equivalents of well-known proprietary software; they'll require you to learn new applications, and you may need to modify your workflow, but learning new tools enables you to create differently.
### Graphics

I create a lot of graphics for print and web, including logos, banners, video titles, and mockups. Here are some of the open source applications I use, as well as the hardware I use with them.

#### Software

**1. [Inkscape][2]** (vector graphics)
Inkscape is a good vector graphics editor for creating SVG and PDF files in the RGB color space. (It can create CMYK images, but that's not the main aim.) It's a lifesaver for manipulating SVG maps and charts for web applications; not only can you open files with the integrated XML editor, you can also see all of an object's parameters. One drawback: it is not well optimized on Mac. For examples, see [Inkscape's gallery][3].

**2. [GIMP][4]** (picture editor)
GIMP is my favorite application for editing images, including manipulating color, cropping and resizing, and (especially) optimizing file size for the web (many of my Photoshop-using colleagues ask me to do that last step for them). You can also create and draw images from scratch, but GIMP is not my favorite tool for that. See [GIMP Artists on DeviantArt][5] for examples.

**3. [Krita][6]** (digital painting)
So you have this beautiful Wacom drawing tablet on your desk, and you want to try a true digital painting application? Krita is what you need to create beautiful drawings and paintings. See [Krita's Gallery][7] to see what I mean.

**4. [Scribus][8]** (desktop publishing)
You can use Scribus to create a complete document, or just to convert a PDF from Inkscape or LibreOffice from RGB to CMYK. One feature I really like: you can simulate and check what people with visual disabilities will experience with a Scribus document. I count on Scribus when I send PDF files to a commercial printer. While printing companies may be used to files created with proprietary solutions like InDesign, if your Scribus file is done correctly, your printer won't have any issues. Free trick: the first time you send a file, don't tell the printer the name of the software you used to create it. See [Made with Scribus][9] for examples of documents created with this software.

**5. [RawTherapee][10]** (RAW image photo development)
RawTherapee is the only completely cross-platform alternative to Lightroom I know of. You can use your camera in RAW mode, and then use RawTherapee to "develop" your picture. It provides a very powerful engine and a non-destructive editor. For examples, see [RawTherapee screenshots][11].

**6. [LibreOffice Draw][12]** (desktop publishing)
Although you may not think of LibreOffice Draw as a professional desktop publishing solution, it can save you in many situations; for example, if you are creating whitepapers, diagrams, or posters that other people (even those who don't know graphics software) can update later. Not only is it easy to use, it's also a great alternative to Impress or PowerPoint for creating interesting documents.

#### Graphics hardware

**Graphics tablets**
[Wacom][13] tablets (and compatibles) are usually well supported on all operating systems.

**Color calibration**
Color calibration products are available on all operating systems, including GNU/Linux. The [Spyder][14] products by Datacolor are well supported with applications for all platforms.

**Scanners and printers**
Graphic artists need the colors they output (whether print or electronic) to be accurate. But devices that are truly cross-platform, with easy-to-install drivers for all platforms, are not as common as you'd think. Your best choices are scanners that are compatible with TWAIN and printers that are compatible with PostScript. In my experience, professional-range printers and scanners from Epson and Xerox are less likely to have driver issues, and they always work out of the box, with beautiful and accurate colors.
### Audio

There are plenty of open source audio software options for musicians, video makers, game makers, music publishers, and others. Here are the ones that I've used for content creation and audio recording.

#### Software

**7. [Ardour][15]** (digital audio recording)
For recording and editing audio, the best alternative to the professional Pro Tools music-creation software is, hands down, Ardour. It sounds great, the mixer section is complete and flexible, it supports your favorite plugins, and it makes it very easy to edit, listen, and compare your modifications. I use it a lot for audio recording or mixing sound on videos. It's not easy to find music recorded with Ardour, because musicians rarely credit the software they use. However, you can get an idea of its capabilities by looking at its [features and screenshots][16].

(If you are looking for an "analog feeling" in terms of sound and workflow, you can try [Harrison Mixbus][17], which is not an open source project, but is heavily based on Ardour, with Harrison's analog console emulator. I really like to work with it and my customers like the sound. Mixbus is cross-platform.)

**8. [Audacity][18]** (audio editing)
Audacity is the "Swiss Army knife" of audio software. It's not perfect, but you can do almost everything with it. Plus it's very easy to use, and anyone can learn it in a few minutes. Like Ardour, it's hard to find work credited to Audacity, but you can find ways to use it in these [screenshots][19].

**9. [LMMS][20]** (music production)
LMMS, designed as an alternative to FL Studio, might not be as popular, but it is very complete and easy to use. You can use your favorite plugins, edit instruments using "piano roll" sequencing, play drum samples with a step sequencer, mix your tracks ... almost anything is possible. I use it to create audio loops for videos when I don't have the time to record musicians. See [The Best of LMMS][21] playlists for examples.

**10. [Mixxx][22]** (DJ, music mixing)
If you need powerful software to mix music and DJ, Mixxx is the one to use. It's compatible with most MIDI controllers, timecoded discs, and dedicated sound cards. You can manage your music library, add effects, and have fun. Take a look at the [features][23] to see how it works.

#### Audio interface hardware

While you can record audio with any computer's sound card, to record well, you need an audio interface—a specialized type of external sound card that records high-quality audio input. For cross-platform compatibility, most "USB Class Compliant" or "compatible with iOS" audio interface devices should work for MIDI or other audio. Below is a list of cross-platform devices I use and know well.

**[Behringer U-PHORIA UMC22][24]**
The UMC22 is the cheapest option you should consider. With less expensive options, the preamps are too noisy and the quality of the box is very low.

**[Presonus AudioBox USB][25]**
The AudioBox USB is one of the first USB Class Compliant (and thereby cross-platform) recording systems out there. It is very robust and available on the second-hand market.

**[Focusrite Scarlett][26]**
The Scarlett range is, in my opinion, the highest-quality cross-platform sound card available. The various options range from devices with two to 18 inputs/outputs. You can find first-version models on the second-hand market, and the new second version offers better preamps and specs. I've worked a lot with the [2i2][27] model.

**[Arturia AudioFuse][28]**
The AudioFuse allows you to plug in nearly anything, from a microphone to a vinyl disc player to various digital inputs. It provides both great sound and great design, and it's what I'm using the most now. It is cross-platform, but the configuration software is not yet available for GNU/Linux. It remembers my configuration even after I unplug it from my Windows PC, but really, Arturia, please be serious and make the software available for GNU/Linux.

#### MIDI controllers

A MIDI controller is a musical instrument—e.g., keyboards, drum pads, etc.—that allows you to control music software and hardware. Most of the recent USB MIDI controllers are cross-platform and compatible with the main software used to record and edit audio. Web-based tutorials will help you configure them for different software; although it may be harder to find info on GNU/Linux configurations, they will work. I've used many Akai and M-Audio devices without any issues. It's best to try a musical instrument before you buy, at least to listen to the sound quality or to touch the buttons.

#### Audio codecs

Audio codecs compress and decompress digital audio to deliver the best-quality audio at the smallest possible file size. Fortunately, the best codec for listening and streaming happens to be open source: [FLAC][29]. [Ogg Vorbis][30] is another open source audio codec worth checking out; it's far better than MP3 at the same bitrate. If you need to export audio in different file formats, I recommend always exporting and archiving audio at the best possible quality, then compressing a specific version if it's needed.
### Video

The impact of video in brand communications is significant. Even if you are not a video specialist, it's smart to learn the basics.

#### Software

**11. [VLC][31]** (video player and converter)
Originally developed for media streaming, VLC is now known for its ability to read all video formats on all devices. It's very useful; for example, you can also use it to convert a video into another codec or container, or to recover a broken video.

**12. [OpenShot][32]** (video editor)
OpenShot is simple software that produces great results, especially for short videos. (It is a bit limited in terms of editing or improving the sound of a video, but it will do the job.) I especially like the tool to move, resize, or crop a clip; it's perfect for creating intros and outros that you can export, then use in a more complex editor. You can see [examples][33] (and get more information) on OpenShot's website.

**13. [Shotcut][34]** (video editor)
I think Shotcut is a bit more complete than OpenShot—it's a very good competitor to the basic editors in your operating system, and it supports 4K and professional codecs. Give it a try; I think you will love it. You can see examples in these [video tutorials][35].

**14. [Blender Velvets][36]** (video editing, compositing, effects)
While the learning curve is not the lightest on this list, Blender Velvets is one of the most powerful solutions you will find. It is a collection of extensions and scripts, created by movie makers, that transforms the Blender 3D creation software into a 2D video editor. While its complexity means it's not my top choice for video editing, you can find plenty of tutorials on YouTube and other sites, and once you learn it, you can do everything with this software. Watch this [tutorial video][37] to see its functions and how it works.

**15. [Natron][38]** (compositing)
I don't use Natron, but I've gotten great feedback from people who do. It's an alternative to Adobe's After Effects, but it works differently. To learn more, watch a few video tutorials, like these on [Natron's YouTube][39] channel.

**16. [OBS][40]** (live editing, recording, and streaming)
Open Broadcaster Software (OBS) is the leading solution for recording or [livestreaming][41] e-sports and video games on YouTube or Twitch. I use it a lot to record users' screens, conferences, meetups, etc. For more information, see the tutorial I wrote for Opensource.com about recording live presentations: [Part 1: Choosing your equipment][42] and [Part 2: Software setup][43].

#### Video hardware

First things first: you will need a powerful workstation with a fast hard drive and updated software and drivers.

**Graphics processing unit (GPU)**
Some software on this list, including Blender and Shotcut, uses OpenGL and hardware acceleration, which have high GPU demands. I recommend the most powerful GPU you can afford. I've had good experiences with AMD and Nvidia, depending on the platform. Don't forget to install the latest drivers.

**Hard drives**
In general, the faster and bigger the hard drive, the better it is for video. Don't forget to configure your software to use the right path.

**Video capture hardware**

* [Blackmagic Design][44]: Blackmagic provides very good, professional-grade video capture and playback hardware. Drivers are available for Mac, Windows, and GNU/Linux (but not all distributions).
* [Epiphan][45]: Among Epiphan's professional USB video capture devices is a new USB Class Compliant model for HDMI and high screen resolutions. You can also find the older VGA devices on the secondhand market, for which Epiphan continues to provide dedicated drivers for GNU/Linux and Windows.

#### Video codecs

Unfortunately, it is still difficult to work with open source codecs. For example, many cameras use proprietary codecs to record video in H.264 and sound in AC3, in a format called AVCHD. Therefore, we have to be pragmatic and use what is available.
|
||||
|
||||
The good news is that the content industry is moving to open source codecs to avoid fees and to use open standards. For distribution and streaming, [Google's WebM][46] is a good open source codec, and most video editors can export in that format. Also, [GoPro's Cineform][47] codec for very high resolution and 360° video is now open source. Hopefully more devices and vendors will use it soon.
|
||||
|
||||
### 2D and 3D animation
|
||||
|
||||
Animation is not my field of expertise, so I've asked my friends who are working on animated content, including movies and series for kids, for their recommendations to compile this list.
|
||||
|
||||
#### Software
|
||||
|
||||
**17.[Blender][48]** (3D modeling and rendering)
|
||||
Blender is the top open source and cross-platform software for 3D modeling and rendering. You can do your entire project directly in Blender, or use it to create 3D effects for a movie or video. You will find a lot of video tutorials on the web, so even though it isn't simple software, it's very easy to get started. Blender is a very active project and regularly produces short movies to showcase the technology. You can see some of them on [Blender Open Movies][49].
|
||||
|
||||
**18.[Synfig Studio][50]** (2D animation)
|
||||
The first time I used Synfig, it reminded me of the good, old Macromedia Flash editor. Since then, it has grown into a full-featured 2D animation studio. You can use it to produce promotional stories, commercials, presentations, or original intros, outros, and transitions for your videos, or even to work on full animated movies. See [Synfig's portfolio][51] for some examples.
|
||||
|
||||
**19.[TupiTube][52]** (stop-motion, 2D animation)
|
||||
TupiTube is an excellent way to learn the basics of 2D animation. You can transform a set of drawings or other pictures into a video or create an animated GIF or small loops. It's quite simple software, but very complete. Check out [TupiTube's YouTube][53] channel for some tutorials and examples.
|
||||
|
||||
#### Hardware
|
||||
|
||||
Animation uses the same hardware as graphic design, so look at the hardware list in the first section of this article for recommendations.
|
||||
|
||||
One additional note: You will need a powerful GPU for 3D modeling and rendering. The choices can be limited, depending on your platform or PC maker, but don't forget to install the latest drivers. Carefully choose your graphics card: they are expensive and critical for big 3D projects, particularly in the rendering step.
|
||||
|
||||
### Linux options
|
||||
|
||||
If you are a GNU/Linux user, I have some more good options for you. They aren't fully cross-platform, but some of them have a Windows installer, and some can be installed on Mac with MacPorts.
|
||||
|
||||
**20.[Kdenlive][54]** (video editor)
|
||||
With its latest release (a few months ago), Kdenlive became my favorite video editor, especially when I work on a long video on my Linux machine. If you are a regular user of popular non-linear video editors, Kdenlive (which stands for KDE Non-Linear Video Editor) will be easy for you to use. It has good video and audio effects and is great when you need to work on details. It works on BSD and macOS (although it's aimed at GNU/Linux) and is being ported to Windows.
|
||||
|
||||
**21.[Darktable][55]** (RAW development)
|
||||
Darktable is a very complete alternative to DxO that is made by photographers for photographers. Some research projects are using it as a platform for development and testing of new image processing algorithms. It is a very active project, and I can't wait until it becomes truly cross-platform.
|
||||
|
||||
**22.[MyPaint][56]** (digital painting)
|
||||
MyPaint is like a light table for digital painting. It works well with Wacom devices, and its brush engine is particularly appreciated, so GIMP developers are looking closely at it.
|
||||
|
||||
**23.[Shutter][57]** (desktop screenshots)
|
||||
When I create tutorials, I use a lot of screenshots to illustrate them. My favorite screenshot tool for GNU/Linux is Shutter; actually, I can't find an equivalent in terms of features for Windows or Mac. One missing piece: I would like to see Shutter add a feature to create animated GIF screenshots over a few seconds.
|
||||
|
||||
I hope this has convinced you that open source software is an excellent, viable solution for A/V content producers. If you are using other open source software—or have advice about using cross-platform software and hardware—for audio and video projects, please share your ideas in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/open-source-audio-visual-production-tools
|
||||
|
||||
作者:[Antoine Thomas][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/ttoine
|
||||
[1]:https://opensource.com/resources/what-open-source
|
||||
[2]:https://inkscape.org/
|
||||
[3]:https://inkscape.org/en/gallery/
|
||||
[4]:https://www.gimp.org/
|
||||
[5]:https://gimp-artists.deviantart.com/gallery/
|
||||
[6]:https://krita.org/
|
||||
[7]:https://krita.org/en/features/gallery/
|
||||
[8]:https://www.scribus.net/
|
||||
[9]:https://www.scribus.net/category/made-with-scribus/
|
||||
[10]:http://rawtherapee.com/
|
||||
[11]:http://rawtherapee.com/blog/screenshots
|
||||
[12]:https://www.libreoffice.org/discover/draw/
|
||||
[13]:http://www.wacom.com/en-us
|
||||
[14]:http://www.datacolor.com/photography-design/product-overview/#workflow_2
|
||||
[15]:https://www.ardour.org/
|
||||
[16]:http://ardour.org/features.html
|
||||
[17]:http://harrisonconsoles.com/site/mixbus.html
|
||||
[18]:http://www.audacityteam.org/
|
||||
[19]:http://www.audacityteam.org/about/screenshots/
|
||||
[20]:https://lmms.io/
|
||||
[21]:https://lmms.io/showcase/
|
||||
[22]:https://www.mixxx.org/
|
||||
[23]:https://www.mixxx.org/features/
|
||||
[24]:http://www.musictri.be/Categories/Behringer/Computer-Audio/Interfaces/UMC22/p/P0AUX
|
||||
[25]:https://www.presonus.com/products/audiobox-usb
|
||||
[26]:https://us.focusrite.com/scarlett-range
|
||||
[27]:https://us.focusrite.com/usb-audio-interfaces/scarlett-2i2
|
||||
[28]:https://www.arturia.com/products/audio/audiofuse/overview
|
||||
[29]:https://en.wikipedia.org/wiki/FLAC
|
||||
[30]:https://xiph.org/vorbis/
|
||||
[31]:https://www.videolan.org/
|
||||
[32]:https://www.openshot.org/
|
||||
[33]:https://www.openshot.org/videos/
|
||||
[34]:https://shotcut.com/
|
||||
[35]:https://shotcut.org/tutorials/
|
||||
[36]:http://blendervelvets.org/
|
||||
[37]:http://blendervelvets.org/video-tutorial-new-functions-for-the-blender-velvets/
|
||||
[38]:https://natron.fr/
|
||||
[39]:https://www.youtube.com/playlist?list=PL2n8LbT_b5IeMwi3AIzqG4Rbg8y7d6Amk
|
||||
[40]:https://obsproject.com/
|
||||
[41]:https://opensource.com/article/17/7/obs-studio-pro-level-streaming
|
||||
[42]:https://opensource.com/article/17/9/equipment-recording-presentations
|
||||
[43]:https://opensource.com/article/17/9/equipment-setup-live-presentations
|
||||
[44]:https://www.blackmagicdesign.com/
|
||||
[45]:https://www.epiphan.com/
|
||||
[46]:https://www.webmproject.org/
|
||||
[47]:https://fr.gopro.com/news/gopro-open-sources-the-cineform-codec
|
||||
[48]:https://www.blender.org/
|
||||
[49]:https://www.blender.org/about/projects/
|
||||
[50]:https://www.synfig.org/
|
||||
[51]:https://www.synfig.org/#portfolio
|
||||
[52]:https://maefloresta.com/
|
||||
[53]:https://www.youtube.com/channel/UCBavSfmoZDnqZalr52QZRDw
|
||||
[54]:https://kdenlive.org/
|
||||
[55]:https://www.darktable.org/
|
||||
[56]:http://mypaint.org/
|
||||
[57]:http://shutter-project.org/
|
|
||||
translating---geekpi
|
||||
|
||||
How To Easily Correct Misspelled Bash Commands In Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/02/Correct-Misspelled-Bash-Commands-720x340.png)
|
||||
|
||||
I know, I know! You could just hit the UP arrow to bring up the command you just ran, navigate to the misspelled word using the LEFT/RIGHT keys, correct the misspelled word(s), and finally hit the ENTER key to run it again, right? But wait. There is another, easier way to correct misspelled Bash commands in GNU/Linux. This brief tutorial explains how to do it. Read on.
|
||||
|
||||
### Correct Misspelled Bash Commands In Linux
|
||||
|
||||
Have you ever run a mistyped command like the one below?
|
||||
```
|
||||
$ unme -r
|
||||
bash: unme: command not found
|
||||
|
||||
```
|
||||
|
||||
Did you notice? There is a typo in the above command. I missed the letter “a” in the “uname” command.
|
||||
|
||||
I have made this kind of silly mistake on many occasions. Before I knew this trick, I used to hit the UP arrow to bring up the command, go to the misspelled word, correct the spelling, and hit the ENTER key to run the command again. But believe me, the trick below makes it much easier to correct typos and spelling mistakes in a command you just ran.
|
||||
|
||||
To easily correct the above misspelled command, just run:
|
||||
```
|
||||
$ ^nm^nam^
|
||||
|
||||
```
|
||||
|
||||
This replaces the first occurrence of "nm" with "nam" in the previous command, turning "unme" into "uname". Cool, yeah? It not only corrects the typo but also runs the corrected command. Check the following screenshot.
|
||||
|
||||
![][2]
|
||||
|
||||
Use this trick whenever you make a typo in a command. Please note that it works only in the Bash shell.
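The `^old^new^` event designator is handled by Bash's interactive history expansion, so it won't run inside a script, but the same substitute-and-rerun idea can be sketched with parameter expansion (the variable names below are just for illustration):

```shell
# Store the mistyped command, fix it with ${var/old/new}, then run it.
cmd='unme -r'              # the mistyped command
fixed=${cmd/unme/uname}    # replace the first match of "unme" with "uname"
echo "$fixed"              # uname -r
$fixed                     # runs the corrected command
```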
|
||||
|
||||
**Bonus tip:**
|
||||
|
||||
Have you ever wondered how to automatically correct spelling mistakes and typos when using the `cd` command? No? It's alright! The following trick explains how to do it.
|
||||
|
||||
This trick only helps correct spelling mistakes and typos in directory names given to the `cd` command.
|
||||
|
||||
Let us say you want to switch to the "Downloads" directory using the command:
|
||||
```
|
||||
$ cd Donloads
|
||||
bash: cd: Donloads: No such file or directory
|
||||
|
||||
```
|
||||
|
||||
Oops! There is no file or directory named "Donloads". The correct name is "Downloads"; the "w" is missing in the above command.
|
||||
|
||||
To fix this issue and automatically correct typos when using the `cd` command, edit your **.bashrc** file:
|
||||
```
|
||||
$ vi ~/.bashrc
|
||||
|
||||
```
|
||||
|
||||
Add the following line at the end:
|
||||
```
|
||||
[...]
|
||||
shopt -s cdspell
|
||||
|
||||
```
|
||||
|
||||
Type **:wq** to save and exit the file.
|
||||
|
||||
Finally, run the following command to apply the changes:
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
|
||||
```
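To confirm the option is active in your current shell, you can query `shopt` directly (a quick sanity check; the exact whitespace in the output may vary):

```shell
# Enable the option (as the .bashrc line does) and verify it is on.
shopt -s cdspell
shopt cdspell    # prints something like: cdspell        on
```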
|
||||
|
||||
Now, if there is a minor typo or spelling mistake in a directory name given to the `cd` command, Bash will automatically correct it and land you in the right directory.
|
||||
|
||||
![][3]
|
||||
|
||||
As you can see above, I intentionally made a typo ("Donloads" instead of "Downloads"), but Bash automatically detected the correct directory name and changed into it.
|
||||
|
||||
The [**Fish**][4] and **Zsh** shells have this feature built in, so you don't need this trick if you use them.
|
||||
|
||||
This trick, however, has some limitations. It works only if you use the correct case. In the above example, if you type "cd donloads" instead of "cd Donloads", it won't recognize the correct path. Also, if more than one letter is missing from the name, it won't work either.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/easily-correct-misspelled-bash-commands-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/misspelled-command.png
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2018/02/cd-command.png
|
||||
[4]:https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/
|
|
||||
Python Global Keyword (With Examples)
|
||||
======
|
||||
Before reading this article, make sure you have got some basics of [Python Global, Local and Nonlocal Variables][1].
|
||||
|
||||
### Introduction to global Keyword
|
||||
|
||||
In Python, the `global` keyword allows you to modify a variable outside of the current scope. It is used to create a global variable and to make changes to that variable in a local context.
|
||||
|
||||
#### Rules of global Keyword
|
||||
|
||||
The basic rules for the `global` keyword in Python are:
|
||||
|
||||
* When we create a variable inside a function, it’s local by default.
|
||||
  * When we define a variable outside of a function, it’s global by default. You don’t have to use the `global` keyword.
|
||||
  * We use the `global` keyword to read and write a global variable inside a function.
|
||||
  * Using the `global` keyword outside a function has no effect.
|
||||
|
||||
|
||||
|
||||
#### Use of global Keyword (With Example)
|
||||
|
||||
Let’s take an example.
|
||||
|
||||
##### Example 1: Accessing global Variable From Inside a Function
|
||||
```
|
||||
c = 1 # global variable
|
||||
|
||||
def add():
|
||||
print(c)
|
||||
|
||||
add()
|
||||
|
||||
```
|
||||
|
||||
When we run the above program, the output will be:
|
||||
```
|
||||
1
|
||||
|
||||
```
|
||||
|
||||
However, we may have some scenarios where we need to modify the global variable from inside a function.
|
||||
|
||||
##### Example 2: Modifying Global Variable From Inside the Function
|
||||
```
|
||||
c = 1 # global variable
|
||||
|
||||
def add():
|
||||
c = c + 2 # increment c by 2
|
||||
print(c)
|
||||
|
||||
add()
|
||||
|
||||
```
|
||||
|
||||
When we run the above program, the output shows an error:
|
||||
```
|
||||
UnboundLocalError: local variable 'c' referenced before assignment
|
||||
|
||||
```
|
||||
|
||||
This is because assigning to c inside the function makes c a local variable, and Python raises an error when we try to read it before it has been assigned. We can read a global variable from inside a function, but we cannot assign to it without declaring it global.
|
||||
|
||||
The solution for this is to use the `global` keyword.
|
||||
|
||||
##### Example 3: Changing Global Variable From Inside a Function using global
|
||||
```
|
||||
c = 0 # global variable
|
||||
|
||||
def add():
|
||||
global c
|
||||
c = c + 2 # increment by 2
|
||||
print("Inside add():", c)
|
||||
|
||||
add()
|
||||
print("In main:", c)
|
||||
|
||||
```
|
||||
|
||||
When we run the above program, the output will be:
|
||||
```
|
||||
Inside add(): 2
|
||||
In main: 2
|
||||
|
||||
```
|
||||
|
||||
In the above program, we declare c as global inside the `add()` function using the `global` keyword.
|
||||
|
||||
Then, we increment the variable c by `2`, i.e. `c = c + 2`. After that, we call the `add()` function. Finally, we print the global variable c.
|
||||
|
||||
As we can see, the change also occurred in the global variable outside the function, `c = 2`.
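Relying on `global` can make code harder to follow, so a common alternative (my own sketch, not part of the original tutorial) is to pass the value in and return the result, rebinding the global explicitly at the call site:

```python
c = 0  # global variable

def add(value):
    """Return the value incremented by 2 instead of mutating a global."""
    return value + 2

c = add(c)      # rebind the global explicitly at the call site
print(c)        # 2
```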
|
||||
|
||||
### Global Variables Across Python Modules
|
||||
|
||||
In Python, we create a single module `config.py` to hold global variables and share information across Python modules within the same program.
|
||||
|
||||
Here is how we can share a global variable across Python modules.
|
||||
|
||||
##### Example 4 : Share a global Variable Across Python Modules
|
||||
|
||||
Create a `config.py` file to store the global variables:
|
||||
```
|
||||
a = 0
|
||||
b = "empty"
|
||||
|
||||
```
|
||||
|
||||
Create an `update.py` file to change the global variables:
|
||||
```
|
||||
import config
|
||||
|
||||
config.a = 10
|
||||
config.b = "alphabet"
|
||||
|
||||
```
|
||||
|
||||
Create a `main.py` file to test the changes in value:
|
||||
```
|
||||
import config
|
||||
import update
|
||||
|
||||
print(config.a)
|
||||
print(config.b)
|
||||
|
||||
```
|
||||
|
||||
When we run the `main.py` file, the output will be
|
||||
```
|
||||
10
|
||||
alphabet
|
||||
|
||||
```
|
||||
|
||||
In the above, we create three files: `config.py`, `update.py` and `main.py`.
|
||||
|
||||
The module `config.py` stores the global variables a and b. In the `update.py` file, we import the `config.py` module and modify the values of a and b. Similarly, in the `main.py` file, we import both the `config.py` and `update.py` modules. Finally, we print the values of the global variables to test whether they were changed.
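The same pattern can be sketched in a single file by letting a `types.SimpleNamespace` object stand in for the shared `config` module (an illustration of the idea, not how you would structure a real project):

```python
from types import SimpleNamespace

# Plays the role of config.py: one shared object holding the state.
config = SimpleNamespace(a=0, b="empty")

def update(cfg):
    # Plays the role of update.py: mutate the shared state in place.
    cfg.a = 10
    cfg.b = "alphabet"

update(config)
print(config.a)  # 10
print(config.b)  # alphabet
```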
|
||||
|
||||
### Global in Nested Functions
|
||||
|
||||
Here is how you can use a global variable in a nested function.
|
||||
|
||||
##### Example 5: Using a Global Variable in Nested Function
|
||||
```
|
||||
def foo():
|
||||
x = 20
|
||||
|
||||
def bar():
|
||||
global x
|
||||
x = 25
|
||||
|
||||
print("Before calling bar: ", x)
|
||||
print("Calling bar now")
|
||||
bar()
|
||||
print("After calling bar: ", x)
|
||||
|
||||
foo()
|
||||
print("x in main : ", x)
|
||||
|
||||
```
|
||||
|
||||
The output is :
|
||||
```
|
||||
Before calling bar: 20
|
||||
Calling bar now
|
||||
After calling bar: 20
|
||||
x in main : 25
|
||||
|
||||
```
|
||||
|
||||
In the above program, we declare a global variable inside the nested function `bar()`. Inside the `foo()` function, x is unaffected by the `global` keyword.
|
||||
|
||||
Before and after calling `bar()`, the variable x inside `foo()` keeps the value of the local variable, i.e. `x = 20`. Outside the `foo()` function, the variable x takes the value defined in the `bar()` function, i.e. `x = 25`. This is because we used the `global` keyword on x to create a global variable inside the `bar()` function (a local scope).
|
||||
|
||||
If we make any change to x inside the `bar()` function, that change appears in the global scope, outside of `foo()`.
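For completeness: when you only want to reach the enclosing function's variable rather than the module level, Python's `nonlocal` keyword is usually the better fit (this example is mine, not from the original):

```python
def foo():
    x = 20

    def bar():
        nonlocal x   # rebind foo()'s x instead of creating a global
        x = 25

    bar()
    return x

print(foo())  # 25 -- the enclosing x was updated, no global involved
```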
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.programiz.com/python-programming/global-keyword
|
||||
|
||||
作者:[programiz][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.programiz.com
|
||||
[1]:https://www.programiz.com/python-programming/global-local-nonlocal-variables
|
|
||||
Advanced Dnsmasq Tips and Tricks
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_3.25.47_pm.png?itok=2YaDe86d)
|
||||
|
||||
Many people know and love Dnsmasq and rely on it for their local name services. Today we look at advanced configuration file management, how to test your configurations, some basic security, DNS wildcards, speedy DNS configuration, and some other tips and tricks. Next week, we'll continue with a detailed look at how to configure DNS and DHCP.
|
||||
|
||||
### Testing Configurations
|
||||
|
||||
When you're testing new configurations, you should run Dnsmasq from the command line, rather than as a daemon. This example starts it without launching the daemon, prints command output, and logs all activity:
|
||||
```
|
||||
# dnsmasq --no-daemon --log-queries
|
||||
dnsmasq: started, version 2.75 cachesize 150
|
||||
dnsmasq: compile time options: IPv6 GNU-getopt
|
||||
DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack
|
||||
ipset auth DNSSEC loop-detect inotify
|
||||
dnsmasq: reading /etc/resolv.conf
|
||||
dnsmasq: using nameserver 192.168.0.1#53
|
||||
dnsmasq: read /etc/hosts - 9 addresses
|
||||
|
||||
```
|
||||
|
||||
You can see tons of useful information in this small example, including version, compiled options, system name service files, and its listening address. Ctrl+c stops it. By default, Dnsmasq does not have its own log file, so entries are dumped into multiple locations in `/var/log`. You can use good old `grep` to find Dnsmasq log entries. This example searches `/var/log` recursively, prints the line numbers after the filenames, and excludes `/var/log/dist-upgrade`:
|
||||
```
|
||||
# grep -ir --exclude-dir=dist-upgrade dnsmasq /var/log/
|
||||
|
||||
```
|
||||
|
||||
Note the fun grep gotcha with `--exclude-dir=`: Don't specify the full path, but just the directory name.
|
||||
|
||||
You can give Dnsmasq its own logfile with this command-line option, using whatever file you want:
|
||||
```
|
||||
# dnsmasq --no-daemon --log-queries --log-facility=/var/log/dnsmasq.log
|
||||
|
||||
```
|
||||
|
||||
Or enter it in your Dnsmasq configuration file as `log-facility=/var/log/dnsmasq.log`.
|
||||
|
||||
### Configuration Files
|
||||
|
||||
Dnsmasq is configured in `/etc/dnsmasq.conf`. Your Linux distribution may also use `/etc/default/dnsmasq`, `/etc/dnsmasq.d/`, and `/etc/dnsmasq.d-available/`. (No, there cannot be a universal method, as that is against the will of the Linux Cat Herd Ruling Cabal.) You have a fair bit of flexibility to organize your Dnsmasq configuration in a way that pleases you.
|
||||
|
||||
`/etc/dnsmasq.conf` is the grandmother as well as the boss. Dnsmasq reads it first at startup. `/etc/dnsmasq.conf` can call other configuration files with the `conf-file=` option, for example `conf-file=/etc/dnsmasqextrastuff.conf`, and directories with the `conf-dir=` option, e.g. `conf-dir=/etc/dnsmasq.d`.
|
||||
|
||||
Whenever you make a change in a configuration file, you must restart Dnsmasq.
|
||||
|
||||
You may include or exclude configuration files by extension. The asterisk means include, and the absence of the asterisk means exclude:
|
||||
```
|
||||
conf-dir=/etc/dnsmasq.d/,*.conf, *.foo
|
||||
conf-dir=/etc/dnsmasq.d,.old, .bak, .tmp
|
||||
|
||||
```
|
||||
|
||||
You may store your host configurations in multiple files with the `--addn-hosts=` option.
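For example (these paths are hypothetical), additional hosts files can be listed in `dnsmasq.conf` like this:

```
addn-hosts=/etc/dnsmasq-hosts/lan-hosts
addn-hosts=/etc/dnsmasq-hosts/lab-hosts
```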
|
||||
|
||||
Dnsmasq includes a syntax checker:
|
||||
```
|
||||
$ dnsmasq --test
|
||||
dnsmasq: syntax check OK.
|
||||
|
||||
```
|
||||
|
||||
### Useful Configurations
|
||||
|
||||
Always include these lines:
|
||||
```
|
||||
domain-needed
|
||||
bogus-priv
|
||||
|
||||
```
|
||||
|
||||
These prevent packets with malformed domain names and packets with private IP addresses from leaving your network.
|
||||
|
||||
This limits your name services exclusively to Dnsmasq, and it will not use `/etc/resolv.conf` or any other system name service files:
|
||||
```
|
||||
no-resolv
|
||||
|
||||
```
|
||||
|
||||
Reference other name servers. The first example is for a local private domain. The second and third examples are OpenDNS public servers:
|
||||
```
|
||||
server=/fooxample.com/192.168.0.1
|
||||
server=208.67.222.222
|
||||
server=208.67.220.220
|
||||
|
||||
```
|
||||
|
||||
Or restrict just local domains while allowing external lookups for other domains. These are answered only from `/etc/hosts` or DHCP:
|
||||
```
|
||||
local=/mehxample.com/
|
||||
local=/fooxample.com/
|
||||
|
||||
```
|
||||
|
||||
Restrict which network interfaces Dnsmasq listens to:
|
||||
```
|
||||
interface=eth0
|
||||
interface=wlan1
|
||||
|
||||
```
|
||||
|
||||
Dnsmasq, by default, reads and uses `/etc/hosts`. This is a fabulously fast way to configure a lot of hosts, and the `/etc/hosts` file only has to exist on the same computer as Dnsmasq. You can make the process even faster by entering only the hostnames in `/etc/hosts`, and use Dnsmasq to add the domain. `/etc/hosts` looks like this:
|
||||
```
|
||||
127.0.0.1 localhost
|
||||
192.168.0.1 host2
|
||||
192.168.0.2 host3
|
||||
192.168.0.3 host4
|
||||
|
||||
```
|
||||
|
||||
Then add these lines to `dnsmasq.conf`, using your own domain, of course:
|
||||
```
|
||||
expand-hosts
|
||||
domain=mehxample.com
|
||||
|
||||
```
|
||||
|
||||
Dnsmasq will automatically expand the hostnames to fully qualified domain names, for example, host2 to host2.mehxample.com.
|
||||
|
||||
### DNS Wildcards
|
||||
|
||||
In general, DNS wildcards are not a good practice because they invite abuse. But there are times when they are useful, such as inside the nice protected confines of your LAN. For example, Kubernetes clusters are considerably easier to manage with wildcard DNS, unless you enjoy making DNS entries for your hundreds or thousands of applications. Suppose your Kubernetes domain is mehxample.com; in Dnsmasq a wildcard that resolves all requests to mehxample.com looks like this:
|
||||
```
|
||||
address=/mehxample.com/192.168.0.5
|
||||
|
||||
```
|
||||
|
||||
The address to use in this case is the public IP address for your cluster. This answers requests for hosts and subdomains in mehxample.com, except for any that are already configured in DHCP or `/etc/hosts`.
|
||||
|
||||
Next week, we'll go into more detail on managing DNS and DHCP, including different options for different subnets, and providing authoritative name services.
|
||||
|
||||
### Additional Resources
|
||||
|
||||
* [DNS Spoofing with Dnsmasq][1]
|
||||
|
||||
* [Dnsmasq For Easy LAN Name Services][2]
|
||||
|
||||
* [Dnsmasq][3]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/2/advanced-dnsmasq-tips-and-tricks
|
||||
|
||||
作者:[CARLA SCHRODER][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/cschroder
|
||||
[1]:https://www.linux.com/learn/intro-to-linux/2017/7/dns-spoofing-dnsmasq
|
||||
[2]:https://www.linux.com/learn/dnsmasq-easy-lan-name-services
|
||||
[3]:http://www.thekelleys.org.uk/dnsmasq/doc.html
|
|
||||
Apache Beam: a Python example
|
||||
======
|
||||
|
||||
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/EOfIfmx0QlDgc6rDnuNq)
|
||||
|
||||
Nowadays, being able to handle huge amounts of data is a valuable skill: analytics, user profiling, statistics. Virtually any business that needs to extract information from its data is, in one way or another, using big data tools or platforms.
|
||||
|
||||
One of the most interesting tools is Apache Beam, a framework that gives us the instruments to generate procedures to transform, process, aggregate, and manipulate data for our needs.
|
||||
|
||||
Let’s try and see how we can use it in a very simple scenario.
|
||||
|
||||
### The context
|
||||
|
||||
Imagine that we have a database with information about users visiting a website, with each record containing:
|
||||
|
||||
* country of the visiting user
|
||||
* duration of the visit
|
||||
* user name
|
||||
|
||||
|
||||
|
||||
We want to create some reports containing:
|
||||
|
||||
1. for each country, the **number of users** visiting the website
|
||||
2. for each country, the **average visit time**
|
||||
|
||||
|
||||
|
||||
We will use **Apache Beam**, a Google SDK (previously called Dataflow) representing a **programming model** aimed at simplifying the mechanism of large-scale data processing.
|
||||
|
||||
It’s been donated to the Apache Foundation, and called Beam because it’s able to process data in whatever form you need: **batches** and **streams** (b-eam). It gives you the chance to define **pipelines** to process real-time data ( **streams** ) and historical data ( **batches** ).
|
||||
|
||||
The pipeline definition is totally decoupled from the context in which you run it, so Beam lets you choose one of the supported runners:
|
||||
|
||||
* Beam model: local execution of your pipeline
|
||||
* Google Cloud Dataflow: dataflow as a service
|
||||
* Apache Flink
|
||||
* Apache Spark
|
||||
* Apache Gearpump
|
||||
* Apache Hadoop MapReduce
|
||||
* JStorm
|
||||
* IBM Streams
|
||||
|
||||
|
||||
|
||||
We will be running the first one, the local Beam model runner, which basically executes everything on your local machine.
|
||||
|
||||
### The programming model
|
||||
|
||||
Though this is not going to be a deep explanation of the DataFlow programming model, it’s necessary to understand what a pipeline is: a set of manipulations being made on an input data set that provides a new set of data. More precisely, a pipeline is made of **transforms** applied to **collections.**
|
||||
|
||||
Straight from the [Apache Beam website][1]:
|
||||
|
||||
> A pipeline encapsulates your entire data processing task, from start to finish. This includes reading input data, transforming that data, and writing output data.
|
||||
|
||||
The pipeline gets data injected from the outside and represents it as **collections** (formally named `PCollection`s), each of them being
|
||||
|
||||
> a potentially distributed, multi-element, data set
|
||||
|
||||
When one or more `Transform`s are applied to a `PCollection`, a brand new `PCollection` is generated (and for this reason the resulting `PCollection`s are **immutable** objects).
|
||||
|
||||
The first and last steps of a pipeline are, of course, the ones that read and write data to and from several kinds of storage; you can find a list [here][2].
|
||||
|
||||
### The application
|
||||
|
||||
We will have the data in a `csv` file, so the first thing we need to do is to read the contents of the file and provide a structured representation of all of the rows.
|
||||
|
||||
A generic row of the `csv` file will be like the following:
|
||||
```
|
||||
United States Of America, 0.5, John Doe
|
||||
|
||||
```
|
||||
|
||||
with the columns being the country, the visit time in seconds, and the user name, respectively.
|
||||
|
||||
Given the data we want to provide, let’s see what our pipeline will be doing and how.
|
||||
|
||||
### Read the input data set
|
||||
|
||||
The first step will be to read the input file.
|
||||
```
|
||||
with apache_beam.Pipeline(options=options) as p:
|
||||
|
||||
rows = (
|
||||
p |
|
||||
ReadFromText(input_filename) |
|
||||
apache_beam.ParDo(Split())
|
||||
)
|
||||
|
||||
```
|
||||
|
||||
In the above context, `p` is an instance of `apache_beam.Pipeline` and the first thing that we do is to apply a built-in transform, `apache_beam.io.textio.ReadFromText` that will load the contents of the file into a `PCollection`. After this, we apply a specific logic, `Split`, to process every row in the input file and provide a more convenient representation (a dictionary, specifically).
|
||||
|
||||
Here’s the `Split` function:
|
||||
```
|
||||
class Split(apache_beam.DoFn):
|
||||
|
||||
def process(self, element):
|
||||
country, duration, user = element.split(",")
|
||||
|
||||
return [{
|
||||
'country': country,
|
||||
'duration': float(duration),
|
||||
'user': user
|
||||
}]
|
||||
|
||||
```
|
||||
|
||||
The `ParDo` transform is a core one, and, as per official Apache Beam documentation:
|
||||
|
||||
`ParDo` is useful for a variety of common data processing operations, including:
|
||||
|
||||
* **Filtering a data set.** You can use `ParDo` to consider each element in a `PCollection` and either output that element to a new collection or discard it.
|
||||
* **Formatting or type-converting each element in a data set.** If your input `PCollection` contains elements that are of a different type or format than you want, you can use `ParDo` to perform a conversion on each element and output the result to a new `PCollection`.
|
||||
  * **Extracting parts of each element in a data set.** If you have a `PCollection` of records with multiple fields, for example, you can use a `ParDo` to parse out just the fields you want to consider into a new `PCollection`.
|
||||
* **Performing computations on each element in a data set.** You can use `ParDo` to perform simple or complex computations on every element, or certain elements, of a `PCollection` and output the results as a new `PCollection`.
|
||||
|
||||
|
||||
|
||||
You can read more about this [here][3].
|
||||
|
||||
### Grouping relevant information under proper keys
|
||||
|
||||
At this point, we have a list of valid rows, but we need to reorganize the information under keys that are the countries referenced by such rows. For example, if we have three rows like the following:
|
||||
|
||||
> Spain (ES), 2.2, John Doe
> Spain (ES), 2.9, John Wayne
> United Kingdom (UK), 4.2, Frank Sinatra
|
||||
|
||||
we need to rearrange the information like this:
|
||||
```
|
||||
{
|
||||
"Spain (ES)": [2.2, 2.9],
|
||||
"United Kingdom (UK)": [4.2]
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
If we do this, we have all the information in good shape to make all the calculations we need.
|
||||
|
||||
Here we go:
|
||||
```
|
||||
timings = (
|
||||
rows |
|
||||
apache_beam.ParDo(CollectTimings()) |
|
||||
"Grouping timings" >> apache_beam.GroupByKey() |
|
||||
"Calculating average" >> apache_beam.CombineValues(
|
||||
apache_beam.combiners.MeanCombineFn()
|
||||
)
|
||||
)
|
||||
|
||||
users = (
|
||||
rows |
|
||||
apache_beam.ParDo(CollectUsers()) |
|
||||
"Grouping users" >> apache_beam.GroupByKey() |
|
||||
"Counting users" >> apache_beam.CombineValues(
|
||||
apache_beam.combiners.CountCombineFn()
|
||||
)
|
||||
)
|
||||
|
||||
```
|
||||
|
||||
The classes `CollectTimings` and `CollectUsers` basically filter the rows that are of interest for our goal. They also rearrange each of them into the right form, which is something like:
|
||||
|
||||
> (“Spain (ES)”, 2.2)
|
||||
|
||||
At this point, we are able to use the `GroupByKey` transform, which will create a single record that groups all of the info sharing the same key:
|
||||
|
||||
> (“Spain (ES)”, (2.2, 2.9))
|
||||
|
||||
Note: the key is always the first element of the tuple.
|
||||
|
||||
The very last missing bit of the logic is the one that processes the values associated with each key. The built-in transform is `apache_beam.CombineValues`, which is pretty much self-explanatory.
|
||||
|
||||
The logic applied is `apache_beam.combiners.MeanCombineFn` and `apache_beam.combiners.CountCombineFn` respectively: the former calculates the arithmetic mean, the latter counts the elements of a set.
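In plain Python (hypothetical grouped data, not the Beam API), the two combiners do roughly this per key:

```python
from statistics import mean

# What CombineValues applies per key, sketched in plain Python
# on hypothetical grouped data:
timings = {"Spain (ES)": [2.2, 2.9], "United Kingdom (UK)": [4.2]}
users = {"Spain (ES)": ["John Doe", "John Wayne"],
         "United Kingdom (UK)": ["Frank Sinatra"]}

avg_timings = {k: mean(v) for k, v in timings.items()}  # MeanCombineFn: arithmetic mean
user_counts = {k: len(v) for k, v in users.items()}     # CountCombineFn: element count
```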
|
||||
|
||||
For the sake of completeness, here is the definition of the two classes `CollectTimings` and `CollectUsers`:
|
||||
```
|
||||
class CollectTimings(apache_beam.DoFn):
|
||||
|
||||
def process(self, element):
|
||||
"""
|
||||
Returns a list of tuples containing country and duration
|
||||
"""
|
||||
|
||||
result = [
|
||||
(element['country'], element['duration'])
|
||||
]
|
||||
return result
|
||||
|
||||
|
||||
class CollectUsers(apache_beam.DoFn):
|
||||
|
||||
def process(self, element):
|
||||
"""
|
||||
Returns a list of tuples containing country and user name
|
||||
"""
|
||||
result = [
|
||||
(element['country'], element['user'])
|
||||
]
|
||||
return result
|
||||
|
||||
```
|
||||
|
||||
Note: applying multiple transforms to the same `PCollection` generates multiple brand new collections. This is called **collection branching**. It’s very well represented here:
|
||||
|
||||
Source: <https://beam.apache.org/images/design-your-pipeline-multiple-pcollections.png>
|
||||
|
||||
Basically, now we have two sets of information — the average visit time for each country and the number of users for each country. What we're missing is a single structure containing all of the information we want.
|
||||
|
||||
Also, having branched the pipeline, we need to recompose the data. We can do this by using `CoGroupByKey`, which is essentially a **join** made on two or more collections that have the same keys.
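A plain-Python sketch of the shape such a join produces (hypothetical values, not the Beam API):

```python
# CoGroupByKey emits, for every key, the tuple of value lists
# coming from each of the joined collections.
avg_times = {"Spain (ES)": 2.55, "United Kingdom (UK)": 4.2}
counts = {"Spain (ES)": 2, "United Kingdom (UK)": 1}

joined = {
    country: ([avg_times[country]], [counts[country]])
    for country in avg_times.keys() & counts.keys()
}
# joined["Spain (ES)"] is ([2.55], [2])
```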
|
||||
|
||||
Of the last two transforms, one formats the info into `csv` entries, while the other writes them to a file.
|
||||
|
||||
After this, the resulting `output.txt` file will contain rows like this one:
|
||||
|
||||
`Italy (IT),36,2.23611111111`
|
||||
|
||||
meaning that 36 people visited the website from Italy, spending, on average, 2.23 seconds on the website.
|
||||
|
||||
### The input data
|
||||
|
||||
The data used for this simulation was procedurally generated: 10,000 rows, with a maximum of 200 different users, each spending between 1 and 5 seconds on the website. This was needed to get a rough estimate of the resulting values. A new article about **pipeline testing** will probably follow.
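A sketch of how such a dataset could be generated (the country list, user names, and file name below are invented for illustration):

```python
import csv
import random

# Generate 10,000 hypothetical rows: country, visit time (1-5 s), user name.
countries = ["Italy (IT)", "Spain (ES)", "United Kingdom (UK)"]
users = ["user_{}".format(i) for i in range(200)]  # at most 200 distinct users

with open("dataset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for _ in range(10000):
        writer.writerow([
            random.choice(countries),
            round(random.uniform(1.0, 5.0), 2),
            random.choice(users),
        ])
```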
|
||||
|
||||
### GitHub repository
|
||||
|
||||
The GitHub repository for this article is [here][4].
|
||||
|
||||
The README.md file contains everything needed to try it locally.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.codementor.io/brunoripa/apache-beam-a-python-example-gapr8smod
|
||||
|
||||
作者:[Bruno Ripa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.codementor.io/brunoripa
|
||||
[1]:https://href.li/?https://beam.apache.org
|
||||
[2]:https://href.li/?https://beam.apache.org/documentation/programming-guide/#pipeline-io
|
||||
[3]:https://beam.apache.org/documentation/programming-guide/#pardo
|
||||
[4]:https://github.com/brunoripa/beam-example
|
@ -0,0 +1,76 @@
|
||||
Become a Hollywood movie hacker with these three command line tools
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)
|
||||
|
||||
If you ever spent time growing up watching spy thrillers, action flicks, or crime movies, you developed a clear picture in your mind of what a hacker's computer screen looked like. Rows upon rows of rapidly moving code, streams of grouped hexadecimal numbers flying past like [raining code][1] in The Matrix.
|
||||
|
||||
Perhaps there's a world map with flashing points of light and a few rapidly updating charts thrown in there for good measure. And probably a 3D rotating geometric shape, because why not? If possible, this is all shown on a ridiculous number of monitors in an ergonomically uncomfortable configuration. I think Swordfish sported seven.
|
||||
|
||||
Of course, those of us who pursued technical careers quickly realized that this was all utter nonsense. While many of us have dual monitors (or more), a dashboard of blinky, flashing data is usually pretty antithetical to focusing on work. Writing code, managing projects, and administering systems is not the same thing as day trading. Most of the situations we encounter require a great deal of thinking about the problem we're trying to solve, a good bit of communicating with stakeholders, some researching and organizing information, and very, very little [rapid-fire typing][7].
|
||||
|
||||
That doesn't mean that we sometimes don't feel like we want to be inside of one of those movies. Or maybe, we're just trying to look like we're "being productive."
|
||||
|
||||
**Side note: Of course I mean this article in jest.** If you're actually being evaluated on how busy you look, whether that's at your desk or in meetings, you've got a huge cultural problem at your workplace that needs to be addressed. A culture of manufactured busyness is a toxic culture and one that's almost certainly helping neither the company nor its employees.
|
||||
|
||||
That said, let's have some fun and fill our screens with some panels of good old-fashioned meaningless data and code snippets. (Well, the data might have some meaning, but not without context.) While there are plenty of fancy GUIs for this (consider checking out [Hacker Typer][8] or [GEEKtyper.com][9] for a web-based version), why not just use your standard Linux terminal? For a more old-school look, consider using [Cool Retro Term][10], which is indeed what it sounds like: A cool retro terminal. I'll use Cool Retro Term for the screenshots below because it does indeed look 100% cooler.
|
||||
|
||||
### Genact
|
||||
|
||||
The first tool we'll look at is Genact. Genact simply plays back a sequence of your choosing, slowly and indefinitely, letting your code “compile” while you go out for a coffee break. The sequence it plays is up to you, but included by default are a cryptocurrency mining simulator, Composer PHP dependency manager, kernel compiler, downloader, memory dump, and more. My favorite, though, is the setting which displays SimCity loading messages. So as long as no one checks too closely, you can spend all afternoon waiting on your computer to finish reticulating splines.
|
||||
|
||||
Genact has [releases][11] available for Linux, OS X, and Windows, and the Rust [source code][12] is available on GitHub under an [MIT license][13].
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/genact.gif)
|
||||
|
||||
### Hollywood
|
||||
|
||||
Hollywood takes a more straightforward approach. It essentially creates a random number and configuration of split screens in your terminal, launches busy-looking applications like htop, directory trees, and source code files, and switches them out every few seconds. It's put together as a shell script, so it's fairly straightforward to modify as you wish.
|
||||
|
||||
The [source code][14] for Hollywood can be found on GitHub under an [Apache 2.0][15] license.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/hollywood.gif)
|
||||
|
||||
### Blessed-contrib
|
||||
|
||||
My personal favorite isn't actually an application designed for this purpose. Instead, it's the demo file for a Node.js-based terminal dashboard building library called Blessed-contrib. Unlike the other two, I actually have used Blessed-contrib's library for doing something that resembles actual work, as opposed to pretend-work, as it is a quite helpful library and set of widgets for displaying information at the command line. But it's also easy to fill with dummy data to fulfill your dream of simulating the computer from WarGames.
|
||||
|
||||
The [source code][16] for Blessed-contrib can be found on GitHub under an [MIT license][17].
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/blessed.gif)
|
||||
|
||||
Of course, while these tools make it easy, there are plenty of ways to fill up your screen with nonsense. One of the most common tools you'll see in movies is Nmap, an open source security scanner. In fact, it is so overused as the tool to demonstrate on-screen hacking in Hollywood that the makers have created a page listing some of the movies it has [appeared in][18], from The Matrix Reloaded to The Bourne Ultimatum, The Girl with the Dragon Tattoo, and even Die Hard 4.
|
||||
|
||||
You can create your own combination, of course, using a terminal multiplexer like screen or tmux to fire up whatever selection of data-spitting applications you wish.
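For example, a hypothetical tmux layout might look like the following (the pane commands are only suggestions; any data-spitting programs will do):

```
# One busy-looking layout: htop, a recursive directory listing, and a log tail
tmux new-session -d -s busy 'htop'
tmux split-window -h 'watch -n1 "ls -lR /usr"'
tmux split-window -v 'tail -f /var/log/syslog'
tmux attach -t busy
```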
|
||||
|
||||
What's your go-to screen for looking busy?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/command-line-tools-productivity
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jason-baker
|
||||
[1]:http://tvtropes.org/pmwiki/pmwiki.php/Main/MatrixRainingCode
|
||||
[2]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[3]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[4]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[5]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[6]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
|
||||
[7]:http://tvtropes.org/pmwiki/pmwiki.php/Main/RapidFireTyping
|
||||
[8]:https://hackertyper.net/
|
||||
[9]:http://geektyper.com
|
||||
[10]:https://github.com/Swordfish90/cool-retro-term
|
||||
[11]:https://github.com/svenstaro/genact/releases
|
||||
[12]:https://github.com/svenstaro/genact
|
||||
[13]:https://github.com/svenstaro/genact/blob/master/LICENSE
|
||||
[14]:https://github.com/dustinkirkland/hollywood
|
||||
[15]:http://www.apache.org/licenses/LICENSE-2.0
|
||||
[16]:https://github.com/yaronn/blessed-contrib
|
||||
[17]:http://opensource.org/licenses/MIT
|
||||
[18]:https://nmap.org/movies/
|
118
sources/tech/20180208 How to Create a Sudo User on CentOS 7.md
Normal file
@ -0,0 +1,118 @@
|
||||
How to Create a Sudo User on CentOS 7
|
||||
======
|
||||
![How to create a sudo user on CentOS 7][1]
|
||||
|
||||
We’ll guide you through how to create a sudo user on CentOS 7. Sudo is a Linux command line program that allows you to execute commands as the superuser or another system user. The configuration file offers detailed access permissions, including enabling commands only from the invoking terminal; requiring a password per user or group; and requiring re-entry of a password every time, or never requiring a password at all, for a particular command. It can also be configured to permit passing arguments or multiple commands. In this tutorial we will show you how to create a sudo user on CentOS 7.
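For instance, hypothetical `/etc/sudoers` entries illustrating some of these options might look like the following (the user and command names here are made up):

```
## Hypothetical examples -- always edit this file with visudo
# Let user "alice" run only systemctl, without entering a password:
alice ALL=(ALL) NOPASSWD: /usr/bin/systemctl
# Make members of group "admins" re-enter their password every time:
Defaults:%admins timestamp_timeout=0
```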
|
||||
|
||||
### Steps to Create a New Sudo User on CentOS 7
|
||||
|
||||
#### 1. Connect via SSH
|
||||
|
||||
First of all, [connect to your server via SSH][2]. Once you are logged in, you need to add a new system user.
|
||||
|
||||
#### 2. Add New User in CentOS
|
||||
|
||||
You can add a new system user using the following command:
|
||||
```
|
||||
# adduser newuser
|
||||
|
||||
```
|
||||
|
||||
You need to replace `newuser` with the name of the user you want to add. Also, you need to set up a password for the newly added user.
|
||||
|
||||
#### 3. Create a Strong Password
|
||||
|
||||
To set up a password you can use the following command:
|
||||
```
|
||||
# passwd newuser
|
||||
|
||||
```
|
||||
|
||||
Make sure you are using a [strong password][3], otherwise the password will fail against the dictionary check. You will be asked to enter the password again and once you enter it you will be notified that the authentication tokens are updated successfully:
|
||||
```
|
||||
# passwd newuser
|
||||
Changing password for user newuser.
|
||||
New password:
|
||||
Retype new password:
|
||||
passwd: all authentication tokens updated successfully.
|
||||
|
||||
```
|
||||
|
||||
#### 4. Add User to the Wheel Group in CentOS
|
||||
|
||||
The wheel group is a special user group that allows all members in the group to run all commands. Therefore, you need to add the new user to this group so it can run commands as superuser. You can do that by using the following command:
|
||||
```
|
||||
# usermod -aG wheel newuser
|
||||
|
||||
```
|
||||
|
||||
Again, make sure you are using the name of the actual user instead of `newuser`.
|
||||
Now, use `visudo` to open and edit the `/etc/sudoers` file. Make sure that the line that starts with `%wheel` is not commented. It should look exactly like this:
|
||||
```
|
||||
### Allows people in group wheel to run all commands
|
||||
%wheel ALL=(ALL) ALL
|
||||
|
||||
```
|
||||
|
||||
Now that your new user is set up you can switch to that user and test if everything is OK.
|
||||
|
||||
#### 5. Switch to the sudo User
|
||||
|
||||
To switch to the new user, run the following command:
|
||||
```
|
||||
# su - newuser
|
||||
|
||||
```
|
||||
|
||||
Now run a command that usually doesn’t work for regular users like the one below:
|
||||
```
|
||||
$ ls -la /root/
|
||||
|
||||
```
|
||||
|
||||
You will get the following error message:
|
||||
```
|
||||
ls: cannot open directory /root/: Permission denied
|
||||
|
||||
```
|
||||
|
||||
Now try to run the same command using `sudo`:
|
||||
```
|
||||
$ sudo ls -la /root/
|
||||
|
||||
```
|
||||
|
||||
You will need to enter the password for the new user to proceed. If everything is OK, the command will list all the content in the `/root` directory. Another way to test this is to run the following command:
|
||||
```
|
||||
$ sudo whoami
|
||||
|
||||
```
|
||||
|
||||
The output of the command should be similar to the one below:
|
||||
```
|
||||
$ sudo whoami
|
||||
root
|
||||
|
||||
```
|
||||
|
||||
Congratulations, now you have a sudo user which you can use to manage your CentOS 7 operating system.
|
||||
|
||||
Of course, you don’t have to create a sudo user on CentOS 7 if you use one of our [CentOS 7 Hosting][4] services, in which case you can simply ask our expert Linux admins to create a sudo user on CentOS 7 for you. They are available 24×7 and will take care of your request immediately.
|
||||
|
||||
**PS**. If you liked this post on **how to create a sudo user on CentOS 7** , please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.rosehosting.com/blog/how-to-create-a-sudo-user-on-centos-7/
|
||||
|
||||
作者:[RoseHosting][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.rosehosting.com
|
||||
[1]:https://www.rosehosting.com/blog/wp-content/uploads/2018/02/How-to-create-a-sudo-user-on-CentOS-7.jpg
|
||||
[2]:https://www.rosehosting.com/blog/connect-to-your-linux-vps-via-ssh/
|
||||
[3]:https://www.rosehosting.com/blog/generate-password-linux-command-line/
|
||||
[4]:https://www.rosehosting.com/centos-vps.html
|
@ -0,0 +1,271 @@
|
||||
How to Install and Configure XWiki on Ubuntu 16.04
|
||||
======
|
||||
|
||||
XWiki is a free and open source wiki software written in Java that runs on a servlet container such as Tomcat or JBoss. XWiki uses a database such as MySQL or PostgreSQL to store its information. XWiki allows us to store structured data and execute server-side scripts within the wiki interface. You can host multiple blogs and manage or view your files and folders using XWiki.
|
||||
|
||||
XWiki comes with lots of features; some of them are listed below:
|
||||
|
||||
* Supports version control and ACL.
|
||||
* Allows you to search the full wiki using wildcards.
|
||||
* Easily export wiki pages to PDF, ODT, RTF, XML and HTML.
|
||||
* Content organization and content import.
|
||||
* Page editing using WYSIWYG editor.
|
||||
|
||||
|
||||
|
||||
### Requirements
|
||||
|
||||
* A server running Ubuntu 16.04.
|
||||
* A non-root user with sudo privileges.
|
||||
|
||||
|
||||
|
||||
Before starting, you will need to update your Ubuntu system's packages to the latest versions. You can do this using the following commands:
|
||||
|
||||
```
|
||||
sudo apt-get update -y
|
||||
sudo apt-get upgrade -y
|
||||
```
|
||||
|
||||
Once the repository is updated, restart the system to apply all the updates.
|
||||
|
||||
### Install Java
|
||||
|
||||
XWiki is a Java-based application, so you will need to install Java 8 first. By default, Java 8 is not available in the Ubuntu repository. You can install Java 8 by adding the webupd8team PPA repository to your system.
|
||||
|
||||
First, add the PPA by running the following command:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:webupd8team/java
|
||||
```
|
||||
|
||||
Next, update the repository with the following command:
|
||||
|
||||
```
|
||||
sudo apt-get update -y
|
||||
```
|
||||
|
||||
Once the repository is up to date, you can install Java 8 by running the following command:
|
||||
|
||||
```
|
||||
sudo apt-get install oracle-java8-installer -y
|
||||
```
|
||||
|
||||
After installing Java, you can check the version of Java with the following command:
|
||||
|
||||
```
|
||||
java -version
|
||||
```
|
||||
|
||||
You should see the following output:
|
||||
```
|
||||
Java version "1.8.0_91"
|
||||
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
|
||||
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
|
||||
|
||||
```
|
||||
|
||||
### Download and Install Xwiki
|
||||
|
||||
Next, you will need to download the setup file provided by XWiki. You can download it using the following command:
|
||||
|
||||
```
|
||||
wget http://download.forge.ow2.org/xwiki/xwiki-enterprise-installer-generic-8.1-standard.jar
|
||||
```
|
||||
|
||||
Once the download is complete, you can run the downloaded installer using the `java` command as shown below:
|
||||
|
||||
```
|
||||
sudo java -jar xwiki-enterprise-installer-generic-8.1-standard.jar
|
||||
```
|
||||
|
||||
You should see the following output:
|
||||
```
|
||||
28 Jan, 2018 6:57:37 PM INFO: Logging initialized at level 'INFO'
|
||||
28 Jan, 2018 6:57:37 PM INFO: Commandline arguments:
|
||||
28 Jan, 2018 6:57:37 PM INFO: Detected platform: ubuntu_linux,version=3.19.0-25-generic,arch=x64,symbolicName=null,javaVersion=1.7.0_151
|
||||
28 Jan, 2018 6:57:37 PM WARNING: Failed to determine hostname and IP address
|
||||
Welcome to the installation of XWiki Enterprise 8.1!
|
||||
The homepage is at: http://xwiki.org/
|
||||
|
||||
Press 1 to continue, 2 to quit, 3 to redisplay
|
||||
|
||||
```
|
||||
|
||||
Now, press **`1`** to continue the installation; you should see the following output:
|
||||
```
|
||||
Please read the following information:
|
||||
|
||||
XWiki Enterprise - Readme
|
||||
|
||||
|
||||
XWiki Enterprise Overview
|
||||
XWiki Enterprise is a second generation Wiki engine, features professional features like
|
||||
Wiki, Blog, Comments, User Rights, LDAP Authentication, PDF Export, and a lot more.
|
||||
XWiki Enterprise also includes an advanced form and scripting engine which makes it an ideal
|
||||
development environment for constructing data-based intranet applications. It has powerful
|
||||
extensibility features, supports scripting, extensions and is based on a highly modular
|
||||
architecture. The scripting engine allows to access a powerful API for accessing the XWiki
|
||||
repository in read and write mode.
|
||||
XWiki Enterprise is used by major companies around the world and has strong
|
||||
Support for a professional usage of XWiki.
|
||||
Pointers
|
||||
Here are some pointers to get you started with XWiki once you have finished installing it:
|
||||
|
||||
The documentation can be found on the XWiki.org web site
|
||||
If you notice any issue please file a an issue in our issue tracker
|
||||
If you wish to talk to XWiki users or developers please use our
|
||||
Mailing lists & Forum
|
||||
You can also access XWiki's
|
||||
source code
|
||||
If you need commercial support please visit the
|
||||
Support page
|
||||
|
||||
|
||||
|
||||
Press 1 to continue, 2 to quit, 3 to redisplay
|
||||
|
||||
```
|
||||
|
||||
Now, press **`1`** to continue the installation; you should see the following output:
|
||||
```
|
||||
See the NOTICE file distributed with this work for additional
|
||||
information regarding copyright ownership.
|
||||
This is free software; you can redistribute it and/or modify it
|
||||
under the terms of the GNU Lesser General Public License as
|
||||
published by the Free Software Foundation; either version 2.1 of
|
||||
the License, or (at your option) any later version.
|
||||
This software is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
Lesser General Public License for more details.
|
||||
You should have received a copy of the GNU Lesser General Public
|
||||
License along with this software; if not, write to the Free
|
||||
Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
|
||||
02110-1301 USA, or see the FSF site: http://www.fsf.org.
|
||||
|
||||
Press 1 to accept, 2 to reject, 3 to redisplay
|
||||
|
||||
```
|
||||
|
||||
Now, press **`1`** to accept the license agreement; you should see the following output:
|
||||
```
|
||||
Select the installation path: [/usr/local/XWiki Enterprise 8.1]
|
||||
|
||||
Press 1 to continue, 2 to quit, 3 to redisplay
|
||||
|
||||
```
|
||||
|
||||
Now, press Enter and then press **1** to accept the default installation path; you should see the following output:
|
||||
```
|
||||
[x] Pack 'Core' required
|
||||
????????????????????????????????????????????????????????????????????????????????
|
||||
[x] Include optional pack 'Default Wiki'
|
||||
????????????????????????????????????????????????????????????????????????????????
|
||||
Enter Y for Yes, N for No:
|
||||
Y
|
||||
Press 1 to continue, 2 to quit, 3 to redisplay
|
||||
|
||||
```
|
||||
|
||||
Now, press **`Y`** and then press **`1`** to continue the installation; you should see the following output:
|
||||
```
|
||||
[ Starting to unpack ]
|
||||
[ Processing package: Core (1/2) ]
|
||||
[ Processing package: Default Wiki (2/2) ]
|
||||
[ Unpacking finished ]
|
||||
|
||||
```
|
||||
|
||||
Now, you will be asked to create shortcuts for the user; you can press **`Y`** to add them. Next, you will be asked to generate an automatic installation script; just press Enter to select the default value. Once the installation is finished, you should see the following output:
|
||||
```
|
||||
????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
|
||||
Generate an automatic installation script
|
||||
????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
|
||||
Enter Y for Yes, N for No:
|
||||
Y
|
||||
Select the installation script (path must be absolute)[/usr/local/XWiki Enterprise 8.1/auto-install.xml]
|
||||
|
||||
Installation was successful
|
||||
application installed on /usr/local/XWiki Enterprise 8.1
|
||||
[ Writing the uninstaller data ... ]
|
||||
[ Console installation done ]
|
||||
|
||||
```
|
||||
|
||||
Now that XWiki is installed on your system, it's time to run the XWiki startup script as shown below:
|
||||
|
||||
```
|
||||
cd "/usr/local/XWiki Enterprise 8.1"
|
||||
sudo bash start_xwiki.sh
|
||||
```
|
||||
|
||||
Please wait some time for the processes to start. You should see messages in the terminal as shown below:
|
||||
```
|
||||
start_xwiki.sh: 79: start_xwiki.sh:
|
||||
Starting Jetty on port 8080, please wait...
|
||||
2018-01-28 19:12:41.842:INFO::main: Logging initialized @1266ms
|
||||
2018-01-28 19:12:42.905:INFO:oejs.Server:main: jetty-9.2.13.v20150730
|
||||
2018-01-28 19:12:42.956:INFO:oejs.AbstractNCSARequestLog:main: Opened /usr/local/XWiki Enterprise 8.1/data/logs/2018_01_28.request.log
|
||||
2018-01-28 19:12:42.965:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:/usr/local/XWiki%20Enterprise%208.1/jetty/contexts/] at interval 0
|
||||
2018-01-28 19:13:31,485 [main] INFO o.x.s.s.i.EmbeddedSolrInstance - Starting embedded Solr server...
|
||||
2018-01-28 19:13:31,507 [main] INFO o.x.s.s.i.EmbeddedSolrInstance - Using Solr home directory: [data/solr]
|
||||
2018-01-28 19:13:43,371 [main] INFO o.x.s.s.i.EmbeddedSolrInstance - Started embedded Solr server.
|
||||
2018-01-28 19:13:46.556:INFO:oejsh.ContextHandler:main: Started [email protected]{/xwiki,file:/usr/local/XWiki%20Enterprise%208.1/webapps/xwiki/,AVAILABLE}{/xwiki}
|
||||
2018-01-28 19:13:46.697:INFO:oejsh.ContextHandler:main: Started [email protected]{/,file:/usr/local/XWiki%20Enterprise%208.1/webapps/root/,AVAILABLE}{/root}
|
||||
2018-01-28 19:13:46.776:INFO:oejs.ServerConnector:main: Started [email protected]{HTTP/1.1}{0.0.0.0:8080}
|
||||
|
||||
```
|
||||
|
||||
XWiki is now up and running; it's time to access the XWiki web interface.
|
||||
|
||||
### Access XWiki
|
||||
|
||||
XWiki runs on port **8080**, so you will need to allow port 8080 through the firewall. First, enable the UFW firewall with the following command:
|
||||
|
||||
```
|
||||
sudo ufw enable
|
||||
```
|
||||
|
||||
Next, allow port **8080** through the UFW firewall with the following command:
|
||||
|
||||
```
|
||||
sudo ufw allow 8080/tcp
|
||||
```
|
||||
|
||||
Next, reload the firewall rules to apply all the changes by running the following command:
|
||||
|
||||
```
|
||||
sudo ufw reload
|
||||
```
|
||||
|
||||
You can get the status of the UFW firewall with the following command:
|
||||
|
||||
```
|
||||
sudo ufw status
|
||||
```
|
||||
|
||||
Now, open your web browser and type the URL **<http://your-server-ip:8080>**; you will be redirected to the XWiki home page as shown below:
|
||||
|
||||
[![XWiki Dashboard][1]][2]
|
||||
|
||||
You can stop the XWiki server at any time by pressing **`Ctrl + C`** in the terminal.
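If you want XWiki to keep running after you close your terminal, one option (a sketch; `start_xwiki.sh` is the script shipped with this install, and the log file name is arbitrary) is to start it in the background with `nohup`:

```
cd "/usr/local/XWiki Enterprise 8.1"
nohup sudo bash start_xwiki.sh > xwiki.log 2>&1 &
```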
|
||||
|
||||
### Conclusion
|
||||
|
||||
Congratulations! You have successfully installed and configured XWiki on an Ubuntu 16.04 server. I hope you can now easily host your own wiki site using XWiki on Ubuntu 16.04. For more information, you can check the official XWiki documentation page at <https://www.xwiki.org/xwiki/bin/view/Documentation/>. Feel free to comment if you have any questions.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-and-configure-xwiki-on-ubuntu-1604/

作者:[Hitesh Jethva][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/images/how_to_install_and_configure_xwiki_on_ubuntu_1604/Screenshot-of-xwiki-dashboard.png
[2]:https://www.howtoforge.com/images/how_to_install_and_configure_xwiki_on_ubuntu_1604/big/Screenshot-of-xwiki-dashboard.png
@ -0,0 +1,332 @@
How to start writing macros in LibreOffice Basic
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code1.png?itok=aP7t4ntl)

I have long promised to write about the scripting language [Basic][1] and creating macros in LibreOffice. This article is devoted to the types of data used in LibreOffice Basic, and to a greater extent, descriptions of variables and the rules for using them. I will try to provide enough information for advanced as well as novice users.

(And, I would like to thank everyone who commented on and offered recommendations on the Russian article, especially those who helped answer difficult questions.)

### Variable naming conventions

Variable names cannot contain more than 255 characters. They should start with either upper- or lower-case letters of the Latin alphabet, and they can include underscores ("_") and numerals. Other punctuation or characters from non-Latin alphabets can cause a syntax error or a BASIC runtime error if names are not put within square brackets.

Here are some examples of correct variable names:

```
MyNumber=5
MyNumber5=15
MyNumber_5=20
_MyNumber=96
[My Number]=20.5
[5MyNumber]=12
[Number,Mine]=12
[DéjàVu]="It seems that I have seen it!"
[Моя переменная]="The first one has gone!"
[Мой % от зделки]=0.0001
```

Note: In the examples that contain square brackets, if you remove the brackets, the macros will show a window with an error. As you can see, you can use localized variable names. Whether it makes sense to do so is up to you.

### Declaring variables

Strictly speaking, it is not necessary to declare variables in LibreOffice Basic (except for arrays). If you write a macro of just a couple of lines to work with small documents, you don't need to declare variables, as the variable will automatically be declared as the variant type. For longer macros or those that will work in large documents, it is strongly recommended that you declare variables. First, it increases the readability of the text. Second, it allows you to keep track of variables, which can greatly facilitate the search for errors. Third, the variant type is very resource-intensive, and considerable time is needed for the hidden conversion. In addition, the variant type does not choose the optimal variable type for data, which increases the workload of computer resources.

Basic can automatically assign a variable type by its prefix (the first letter in the name) to simplify the work if you prefer to use the Hungarian notation. For this, the statement **DefXXX** is used; **XXX** is the letter type designation. A statement with a letter will work in the module, and it must be specified before subprograms and functions appear. There are 11 types:

```
DefBool - for boolean variables;
DefInt - for integer variables of type Integer;
DefLng - for integer variables of type Long Integer;
DefSng - for variables with a single-precision floating point;
DefDbl - for variables with double-precision floating-point type Double;
DefCur - for variables with a fixed point of type Currency;
DefStr - for string variables;
DefDate - for date and time variables;
DefVar - for variables of Variant type;
DefObj - for object variables;
DefErr - for object variables containing error information.
```

If you already have an idea of the types of variables in LibreOffice Basic, you probably noticed that there is no **Byte** type in this list, but there is a strange beast with the **Error** type. Unfortunately, you just need to remember this; I have not yet discovered why this is true. This method is convenient because the type is assigned to the variables automatically. But it does not allow you to find errors related to typos in variable names. In addition, it will not be possible to specify non-Latin letters; that is, variables whose names must be put in square brackets have to be declared explicitly.

To avoid typos when using declared variables explicitly, you can use the statement **OPTION EXPLICIT**. This statement should be the first line of code in the module. All other commands, except comments, should be placed after it. This statement tells the interpreter that all variables must be declared explicitly; otherwise, it produces an error. Naturally, this statement makes it meaningless to use the **Def** statements in the code.
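As a minimal sketch (the Sub and variable names here are invented for illustration), a module that starts with **Option Explicit** stops on a misspelled variable instead of silently creating a new Variant:

```
Option Explicit

Sub DemoExplicit
    Dim iCount As Integer
    iCount = 10
    ' Uncommenting the next line would raise
    ' "BASIC runtime error: Variable not defined" because of the typo:
    ' iCont = iCount + 1
    MsgBox iCount
End Sub
```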
A variable is declared using the statement **Dim**. You can declare several variables simultaneously, even of different types, if you separate their names with commas. To determine the type of a variable with an explicit declaration, you can use either a corresponding keyword or a type-declaration sign after the name. If a type-declaration sign or a keyword is not used after the variable, then the **Variant** type is automatically assigned to it. For example:

```
Dim iMyVar 'variable of Variant type
Dim iMyVar1 As Integer, iMyVar2 As Integer 'in both cases Integer type
Dim iMyVar3, iMyVar4 As Integer 'in this case the first variable
    'is Variant, and the second is Integer
```

### Variable types

LibreOffice Basic supports seven classes of variables:

* Logical variables containing one of the values: **TRUE** or **FALSE**
* Numeric variables containing numeric values. They can be integer, integer-positive, floating-point, and fixed-point
* String variables containing character strings
* Date variables can contain a date and/or time in the internal format
* Object variables can contain objects of different types and structures
* Arrays
* Abstract type **Variant**

#### Logical variables – Boolean

Variables of the **Boolean** type can contain only one of two values: **TRUE** or **FALSE**. In the numerical equivalent, the value FALSE corresponds to the number 0, and the value TRUE corresponds to **-1** (minus one). Any value other than zero passed to a variable of the Boolean type will be converted to **TRUE**; that is, converted to minus one. You can explicitly declare a variable in the following way:

```
Dim MyBoolVar As Boolean
```

There is no type-declaration sign for it. For an implicit declaration, you can use the **DefBool** statement. For example:

```
DefBool b 'variables beginning with b by default are the type Boolean
```

The initial value of the variable is set to **FALSE**. A Boolean variable requires one byte of memory.

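A short sketch (names invented here) of the numeric conversion just described:

```
Sub DemoBoolean
    Dim bFlag As Boolean
    bFlag = 5           ' any non-zero value is converted to TRUE
    MsgBox bFlag        ' shows True
    MsgBox CInt(bFlag)  ' the numeric equivalent of TRUE is -1
End Sub
```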
#### Integer variables

There are three types of integer variables: **Byte**, **Integer**, and **Long Integer**. These variables can only contain integers. When you transfer numbers with a fraction into such variables, they are rounded according to the rules of classical arithmetic (not always upward, as the help section states). The initial value for these variables is 0 (zero).

#### Byte

Variables of the **Byte** type can contain only integer-positive values in the range from 0 to 255. Do not confuse this type with the physical size of information in bytes. Although we can write down a hexadecimal number to a variable, the word "Byte" indicates only the dimensionality of the number. You can declare a variable of this type as follows:

```
Dim MyByteVar As Byte
```

There is no type-declaration sign for this type, and no corresponding **Def** statement. Because of its small dimension, this type is most convenient for a loop index whose values do not go beyond the range. A **Byte** variable requires one byte of memory.

#### Integer

Variables of the Integer type can contain integer values from -32768 to 32767. They are convenient for fast calculations in integers and are suitable for a loop index. **%** is a type-declaration sign. You can declare a variable of this type in the following ways:

```
Dim MyIntegerVar%
Dim MyIntegerVar As Integer
```

For an implicit declaration, you can use the **DefInt** statement. For example:

```
DefInt i 'variables starting with i by default have type Integer
```

An Integer variable requires two bytes of memory.

#### Long Integer

Variables of the Long Integer type can contain integer values from -2147483648 to 2147483647. Long Integer variables are convenient in integer calculations when the range of type Integer is insufficient for the implementation of the algorithm. **&** is a type-declaration sign. You can declare a variable of this type in the following ways:

```
Dim MyLongVar&
Dim MyLongVar As Long
```

For an implicit declaration, you can use the **DefLng** statement. For example:

```
DefLng l 'variables starting with l have Long by default
```

A Long Integer variable requires four bytes of memory.

#### Numbers with a fraction

All variables of these types can take positive or negative values of numbers with a fraction. The initial value for them is 0 (zero). As mentioned above, if a number with a fraction is assigned to a variable capable of containing only integers, LibreOffice Basic rounds the number according to the rules of classical arithmetic.

#### Single

Single variables can take positive or negative values in the range from 3.402823x10^38 to 1.401293x10^-38. Values of variables of this type are in single-precision floating-point format. In this format, only eight numeric characters are stored, and the rest is stored as a power of ten (the number order). In the Basic IDE debugger, you can see only 6 decimal places, but this is a blatant lie. Computations with variables of the Single type take longer than Integer variables, but they are faster than computations with variables of the Double type. A type-declaration sign is **!**. You can declare a variable of this type in the following ways:

```
Dim MySingleVar!
Dim MySingleVar As Single
```

For an implicit declaration, you can use the **DefSng** statement. For example:

```
DefSng f 'variables starting with f have the Single type by default
```

A Single variable requires four bytes of memory.

#### Double

Variables of the Double type can take positive or negative values in the range from 1.79769313486231598x10^308 to 1.0x10^-307. Why such a strange range? Most likely there are additional checks in the interpreter that lead to this situation. Values of variables of the Double type are in double-precision floating-point format and can have 15 decimal places. In the Basic IDE debugger, you can see only 14 decimal places, but this is also a blatant lie. Variables of the Double type are suitable for precise calculations. Calculations require more time than with the Single type. A type-declaration sign is **#**. You can declare a variable of this type in the following ways:

```
Dim MyDoubleVar#
Dim MyDoubleVar As Double
```

For an implicit declaration, you can use the **DefDbl** statement. For example:

```
DefDbl d 'variables beginning with d have the type Double by default
```

A variable of the Double type requires 8 bytes of memory.

#### Currency

Variables of the Currency type are displayed as numbers with a fixed point, with 15 digits in the integer part of the number and 4 digits in the fractional part. The range of values includes numbers from -922337203685477.6874 to +922337203685477.6874. Variables of the Currency type are intended for exact calculations of monetary values. A type-declaration sign is **@**. You can declare a variable of this type in the following ways:

```
Dim MyCurrencyVar@
Dim MyCurrencyVar As Currency
```

For an implicit declaration, you can use the **DefCur** statement. For example:

```
DefCur c 'variables beginning with c have the type Currency by default
```

A Currency variable requires 8 bytes of memory.

#### String

Variables of the String type can contain strings in which each character is stored as the corresponding Unicode value. They are used to work with textual information, and in addition to printed characters (symbols), they can also contain non-printable characters. I do not know the maximum length of a string. Mike Kaganski experimentally set the value to 2147483638 characters, after which LibreOffice crashes. This corresponds to almost 4 gigabytes of characters. A type-declaration sign is **$**. You can declare a variable of this type in the following ways:

```
Dim MyStringVar$
Dim MyStringVar As String
```

For an implicit declaration, you can use the **DefStr** statement. For example:

```
DefStr s 'variables starting with s have the String type by default
```

The initial value of these variables is an empty string (""). The memory required to store string variables depends on the number of characters in the variable.

#### Date

Variables of the Date type can contain only date and time values stored in the internal format. In fact, this internal format is the double-precision floating-point format (Double), where the integer part is the number of days, and the fractional part is the part of the day (that is, 0.00001157407 is one second). The value 0 is equal to 30.12.1899. The Basic interpreter automatically converts it to a readable version when outputting, but not when loading. You can use the DateSerial, DateValue, TimeSerial, or TimeValue functions to quickly convert to the internal format of the Date type. To extract a certain part from a variable in the Date format, you can use the Day, Month, Year, Hour, Minute, or Second functions. The internal format allows us to compare date and time values by calculating the difference between two numbers. There is no type-declaration sign for the Date type, so if you explicitly define it, you need to use the Date keyword.

```
Dim MyDateVar As Date
```

For an implicit declaration, you can use the **DefDate** statement. For example:

```
DefDate y 'variables starting with y have the Date type by default
```

A Date variable requires 8 bytes of memory.

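A brief sketch of the conversion and extraction functions mentioned above (the dates are arbitrary examples):

```
Sub DemoDate
    Dim dStart As Date, dEnd As Date
    dStart = DateSerial(2018, 2, 1)  ' build a Date from year, month, day
    dEnd = DateSerial(2018, 2, 15)
    MsgBox Year(dStart)              ' extract the year part: 2018
    MsgBox dEnd - dStart             ' the internal format makes the difference a number of days: 14
End Sub
```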
**Types of object variables**

Two variable types in LibreOffice Basic can be considered objects.

#### Objects

Variables of the Object type are variables that store objects. In general, an object is any isolated part of the program that has a structure, properties, and methods of access and data processing. For example, a document, a cell, a paragraph, and dialog boxes are objects. They have a name, size, properties, and methods. In turn, these objects also consist of objects, which in turn can also consist of objects. Such a "pyramid" of objects is often called an object model, and it allows us, when developing small objects, to combine them into larger ones. Through a larger object, we have access to smaller ones. This allows us to operate with our documents, to create and process them while abstracting from a specific document. There is no type-declaration sign for the Object type, so for an explicit definition, you need to use the Object keyword.

```
Dim MyObjectVar As Object
```

For an implicit declaration, you can use the **DefObj** statement. For example:

```
DefObj o 'variables beginning with o have the type Object by default
```

A variable of the Object type does not store an object itself but only a reference to it. The initial value for this type of variable is Null.

#### Structures

A structure is essentially an object. If you look in the Basic IDE debugger, most (but not all) have the Object type. Some do not; for example, the Error structure has the type Error. But roughly speaking, structures in LibreOffice Basic are simply variables grouped into one object variable, without special access methods. Another significant difference is that when declaring a variable of a structure type, we must specify the structure's name rather than Object. For example, if MyNewStructure is the name of a structure, the declaration of its variable will look like this:

```
Dim MyStructureVar As MyNewStructure
```

There are a lot of built-in structures, but the user can create personal ones. Structures can be convenient when we need to operate with sets of heterogeneous information that should be treated as a single whole. For example, to create a tPerson structure:

```
Type tPerson
    Name As String
    Age As Integer
    Weight As Double
End Type
```

The definition of the structure should go before the subroutines and functions that use it.

To fill a structure, you can use, for example, the built-in structure com.sun.star.beans.PropertyValue:

```
Dim oProp As New com.sun.star.beans.PropertyValue
oProp.Name = "Age" 'Set the Name
oProp.Value = "Amy Boyer" 'Set the Value
```

For a simpler filling of the structure, you can use the **With** operator:

```
Dim oProp As New com.sun.star.beans.PropertyValue
With oProp
    .Name = "Age" 'Set the Name
    .Value = "Amy Boyer" 'Set the Value
End With
```

The initial value is set for each variable in the structure individually and corresponds to that variable's type.

#### Variant

This is a virtual type of variable. The Variant type is automatically selected for the data to be operated on. The only problem is that the interpreter does not care about saving our resources, and it does not choose the most optimal variable types. For example, it does not know that 1 can be written in a Byte, and 100000 in a Long Integer, although it reproduces a type if the value is passed from another variable with a declared type. Also, the transformation itself is quite resource-intensive. Therefore, this type of variable is the slowest of all. If you need to declare this kind of variable, you can use the **Variant** keyword. But you can omit the type description altogether; the Variant type will be assigned automatically. There is no type-declaration sign for this type.

```
Dim MyVariantVar
Dim MyVariantVar As Variant
```

For an implicit declaration, you can use the **DefVar** statement. For example:

```
DefVar v 'variables starting with v have the Variant type by default
```

This type is assigned by default to all undeclared variables.

#### Arrays

Arrays are a special type of variable in the form of a data set, reminiscent of a mathematical matrix, except that the data can be of different types and one can access elements by index (element number). Of course, a one-dimensional array will be similar to a column or row, and a two-dimensional array will be like a table. There is one feature of arrays in LibreOffice Basic that distinguishes it from other programming languages. Since we have the abstract Variant type, the elements of an array do not need to be homogeneous. That is, if there is an array MyArray with three elements numbered from 0 to 2, and we write the name in the first element MyArray(0), the age in the second MyArray(1), and the weight in the third MyArray(2), we can have, respectively, the following value types: String for MyArray(0), Integer for MyArray(1), and Double for MyArray(2). In this case, the array will resemble a structure with the ability to access an element by its index. Array elements can also be homogeneous: other arrays, objects, structures, strings, or any other data type available in LibreOffice Basic.

Arrays must be declared before they are used. Although the index space can be in the range of type Integer—from -32768 to 32767—by default, the initial index is 0. You can declare an array in several ways:

| Declaration | Meaning |
| --- | --- |
| Dim MyArrayVar(5) as string | String array with 6 elements from 0 to 5 |
| Dim MyArrayVar$(5) | Same as the previous |
| Dim MyArrayVar(1 To 5) as string | String array with 5 elements from 1 to 5 |
| Dim MyArrayVar(5,5) as string | Two-dimensional string array with 36 elements with indexes at each level from 0 to 5 |
| Dim MyArrayVar$(-4 To 5, -4 To 5) | Two-dimensional string array with 100 elements with indexes at each level from -4 to 5 |
| Dim MyArrayVar() | Empty array of the Variant type |

You can change the default lower bound of an array (the index of its first element) using the **Option Base** statement, which must be specified before subprograms, functions, and user-defined structures. Option Base can take only one of two values, 0 or 1, which must follow immediately after the keyword. The action applies only to the current module.
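As an illustrative sketch (the array and Sub names are invented here), **Option Base 1** shifts the default lower bound of every array declared in the module:

```
Option Base 1

Sub DemoArray
    Dim sNames(3) As String   ' with Option Base 1: elements 1 to 3
    sNames(1) = "Amy"
    sNames(2) = "Bob"
    sNames(3) = "Kim"
    MsgBox LBound(sNames) & " to " & UBound(sNames)  ' shows "1 to 3"
End Sub
```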

### Learn more

If you are just starting out in programming, Wikipedia provides general information about the [array][2], structure, and many other topics.

For a more in-depth study of LibreOffice Basic, [Andrew Pitonyak's][3] website is a top resource, as is the [Basic Programmer's Guide][4]. You can also use the LibreOffice [online help][1]. Ready-made popular macros can be found in the [Macros][5] section of The Document Foundation's wiki, where you can also find additional links on the topic.

For more tips, or to ask questions, visit [Ask LibreOffice][6] and the [OpenOffice forum][7].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/variables-data-types-libreoffice-basic

作者:[Lera Goncharuk][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/tagezi
[1]:https://helponline.libreoffice.org/latest/en-US/text/sbasic/shared/main0601.html?DbPAR=BASIC
[2]:https://en.wikipedia.org/wiki/Array_data_structure
[3]:http://www.pitonyak.org/book/
[4]:https://wiki.documentfoundation.org/images/d/dd/BasicGuide_OOo3.2.0.odt
[5]:https://wiki.documentfoundation.org/Macros
[6]:https://ask.libreoffice.org/en/questions/scope:all/sort:activity-desc/tags:basic/page:1/
[7]:https://forum.openoffice.org/en/forum/viewforum.php?f=20&sid=74f5894a7d7942953cd99d978d54e75b
@ -0,0 +1,255 @@
A review of Virtual Labs virtualization solutions for MOOCs – WebLog Pro Olivier Berger
======

### 1 Introduction

This is a memo that tries to capture some of the experience gained in the [FLIRT project][3] on the topic of Virtual Labs for MOOCs (Massive Open Online Courses).

In this memo, we try to draw an overview of some benefits and concerns with existing approaches at using virtualization techniques for running Virtual Labs, as distributions of tools made available for distant learners.

We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, (2) displaying the remote execution of similar virtual machines on an IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud.

We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of modern Web browsers.

Disclaimer: This memo doesn’t intend to point to extensive literature on the subject, so part of our analysis may be biased by our particular context.

### 2 Context: MOOCs

Many MOOCs (Massive Open Online Courses) include a kind of “virtual laboratory” for learners to experiment with tools, as a way to apply the knowledge, practice, and be more active in the learning process. In quite a few (technical) disciplines, this can consist of using a set of standard applications in a professional domain, which represent typical tools that would be used in real-life scenarios.

Our main perspective will be that of a MOOC editor and of MOOC production teams which want to make “virtual labs” available for MOOC participants.

Such a “virtual lab” would typically contain installations of existing applications, pre-installed and configured, and loaded with scenario data in order to perform a lab.

The main constraint here is that such labs would typically be fabricated with limited software development expertise and funds[1][4]. Thus we consider here only the assembly of existing “normal” applications and discard the option of developing novel “serious games” and simulator applications for such MOOCs.

#### 2.1 The FLIRT project

The [FLIRT project][5] groups a consortium of 19 partners in industry, SMEs and academia to work on a collection of MOOCs and SPOCs for professional development in networks and telecommunications. Led by Institut Mines Telecom, it benefits from the funding support of the French “Investissements d’avenir” programme.

As part of the FLIRT roadmap, we’re leading an “innovation task” focused on Virtual Labs in the context of the Cloud. This memo was produced as part of this task.

#### 2.2 Some challenges in virtual labs design for distant learning

Virtual Labs used in distance learning contexts require the use of software applications in autonomy, running either on a personal or a professional computer. In general, the technical skills of participants may be diverse, and the same goes for the quality (bandwidth, QoS, filtering, limitations: firewalling) of the hardware and networks they use at home or at work. It is thus very optimistic to seek a one-size-fits-all solution.

Most of the time there’s a learning curve in getting familiar with the tools which students will have to use, which constitutes as many challenges to overcome for beginners. These tools may not be suited for beginners, but they will still be selected by the trainers as they’re representative of the professional context being taught.

In theory, this usability challenge should be addressed by devising an adapted pedagogical approach, especially in a context of distance learning, so that learners can practice the labs on their own, without the presence of a tutor or professor. Or some particular prerequisite skills could be required (“please follow System Administration 101 before applying to this course”).

Unfortunately, there are many cases where instructors basically just transpose to a distant learning scenario lab resources that had previously been devised for in-presence learning. This leaves learners facing many challenges to overcome, and the only support resource is often a regular forum on the MOOC’s LMS (Learning Management System).

My intuition[2][6] is that developing ad-hoc simulators for distant education would probably be more efficient and easier to use for learners. But that would require too high an investment for the designers of the courses.

In the context of MOOCs, which are mainly free to participate in, not much investment is possible in devising ad-hoc lab applications, and instructors have to rely on existing applications, tools and scenarios to deliver a cheap enough environment. Furthermore, technical or licensing constraints[3][7] may lead to selecting lab tools which may not be easy to learn, but have the great advantage of being freely redistributable[4][8].

### 3 Virtual Machines for Virtual Labs

The learners who will try unattended learning in such typical virtual labs will face difficulties in making specialized applications run. They must overcome the technical details of downloading, installing and configuring programs, before even trying to perform a particular pedagogical scenario linked to the matter studied.

To diminish these difficulties, one traditional approach for implementing labs in MOOCs has been to assemble a Virtual Machine image in advance. This ready-made image can then be downloaded and run with a virtual machine simulator (like [VirtualBox][9][5][10]).

The pre-loaded VM will already have everything ready for use, so that the learners don’t have to install anything on their machines.

An alternative is to let learners download and install the needed software tools themselves, but this leads to so many compatibility issues or technical skill prerequisites that this is often not advised, and mentioned only as a fallback option.

#### 3.1 Downloading and installation issues

Experience shows[2][11] that such virtual machines also bring some issues. Even if installation of every piece of software is no longer required, learners still need to be able to run the VM simulator on a wide range of diverse hardware, OSes and configurations. Even just managing to download the VMs still causes many issues (lack of admin privileges, size vs. download speed, memory or CPU load, disk space, screen configurations, firewall filtering, keyboard layout, etc.).

These problems aren’t faced by the majority of learners, but the impacted minority is not marginal either, and they will generally produce a lot of support requests for the MOOC team (usually in the forums), which needs to be anticipated by the community managers.

The use of VMs is no showstopper for most, but can be a serious problem for a minority of learners, and is thus no silver bullet.

Some general usability issues may also emerge if users aren’t used to the look and feel of the enclosed desktop. For instance, the VM may consist of a GNU/Linux desktop, whereas users would use a Windows or Mac OS system.

#### 3.2 Fabrication issues for the VM images

On the MOOC team's side, the fabrication of a lightweight, fast, tested, license-free and easy to use VM image isn't necessarily easy.

Software configurations tend to rot as time passes, and maintenance may not be easy when the evolution of later MOOC editions means the virtual lab scenarios must still be maintained years later.

Ideally, this would require adopting an "industrial" process for building (and testing) the lab VMs, but this requires quite some expertise (system administration, packaging, etc.) that may or may not have been anticipated at the time of building the MOOC (unlike video editing competence, for instance).

Our experiment with the [Vagrant][12] technology [[0][13]] and Debian packaging was interesting in this respect, as it allowed us to use a well managed "script" to precisely control the build of a minimal VM image.
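To illustrate the "scripted" build approach mentioned above, here is a hypothetical minimal Vagrantfile sketch (the box name and the provisioning script are illustrative placeholders, not the project's actual configuration):

```
# Hypothetical sketch of a scripted lab VM build; names are illustrative.
Vagrant.configure("2") do |config|
  # Base box: a minimal Debian image
  config.vm.box = "debian/stretch64"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end

  # All lab software is installed by a versioned shell script,
  # so rebuilding the image is reproducible and testable.
  config.vm.provision "shell", path: "provision-lab.sh"
end
```

With such a description, `vagrant up` followed by `vagrant package` would produce a redistributable VirtualBox image, rather than relying on a manually assembled snapshot.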
### 4 Virtual Labs as a Service

To overcome the difficulties in downloading and running Virtual Machines on one's local computer, we have started exploring the possibility of running these applications in a kind of Software as a Service (SaaS) context, "on the cloud".

But not all applications typically used in MOOC labs are already available for remote execution on the cloud (unless the course deals precisely with managing email in GMail).

We have then studied the option of using such an approach not for a single application, but for a whole virtual "desktop" which would be available on the cloud.

#### 4.1 IaaS deployments

A way to achieve this goal is to deploy Virtual Machine images quite similar to the ones described above on the cloud, in an Infrastructure as a Service (IaaS) context[6][14], to offer access to remote desktops for every learner.

There are different technical options to achieve this goal, but a simplified description of the architecture can be seen as just running the Virtual Machines on a single IaaS platform instead of on each learner's computer. Access to the desktop and application interfaces is made possible through Web pages (or other dedicated lightweight clients) which display a "full screen" view of the remote desktop running for the user on the cloud VM. Under the hood, the remote display of a Linux desktop session is handled by technologies like [VNC][15] and [RDP][16] connecting to a [Guacamole][17] server on the remote VM.
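As a rough illustration of the Guacamole side of such a setup, a connection to a learner's VM can be declared in Guacamole's basic `user-mapping.xml` file (a simplified sketch: the hostname, port and credentials are placeholders, and a real deployment would have a broker generate such entries dynamically rather than maintain them by hand):

```
<user-mapping>
    <!-- One learner account, mapped to the remote desktop of their cloud VM -->
    <authorize username="learner1" password="changeme">
        <connection name="lab-desktop">
            <protocol>vnc</protocol>
            <param name="hostname">10.0.0.12</param>
            <param name="port">5901</param>
        </connection>
    </authorize>
</user-mapping>
```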
In the context of the FLIRT project, we have made early experiments with such an architecture. We used the CloVER solution by our partner [ProCAN][18], which provides a virtual desktop broker between [OpenEdX][19] and an [OpenStack][20] IaaS public platform.

The expected benefit is that users don't have to install anything locally, as the only tool needed is a Web browser (displaying a full-screen [HTML5 canvas][21] showing the remote desktop run by the Guacamole server on the cloud VM).

But there are still some issues with such an approach. First, the cost of operating such an infrastructure: Virtual Machines need to be hosted on an IaaS platform, and that cost of operation isn't zero[7][22] for the MOOC editor, compared to the cost of VirtualBox and a VM running on the learner's side (basically zero for the MOOC editor).

Another issue, which could be more problematic, lies in the need for a reliable connection to the Internet during the whole sequence of lab execution by the learners[8][23]. Even if Guacamole is quite efficient at compressing rendering traffic, some basic connectivity is needed during the whole lab work session, preventing some mobile uses for instance.

One other potential annoyance is the delay in making a VM available to a learner (provisioning a VM), when huge VM images need to be copied inside the IaaS platform the first time a learner connects to the Virtual Lab activity (delays of several minutes). This may be worse if the VM image is too big (hence the need for optimization of the content[9][24]).

However, the fact that all VMs are running on a platform under the control of the MOOC editor allows new kinds of features for the MOOC. For instance, learners can submit results of their labs directly to the LMS without the need to upload or copy-paste results manually. This can help monitor progress or perform evaluation or grading.

The fact that their VMs run on the same platform also allows new kinds of pedagogical scenarios, as VMs of multiple learners can be interconnected, allowing cooperative activities between learners. The VM images may then need to be instrumented and deployed in particular configurations, which may require the use of a dedicated broker like CloVER to manage such scenarios.

For the record, we have yet to perform a rigorous benchmarking of such a solution in order to evaluate its benefits, or constraints, in particular contexts. In FLIRT, our main focus will be on the context of SPOCs for professional training (a somewhat different context than public MOOCs).

Still, this approach doesn't solve the VM fabrication issues for the MOOC staff. Installing software inside a VM, be it locally inside a VirtualBox simulator or over the cloud through a remote desktop display, makes little difference. This relies mainly on manual operations and may not be well managed in terms of quality of the process (reproducibility, optimization).
#### 4.2 PaaS deployments using containers

Some key issues in the IaaS context described above are the cost of operating full VMs, and long provisioning delays.

We're experimenting with new options to address these issues, through the use of [Linux containers][25] running on a PaaS (Platform as a Service) platform, instead of full-fledged Virtual Machines[10][26].

The main difference with containers, instead of Virtual Machines, lies in the reduced size of images and much lower CPU load requirements, as containers remove the need for one layer of virtualization. Also, the deduplication techniques at the heart of some virtual file-systems used by container platforms lead to really fast provisioning, avoiding the need to wait for the labs to start.

The traditional making of VMs, done by installing packages and taking a snapshot, was affordable for the regular teacher, but involved manual operations. In this respect, one other major benefit of containers is the potential for better industrialization of the virtual lab fabrication, as they are generally not assembled manually. Instead, one uses a "scripting" approach to describe which applications and their dependencies need to be put inside a container image. But this requires new competence from the lab creators, like learning the [Docker][27] technology (and the [OpenShift][28] PaaS, for instance), which may be quite specialized. Whereas Docker containers tend to be quite popular in Software Development faculties (through the "[devops][29]" hype), they may be a bit new to instructors in other fields.
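For instance, a container image for a small networking lab might be described with a Dockerfile along these lines (a hedged sketch; the package names and entry command are illustrative only):

```
# Illustrative lab image: only the tools needed for the exercise,
# no desktop environment, so the image stays small and rebuilds fast.
FROM debian:stretch-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends tshark iproute2 \
    && rm -rf /var/lib/apt/lists/*
# Run the single lab application instead of a full desktop session
CMD ["tshark", "--version"]
```

Because the whole recipe is a text file, it can be versioned, reviewed and rebuilt automatically, which is precisely the industrialization benefit discussed above.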
The learning curve for mastering the automation of the whole container-based lab installation needs to be evaluated. There's a trade-off to consider in adopting technology like Vagrant or Docker: acquiring container/PaaS expertise vs. quality of industrialization and optimization. The production of a MOOC should then require careful planning if one has to hire or contract with a PaaS expert for setting up the Virtual Labs.

We may also expect interesting pedagogical benefits. As containers are lightweight, and platforms allow one to "easily" deploy multiple interlinked containers (over dedicated virtual networks), this enables the setup of more realistic scenarios, where each learner may be provided with multiple "nodes" over virtual networks (all running their individual containers). This would be particularly interesting for Computer Networks or Security teaching, for instance, where each learner may have access to both client and server nodes, to study client-server protocols. This is particularly interesting for us in the context of our FLIRT project, where we produce a collection of Computer Networks courses.

Still, this mode of operation relies on good connectivity of the learners to the Cloud. For distance learning in poorly connected regions, the PaaS architecture doesn't solve that particular issue any better than the previous IaaS architecture.
### 5 Future server-less Virtual Labs with WebAssembly

As we have seen, the IaaS or PaaS based Virtual Labs running on the Cloud offer alternatives to installing local virtual machines on the learner's computer. But they both require the learner to stay connected for the whole duration of the Lab, as the applications are executed on the remote servers, on the Cloud (either inside VMs or containers).

We have been thinking of another alternative which could allow the deployment of some Virtual Labs on the local computers of the learners without the hassle of downloading and installing a Virtual Machine manager and VM image. We envision the possibility of using the infrastructure provided by modern Web browsers to run the lab's applications.

At the time of writing, this architecture is still highly experimental. The main idea is to rebuild the applications needed for the Lab so that they can be run in the "generic" virtual machine present in modern browsers: the [WebAssembly][30] and Javascript execution engine.

WebAssembly is a modern language which aims for maximum portability, and as its name hints, is a kind of assembly language for the Web platform. What is of interest for us is that WebAssembly is supported by most modern Web browsers, making it a very interesting target for portability.

Emerging toolchains allow recompiling applications written in languages like C or C++ so that they can be run on the WebAssembly virtual machine in the browser. This is interesting as it doesn't require modifying the source code of these programs. Of course, there are limitations in the kind of underlying APIs and libraries compatible with that platform, and in the sandboxing of the WebAssembly execution engine enforced by the Web browser.
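For instance, with the Emscripten toolchain, a C program can be compiled for the browser roughly like this (a sketch; `hello.c` is a placeholder and real projects need more flags and porting work):

```
# Compile C source to a .wasm module plus the JS/HTML glue for the browser
emcc hello.c -O2 -o hello.html
```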
Historically, WebAssembly has been developed to allow running games written in C++ for a framework like Unity in the Web browser.

In some contexts, for instance for tools with an interactive GUI that process data retrieved from files, and which don't need very specific interaction with the underlying operating system, it seems possible to port these programs to WebAssembly so that they run inside the Web browser.

We have to experiment more deeply with this technology to validate its potential for running Virtual Labs in the context of a Web browser.

We used a similar approach in the past when porting a Relational Database course lab to the Web browser, for standalone execution. A real database would run in the minimal SQLite RDBMS, recompiled to JavaScript[11][31]. Instead of having to download, install and run a VM with an RDBMS, the students would only connect to a Web page, which would load the DBMS in memory and allow performing the lab's SQL queries locally, disconnected from any third party server.
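This client-side pattern can be sketched with sql.js, a build of SQLite for the browser (a simplified illustration; the script path is a placeholder, and the API details should be checked against the sql.js documentation of the version deployed):

```
<!-- Hypothetical sketch: the whole SQL lab runs in the page, no server -->
<script src="sql-wasm.js"></script>
<script>
  initSqlJs({ locateFile: function (file) { return file; } }).then(function (SQL) {
    var db = new SQL.Database(); // in-memory SQLite database
    db.run("CREATE TABLE grades(student TEXT, mark INTEGER);");
    db.run("INSERT INTO grades VALUES ('alice', 15), ('bob', 12);");
    // The learner's lab queries execute locally, even offline
    var result = db.exec("SELECT AVG(mark) FROM grades;");
    console.log(result[0].values[0][0]);
  });
</script>
```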
In a similar manner, we can think, for instance, of a lab scenario where the packet inspection features of the Wireshark tool would run inside the WebAssembly virtual machine, to allow dissecting provided capture files directly in the Web browser, without having to install Wireshark.

We expect to publish a report on that last experiment in the future, with more details and results.

### 6 Conclusion

The most promising architecture for Virtual Lab deployments seems to be the use of containers on a PaaS platform for deploying virtual desktops or virtual application GUIs available in the Web browser.

This would allow the controlled fabrication of Virtual Labs containing the exact bits needed for learners to practice while minimizing the delays.

Still, the need for always-on connectivity can be a problem.

Also, the potential for inter-networked containers allowing the kind of multi-node and collaborative scenarios we described would require a lot of expertise to develop, as well as management platforms for the MOOC operators, which aren't yet mature.

We hope to be able to report on our progress on those aspects in the coming months and years.
### 7 References

[0]
Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac, "Designing a virtual laboratory for a relational database MOOC". International Conference on Computer Supported Education, SCITEPRESS, 23-25 May 2015, Lisbon, Portugal, 2015, vol. 7, pp. 260-268, ISBN 978-989-758-107-6 – [DOI: 10.5220/0005439702600268][1] ([preprint (HTML)][2])

### 8 Copyright

[![Creative Commons License](https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png)][45]

This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][46].
### Footnotes:

[1][32] – The FLIRT project also works on business model aspects of MOOC or SPOC production in the context of professional development, but the present memo starts from a minimalistic hypothesis where funding for course production is quite limited.

[2][33] – research-based evidence needed

[3][34] – In typical MOOCs which are free to participate in, the VM should include only gratis tools, which typically means a GNU/Linux distribution loaded with applications available under free and open source licenses.

[4][35] – Typically, Free and Open Source software, aka Libre Software

[5][36] – VirtualBox is portable to many operating systems, making it a very popular solution for such a need

[6][37] – The IaaS platform could typically be an open cloud for MOOCs or a private cloud for SPOCs (for closer monitoring of student activity or for security control reasons).

[7][38] – Depending on the expected use of the lab by learners, this cost may vary a lot. The size and configuration required for the included software may have an impact (hence the need to minimize the footprint of the VM images). With diminishing costs in general, this may not be a showstopper. Refer to marketing figures of commercial IaaS offerings for accurate figures. Pay attention to additional licensing costs if the OS of the VM isn't free software, or if other licenses must be provided for every learner.

[8][39] – The need for always-on connectivity may not be a problem for professional development SPOCs where learners connect from enterprise networks, for instance. It may be detrimental when MOOCs are very popular in southern countries where high bandwidth is both unreliable and expensive.

[9][40] – In this respect, providing a full Linux desktop inside the VM doesn't necessarily make sense. Instead, running applications full-screen may be better, avoiding installation of whole desktop environments like Gnome or XFCE… but this has usability consequences. Careful tuning and testing is needed in any case.

[10][41] – Container based architectures are quite popular in the industry, but have not yet been deployed at a large scale in the context of large public MOOC hosting platforms, to our knowledge, at the time of writing. There are interesting technical challenges which the FLIRT project tries to tackle together with its partner ProCAN.

[11][42] – See the corresponding paragraph [http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env][43] in [0][44]
--------------------------------------------------------------------------------

via: https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/

作者:[Olivier Berger (Télécom Sudparis)][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www-public.tem-tsp.eu
[1]:http://dx.doi.org/10.5220/0005439702600268
[2]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/
[3]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#org50fdc1a
[4]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.1
[5]:http://flirtmooc.wixsite.com/flirt-mooc-telecom
[6]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2
[7]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.3
[8]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.4
[9]:http://virtualbox.org
[10]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.5
[11]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2
[12]:https://www.vagrantup.com/
[13]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50
[14]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.6
[15]:https://en.wikipedia.org/wiki/Virtual_Network_Computing
[16]:https://en.wikipedia.org/wiki/Remote_Desktop_Protocol
[17]:http://guacamole.apache.org/
[18]:https://www.procan-group.com/
[19]:https://open.edx.org/
[20]:http://openstack.org/
[21]:https://en.wikipedia.org/wiki/Canvas_element
[22]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.7
[23]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.8
[24]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.9
[25]:https://www.redhat.com/en/topics/containers
[26]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.10
[27]:https://en.wikipedia.org/wiki/Docker_(software)
[28]:https://www.openshift.com/
[29]:https://en.wikipedia.org/wiki/DevOps
[30]:http://webassembly.org/
[31]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.11
[32]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.1
[33]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.2
[34]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.3
[35]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.4
[36]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.5
[37]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.6
[38]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.7
[39]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.8
[40]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.9
[41]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.10
[42]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.11
[43]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env
[44]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50
[45]:http://creativecommons.org/licenses/by-nc-sa/4.0/
[46]:http://creativecommons.org/licenses/by-nc-sa/4.0/
sources/tech/20180209 Gnome without chrome-gnome-shell.md
@ -0,0 +1,48 @@
Gnome without chrome-gnome-shell
======

New laptop, has a touchscreen, can be folded into a tablet, I heard gnome-shell would be a good choice of desktop environment, and I managed to tweak it enough that I can reuse existing habits.

I have a big problem, however, with how it encourages one to download random extensions off the internet and run them as part of the whole desktop environment. I have an even bigger problem with [gnome-core][1] having a hard dependency on [chrome-gnome-shell][2], a plugin which cannot be disabled without root access to edit files in `/etc`, and which exposes parts of my desktop environment to websites.

Visit [this site][3] and it will know which extensions you have installed, and it will be able to install more. I do not trust that, I do not need that, I do not want that. I am horrified by the idea of it.

[I made a workaround.][4]

How can one do the same for Firefox?

### Description
chrome-gnome-shell is a hard dependency of gnome-core, and it installs a browser plugin that one may not want, and mandates its use via system-wide Chrome policies.

I consider having chrome-gnome-shell an unneeded increase of the attack surface of my system, in exchange for the dubious privilege of being able to download and execute, as my main user, random unreviewed code.

This package satisfies the chrome-gnome-shell dependency, but installs nothing.

Note that after installing this package you need to purge chrome-gnome-shell if it was previously installed, to have it remove its Chromium policy files in `/etc/chromium`.
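The `contain-gnome-shell` file passed to `equivs-build` in the instructions below is an equivs control file; a minimal sketch of what it might contain (the actual file lives in the linked repository) is:

```
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: contain-gnome-shell
Version: 1.0
Provides: chrome-gnome-shell
Description: empty package satisfying the chrome-gnome-shell dependency
 Installs no files; it only declares Provides: chrome-gnome-shell so that
 gnome-core's dependency is met without the real plugin.
```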
### Instructions

```
apt install equivs
equivs-build contain-gnome-shell
sudo dpkg -i contain-gnome-shell_1.0_all.deb
sudo dpkg --purge chrome-gnome-shell
```
--------------------------------------------------------------------------------

via: http://www.enricozini.org/blog/2018/debian/gnome-without-chrome-gnome-shell/

作者:[Enrico Zini][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.enricozini.org/
[1]:https://packages.debian.org/gnome-core
[2]:https://packages.debian.org/chrome-gnome-shell
[3]:https://extensions.gnome.org/
[4]:https://salsa.debian.org/enrico/contain-gnome-shell
@ -0,0 +1,439 @@
How to Install Gogs Go Git Service on Ubuntu 16.04
======
Gogs is a free and open source Git service written in the Go language. Gogs is a painless self-hosted Git service that allows you to create and run your own Git server on minimal hardware. The Gogs web UI is very similar to GitHub's and offers support for the MySQL, PostgreSQL, and SQLite databases.

In this tutorial, we will show you step-by-step how to install and configure your own Git service using Gogs on Ubuntu 16.04. This tutorial will cover details including how to install Go on an Ubuntu system, install PostgreSQL, and install and configure the Nginx web server as a reverse proxy for the Go application.

### Prerequisites

  * Ubuntu 16.04
  * Root privileges

### What we will do

  1. Update and Upgrade System
  2. Install and Configure PostgreSQL
  3. Install Go and Git
  4. Install Gogs
  5. Configure Gogs
  6. Running Gogs as a Service
  7. Install and Configure Nginx as a Reverse Proxy
  8. Testing
### Step 1 - Update and Upgrade System

Before going any further, update all Ubuntu repositories and upgrade all packages.

Run the apt commands below.

```
sudo apt update
sudo apt upgrade
```
### Step 2 - Install and Configure PostgreSQL

Gogs offers support for the MySQL, PostgreSQL, SQLite3, MSSQL, and TiDB database systems.

In this guide, we will be using PostgreSQL as the database for our Gogs installation.

Install PostgreSQL using the apt command below.

```
sudo apt install -y postgresql postgresql-client libpq-dev
```

After the installation is complete, start the PostgreSQL service and enable it to launch at every system boot.

```
systemctl start postgresql
systemctl enable postgresql
```

The PostgreSQL database has been installed on the Ubuntu system.

Next, we need to create a new database and user for Gogs.

Log in as the 'postgres' user and run the 'psql' command to get the PostgreSQL shell.

```
su - postgres
psql
```

Create a new user named 'git', and give the user the 'CREATEDB' privilege.

```
CREATE USER git CREATEDB;
\password git
```

Create a database named 'gogs_production', and set the 'git' user as the owner of the database.

```
CREATE DATABASE gogs_production OWNER git;
```

[![Create the Gogs database][1]][2]

The new PostgreSQL database 'gogs_production' and user 'git' for the Gogs installation have been created.
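Alternatively, the same role and database can be created non-interactively with PostgreSQL's command-line wrappers (a sketch; run as the 'postgres' system user on a host where the server is already running — the first command prompts for the new password):

```
# Same result as the psql session above, without an interactive shell
createuser --createdb --pwprompt git
createdb --owner=git gogs_production
```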
### Step 3 - Install Go and Git

Install Git from the repository using the apt command below.

```
sudo apt install git
```

Now add a new user 'git' to the system.

```
sudo adduser --disabled-login --gecos 'Gogs' git
```

Log in as the 'git' user and create a new 'local' directory.

```
su - git
mkdir -p /home/git/local
```

Go to the 'local' directory and download Go (the latest version) using the wget command as shown below.

```
cd ~/local
wget https://dl.google.com/go/go1.9.2.linux-amd64.tar.gz
```

[![Install Go and Git][3]][4]

Extract the Go archive, then remove it.

```
tar -xf go1.9.2.linux-amd64.tar.gz
rm -f go1.9.2.linux-amd64.tar.gz
```

The Go binaries have been extracted to the '~/local/go' directory. Now we need to set up the environment - we need to define the 'GOROOT' and 'GOPATH' directories so we can run the 'go' command on the system under the 'git' user.

Run all of the following commands.

```
cd ~/
echo 'export GOROOT=$HOME/local/go' >> $HOME/.bashrc
echo 'export GOPATH=$HOME/go' >> $HOME/.bashrc
echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> $HOME/.bashrc
```

And reload Bash by running the 'source ~/.bashrc' command as shown below.

```
source ~/.bashrc
```

Make sure you're using Bash as your default shell.

[![Install Go programming language][5]][6]

Now run the 'go' command to check the version.

```
go version
```

And make sure you get the result as shown in the following screenshot.

[![Check the go version][7]][8]

Go is now installed on the system, under the 'git' user.
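To double-check the environment wiring, a small POSIX shell sketch can recompute the paths the `.bashrc` lines define and confirm both Go bin directories land on PATH (the HOME value here mirrors the tutorial's layout and is illustrative):

```
# Recompute the variables the .bashrc lines define and report
# whether both Go bin directories ended up on PATH.
HOME=/home/git
GOROOT="$HOME/local/go"
GOPATH="$HOME/go"
PATH="$PATH:$GOROOT/bin:$GOPATH/bin"

case "$PATH" in
  *"$GOROOT/bin"*) echo "GOROOT bin on PATH" ;;
  *) echo "GOROOT bin missing" ;;
esac
case "$PATH" in
  *"$GOPATH/bin"*) echo "GOPATH bin on PATH" ;;
  *) echo "GOPATH bin missing" ;;
esac
```

If either line reports "missing", re-check the `echo ... >> $HOME/.bashrc` commands and re-run `source ~/.bashrc`.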
### Step 4 - Install Gogs Go Git Service

Log in as the 'git' user and download Gogs from GitHub using the 'go' command.

```
su - git
go get -u github.com/gogits/gogs
```

The command will download all the Gogs source code into the '$GOPATH/src' directory.

Go to the '$GOPATH/src/github.com/gogits/gogs' directory and build Gogs using the commands below.

```
cd $GOPATH/src/github.com/gogits/gogs
go build
```

And make sure you get no error.

Now run the Gogs Go Git Service using the command below.

```
./gogs web
```

The command will run Gogs on the default port 3000.

[![Install Gogs Go Git Service][9]][10]

Open your web browser and type your server IP address with port 3000, mine is <http://192.168.33.10:3000/>

And you should get the result as shown below.

[![Gogs web installer][11]][12]

Gogs is installed on the Ubuntu system. Now go back to your terminal and press 'Ctrl + c' to exit.
### Step 5 - Configure Gogs Go Git Service

In this step, we will create a custom configuration for Gogs.

Go to the Gogs installation directory and create a new 'custom/conf' directory.

```
cd $GOPATH/src/github.com/gogits/gogs
mkdir -p custom/conf/
```

Copy the default configuration to the custom directory and edit it using [vim][13].

```
cp conf/app.ini custom/conf/app.ini
vim custom/conf/app.ini
```

In the ' **[server]** ' section, change 'HTTP_ADDR' to '127.0.0.1'.

```
[server]
PROTOCOL = http
DOMAIN = localhost
ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/
HTTP_ADDR = 127.0.0.1
HTTP_PORT = 3000
```

In the ' **[database]** ' section, change everything to match your own database info.

```
[database]
DB_TYPE = postgres
HOST = 127.0.0.1:5432
NAME = gogs_production
USER = git
PASSWD = [email protected]#
```

Save and exit.

Now verify the configuration by running the command as shown below.

```
./gogs web
```

And make sure you get the result as follows.

[![Configure the service][14]][15]

Gogs is now running with our custom configuration, under 'localhost' with port 3000.
### Step 6 - Running Gogs as a Service
|
||||
|
||||
In this step, we will configure Gogs as a service on Ubuntu system. We will create a new service file configuration 'gogs.service' under the '/etc/systemd/system' directory.
|
||||
|
||||
Go to the '/etc/systemd/system' directory and create a new service file 'gogs.service' using the [vim][13] editor.
|
||||
|
||||
```
|
||||
cd /etc/systemd/system
|
||||
vim gogs.service
|
||||
```
|
||||
|
||||
Paste the following gogs service configuration there.
|
||||
```
[Unit]
Description=Gogs
After=syslog.target
After=network.target
After=mariadb.service mysqld.service postgresql.service memcached.service redis.service

[Service]
# Modify these two values and uncomment them if you have
# repos with lots of files and get an HTTP error 500 because
# of that
###
#LimitMEMLOCK=infinity
#LimitNOFILE=65535
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/go/src/github.com/gogits/gogs
ExecStart=/home/git/go/src/github.com/gogits/gogs/gogs web
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target
```
Save and exit.

Now reload the systemd services.

```
systemctl daemon-reload
```

Start the gogs service and enable it to launch every time at system boot using the systemctl command.

```
systemctl start gogs
systemctl enable gogs
```

[![Run gogs as a service][16]][17]

Gogs is now running as a service on the Ubuntu system.

Check it using the commands below.

```
netstat -plntu
systemctl status gogs
```

And you should get the result shown below.

[![Gogs is listening on the network interface][18]][19]
### Step 7 - Configure Nginx as a Reverse Proxy for Gogs

In this step, we will configure Nginx as a reverse proxy for Gogs. We will be using Nginx packages from its own repository.

Add the Nginx repository using the add-apt-repository command.

```
sudo add-apt-repository -y ppa:nginx/stable
```

Now update all Ubuntu repositories and install Nginx using the apt commands below.

```
sudo apt update
sudo apt install nginx -y
```

Next, go to the '/etc/nginx/sites-available' directory and create a new virtual host file 'gogs'.

```
cd /etc/nginx/sites-available
vim gogs
```

Paste the following configuration there.
```
server {
    listen 80;
    server_name git.hakase-labs.co;

    location / {
        proxy_pass http://localhost:3000;
    }
}
```
Save and exit.

**Note:**

Change the 'server_name' line to your own domain name.

Now activate the new virtual host and test the Nginx configuration.

```
ln -s /etc/nginx/sites-available/gogs /etc/nginx/sites-enabled/
nginx -t
```

Make sure there is no error, then restart the Nginx service.

```
systemctl restart nginx
```

[![Nginx reverse proxy for gogs][20]][21]
### Step 8 - Testing

Open your web browser and type your Gogs URL; mine is <http://git.hakase-labs.co>

You will see the installation page. At the top of the page, type in all of your PostgreSQL database info.

[![Gogs installer][22]][23]

Now scroll to the bottom and click the 'Admin account settings' dropdown.

Type in your admin user, password, and email.

[![Type in the gogs install settings][24]][25]

Then click the 'Install Gogs' button.

And you will be redirected to the Gogs user dashboard as shown below.

[![Gogs dashboard][26]][27]

Below is the Gogs 'Admin Dashboard'.

[![Browse the Gogs dashboard][28]][29]

Gogs is now installed with the PostgreSQL database and Nginx web server on an Ubuntu 16.04 server.
--------------------------------------------------------------------------------

via: https://www.howtoforge.com/tutorial/how-to-install-gogs-go-git-service-on-ubuntu-1604/

作者:[Muhammad Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.howtoforge.com/tutorial/server-monitoring-with-shinken-on-ubuntu-16-04/
[1]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/1.png
[2]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/1.png
[3]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/2.png
[4]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/2.png
[5]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/3.png
[6]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/3.png
[7]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/4.png
[8]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/4.png
[9]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/5.png
[10]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/5.png
[11]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/6.png
[12]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/6.png
[13]:https://www.howtoforge.com/vim-basics
[14]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/7.png
[15]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/7.png
[16]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/8.png
[17]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/8.png
[18]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/9.png
[19]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/9.png
[20]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/10.png
[21]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/10.png
[22]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/11.png
[23]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/11.png
[24]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/12.png
[25]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/12.png
[26]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/13.png
[27]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/13.png
[28]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/14.png
[29]:https://www.howtoforge.com/images/how_to_install_gogs_go_git_service_on_ubuntu_1604/big/14.png
@ -0,0 +1,121 @@
How to set up and configure a network bridge on Debian Linux
======

I am a new Debian Linux user. I want to set up a bridge for virtualised environments (KVM) running on Debian Linux. How do I set up network bridging in /etc/network/interfaces on a Debian Linux 9.x server?

If you want to assign IP addresses to your virtual machines and make them accessible from your LAN, you need to set up a network bridge. By default, a private network bridge is created when using KVM. You need to set up the interfaces manually, avoiding conflicts with the network manager.
### How to install the brctl

Type the following apt command/[apt-get command][1]:

`$ sudo apt install bridge-utils`

### How to set up a network bridge on Debian Linux

You need to edit the /etc/network/interfaces file. However, I recommend dropping a brand new config in the /etc/network/interfaces.d/ directory. The procedure to configure a network bridge on Debian Linux is as follows:
#### Step 1 – Find out your physical interface

Use the [ip command][2]:

`$ ip -f inet a s`

Sample outputs:

```
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.2.23/24 brd 192.168.2.255 scope global eno1
       valid_lft forever preferred_lft forever
```

eno1 is my physical interface.
#### Step 2 – Update the /etc/network/interfaces file

Make sure only lo (the loopback) is active in /etc/network/interfaces. Remove any config related to eno1. Here is my config file printed using the [cat command][3]:

`$ cat /etc/network/interfaces`

```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback
```
#### Step 3 – Configuring bridging (br0) in /etc/network/interfaces.d/br0

Create a text file using a text editor such as the vi command:

`$ sudo vi /etc/network/interfaces.d/br0`

Append the following config:

```
## static ip config file for br0 ##
auto br0
iface br0 inet static
    address 192.168.2.23
    broadcast 192.168.2.255
    netmask 255.255.255.0
    gateway 192.168.2.254
    # If the resolvconf package is installed, you should not edit
    # the resolv.conf configuration file manually. Set name servers here
    #dns-nameservers 192.168.2.254
    # If you have multiple interfaces such as eth0 and eth1
    # bridge_ports eth0 eth1
    bridge_ports eno1
    bridge_stp off       # disable Spanning Tree Protocol
    bridge_waitport 0    # no delay before a port becomes available
    bridge_fd 0          # no forwarding delay
```
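As a quick sanity check on the stanza above: the `netmask 255.255.255.0` line must agree with the `/24` prefix shown for the address in Step 1. A small POSIX-shell popcount, shown here purely as an illustration, converts a dotted-quad mask to its prefix length:

```shell
#!/bin/sh
# Convert a dotted-quad netmask to a CIDR prefix length by counting set bits.
mask=255.255.255.0
bits=0
for octet in $(printf '%s' "$mask" | tr '.' ' '); do
    while [ "$octet" -gt 0 ]; do
        bits=$((bits + octet % 2))
        octet=$((octet / 2))
    done
done
echo "/$bits"
```

Running it prints `/24`, matching the `192.168.2.23/24` address assigned to eno1 earlier.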
If you want the bridge to get an IP address using DHCP:

```
## DHCP ip config file for br0 ##
auto br0

# Bridge setup
iface br0 inet dhcp
    bridge_ports eno1
```

[Save and close the file in vi/vim][4].
#### Step 4 – [Restart networking service in Linux][5]

Before you restart the networking service, make sure the firewall is disabled. The firewall may refer to an older interface such as eno1. Once the service has restarted, you must update the firewall rules for the br0 interface. Type the following to restart the networking service:

`$ sudo systemctl restart network-manager`

Verify that the service has been restarted:

`$ systemctl status network-manager`

Look for the new br0 interface and routing table with the help of the [ip command][2]:

```
$ ip a s
$ ip r
$ ping -c 2 cyberciti.biz
```

Sample outputs:

![](https://www.cyberciti.biz/media/new/faq/2018/02/How-to-setup-and-configure-network-bridge-on-Debian-Linux.jpg)

You can also use the brctl command to view info about your bridges:

`$ brctl show`

Show current bridges:

`$ bridge link`

![](https://www.cyberciti.biz/media/new/faq/2018/02/Show-current-bridges-and-what-interfaces-they-are-connected-to-on-Linux.jpg)
### About the author

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via [RSS/XML feed][6]** or [weekly email newsletter][7].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-configuring-bridging-in-debian-linux/

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
[2]:https://www.cyberciti.biz/faq/linux-ip-command-examples-usage-syntax/ (See Linux/Unix ip command examples for more info)
[3]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info)
[4]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
[5]:https://www.cyberciti.biz/faq/linux-restart-network-interface/
[6]:https://www.cyberciti.biz/atom/atom.xml
[7]:https://www.cyberciti.biz/subscribe-to-weekly-linux-unix-newsletter-for-sysadmin/
@ -0,0 +1,299 @@
How to use Twine and SugarCube to create interactive adventure games
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f)

Storytelling is an innate part of human nature. It's an idle pastime, it's an art form, it's a communication tool, it's a form of therapy and bonding. We all love to tell stories—you're reading one now—and the most powerful technologies we have are generally the things that enable us to express our creative ideas. The open source project [Twine][1] is a tool for doing just that.

Twine is an interactive story generator. It uses HTML, CSS, and JavaScript to create self-contained adventure games, in the spirit of classics like [Zork][2] and [Colossal Cave][3]. Since Twine is largely an amalgamation of several open technologies, it is flexible enough to do a lot of multimedia tricks, rendering games a lot more like [HyperCard][4] than you might normally expect from HTML.
### Installing Twine

You can use Twine online or download it locally from its website. Unzip the download and click the `Twine` application icon to start it.

The default starting interface is pretty intuitive. Read its introductory material, then click the big green `+Story` button on the right to create a new story.

### Hello world

The basics are simple. A new storyboard contains one node, or "passage" in Twine's terminology, called `Untitled passage`. Roll over this passage to see the node's options, then click the pencil icon to edit its contents.

Name the passage something to indicate its position in your story. In the previous version of Twine, the starting passage had to be named **Start**, but in Twine 2, any title will work. It's still a good idea to make it sensible, so stick with something like `Start` or `Home` or `init`.

For the text contents of this story, type:
```
Hello [[world]]
```

If you're familiar with [wikitext][5], you can probably already guess that the word "world" in this passage is actually a link to another passage.
Your edits are saved automatically, so you can just close the editing dialogue box when finished. Back in your storyboard, Twine has detected that you've created a link and has provided a new passage for you, called `world`.

![developing story in Twine][7]

Developing a story in Twine

Open the new passage for editing and enter the text:

```
This was made with Twine.
```

To test your very short story, click the play button in the lower-right corner of the Twine window.

It's not much, but it's a start!

You can add more navigational choices by adding another link in double brackets, which generates a new passage, until you tell whatever tale you want to tell. It really is as simple as that.

To publish your adventure, click the story title in the lower-left corner of the storyboard window and select **Publish to file**. This saves your whole project as one HTML file. Upload that one file to your website, or send it to friends and have them open it in a web browser, and you've just made and delivered your very first text adventure.
### Advanced Twineage

Knowing only enough to build this `hello world` story, you can make a great text-based adventure consisting of exploration and choices. As quick starts go, that's not too bad. Like all good open source technology, there's no ceiling on this, and you can take it much, much farther with a few additional tricks.

Twine projects work as well as they do partly because of a JavaScript backend called Harlowe. It adds all the pretty transitions and some UI styling, handles basic multimedia functions, and provides some special macros to reduce the amount of code you would have to write for some advanced tasks. This is open source, though, so naturally there are alternatives.

[SugarCube][8] is an alternate JavaScript library for Twine that handles media, media playback functions, advanced linking for passages, UI elements, save files, and much more. It can turn your basic text adventure into a multimedia extravaganza rivaling such adventure games as Myst or Beneath a Steel Sky.

### Installing SugarCube

To install the SugarCube backend for your project:

  * [Download the SugarCube library][9]. Even though Twine ships with an earlier version of SugarCube, you should download the latest version.
  * Once you've downloaded it, unzip the archive and place it in a sensible location. If you're not used to keeping files organized or [managing creative assets][10] for project development, put the unzipped SugarCube directory into your Twine directory for safekeeping.
  * The SugarCube directory contains only a few files, with the actual code in `format.js`. If you're on Linux, right-click on the file and select **Copy**.
  * In Twine, return to your project library by clicking the house icon in the lower-left corner of the Twine window.
  * Click the **Formats** button in the right sidebar of Twine. In the **Add a New Format** tab, paste in the file path to `format.js` and click the green **Add** button.

![Install Sugarcube add format][12]

Installing SugarCube: Click the Add button to add a new format in Twine

If you're not on Linux, type the file path manually in this format:

`file:///home/your-username/path/to/SugarCube-2/format.js`
### Using SugarCube

To switch a project to SugarCube, enter the storyboard mode of your project.

In the storyboard view, click the title of your storyboard in the lower-left corner of the Twine window and select **Change Story Format**.

In the **Story format** window that appears, select the SugarCube 2.x option.

![Story format sugarcube][14]

Select SugarCube in the Story Format window
### Images

Before adding images, audio, or video to a Twine project, create a project directory in which to keep copies of your assets. This is vital, because these assets remain separate from the HTML file that Twine exports, so the final step of creating your story will be to take your exported HTML file and drop it in place alongside all the media it needs. If you're used to programming, video editing, or web design, this is a familiar discipline, but if you're new to this kind of content creation, you may not have encountered this before, so be especially diligent in organizing your assets.

Create a project directory somewhere. Inside this directory, create a subdirectory called `img` for your images, `audio` for your audio, `video` for video, and so on.

![Create a directory in Twine][16]

Create subdirectories for your project files in Twine

For this example, I use an image from [openclipart.org][17]. You can use this, or something similar. Regardless of what you use, place your image in your `img` directory.

Continuing with the hello_world project, you can add an image to one of the passages using SugarCube's image syntax:
```
<img src="img/earth.svg" alt="An image of the world." />

Hello [[world]].
```
If you try to play your project after adding your images, you'll find that all the image links are broken. This is because Twine is located outside of your project directory. To test a multimedia Twine project, export it as a file and place the file in your project directory. Don't put it inside any of the subdirectories you created; simply place it in your project directory and open it in a web browser.

![View media in sugarcube][19]

Previewing media files added to a Twine project

Other media files function in basically the same way, utilizing HTML5 media tags to display the media and SugarCube macros to control when playback begins and ends.

### Variables and programming

You can do a lot by leading a player to one passage or another depending on what choices they have made, but you can cut down on how many passages you need by using variables.

If you have never programmed before, take a moment to read through my [introduction to programming concepts][20]. The article uses Python, but all the same concepts apply to Twine and basically any other programming language you're likely to encounter.

For example, since the hello_world story is initially set on Earth, the next step in the story could be to offer a variety of trips to other worlds. Each time the reader returns to Earth, the game can display a tally of the worlds they have visited. This would be essentially impossible to do linearly, because you would never be able to tell which path a reader has taken in their exploration. For instance, one reader might visit Mars first, then Mercury. Another might never go to Mars at all, instead visiting Jupiter, Saturn, and then Mercury. You would have to make one passage for every possible combination, and that solution simply doesn't scale.

With variables, however, you can track a reader's progress and display messages accordingly.

To make this work, you must set a variable each time a reader reaches a new planet. In the game universe of the hello_world game, planets are actually open source projects, so each time a user visits a passage about an open source project, set a variable to "prove" that the reader has visited.

Variables in SugarCube syntax are set with the <<set>> macro. SugarCube has lots of macros, and they're all handy. This example project uses a few.

Change the second passage you created to provide the reader a few new options for exploration:
```
This was made in [[Twine]] on [[Linux]].

<<choice Start "Return to Earth.">>
```

You're using the <<choice>> macro here, which links any string of text straight back to a given passage. In this case, the <<choice>> macro links the string "Return to Earth" to the Start passage.
In the new passage, insert this text:

```
Twine is an interactive story framework. It runs on all operating systems, but I prefer to use it on [[Linux]].

<<set $twine to true>>

<<choice Start "Return to Earth.">>
```

In this code, you use the <<set>> macro to create a new variable called `$twine`. This variable is a Boolean, because you're just setting it to `true`. You'll see why that's significant soon.
In the `Linux` passage, enter this text:

```
Linux is an open source [[Unix]]-like operating system.

<<set $linux to true>>

<<choice Start "Return to Earth.">>
```

And in the `Unix` passage:

```
BSD is an open source version of AT&T's Unix operating system.

<<set $bsd to true>>

<<choice Start "Return to Earth.">>
```
Now that the story has five passages for a reader to explore, it's time to use SugarCube to detect which variables have been set each time a reader returns to Earth.

To detect the state of a variable and generate HTML accordingly, use the <<if>> macro.

```
<img src="img/earth.png" alt="An image of the world." />

Hello [[world]].

<ul>
<<if $twine is true>><li>Planet Twine</li><</if>>
<<if $linux is true>><li>Planet Linux</li><</if>>
<<if $bsd is true>><li>Planet BSD</li><</if>>
</ul>
```
For testing purposes, you can press the Play button in the lower-right corner. You won't see your image, but look past that in the interest of testing.

![complex story board][22]

A more complex story board

Navigate through the story, returning to Earth periodically. Notice that a tally of each place you visited appears at the bottom of the Start passage each time you return.

There's nothing explaining why the list of places visited is appearing, though. Can you figure out how to explain the tally of explored passages to the reader?

You could just preface the tally list with an introductory sentence like "So far you have visited:", but when the user first arrives, the list will be empty, so your introductory sentence will be introducing nothing.

A better way to manage it is with one more variable to indicate that the user has left Earth.

Change the `world` passage:
```
This was made in [[Twine]] on [[Linux]].

<<set $offworld to true>>

<<choice Start "Return to Earth.">>
```

Then use another <<if>> macro to detect whether the `$offworld` variable is set to `true`.
The way Twine parses wikitext sometimes results in more blank lines than you intend, so to compress the list of places visited, use the <<nobr>> macro to prevent line breaks.

```
<img src="img/earth.png" alt="An image of the world." />

Hello [[world]].

<<nobr>>
<<if $offworld is true>>
So far you have visited:
<ul>
<<if $twine is true>><li>Planet Twine</li><</if>>
<<if $linux is true>><li>Planet Linux</li><</if>>
<<if $bsd is true>><li>Planet BSD</li><</if>>
</ul>
<</if>>
<</nobr>>
```
Try playing the story again. Notice that the reader isn't welcomed back to Earth until they have left Earth.

### Explore everything

SugarCube is a powerful engine. Using it is often a question of knowing what's available rather than not having the ability to do something. Luckily, its documentation is very good, so refer to its [macro][23] list often.

You can make further modifications to your project by changing the CSS stylesheet. To do this, click the title of your project in storyboard mode and select **Edit Story Stylesheet**. If you're familiar with JavaScript, you can also script your stories with **Edit Story JavaScript**.

There's no limit to what Twine can do as your interactive fiction engine. It can create text adventures, and it can serve as a prototype for more complex games, point-and-click RPGs, business presentations, [late night talk show supplements][24], and just about anything else you can imagine. Explore the [Twine wiki][25], take a look at other people's works on the [Interactive Fiction Database][26], and then make your own.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/twine-gaming

作者:[Seth Kenlon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/seth
[1]:https://twinery.org/
[2]:http://i7-dungeon.sourceforge.net/index.html
[3]:https://opensource.com/article/17/6/revisit-colossal-cave-adventure-open-adventure
[4]:https://en.wikipedia.org/wiki/HyperCard
[5]:https://www.mediawiki.org/wiki/Wikitext
[7]:https://opensource.com/sites/default/files/images/life-uploads/start.jpg (starting a story in Twine)
[8]:http://www.motoslave.net/sugarcube/
[9]:https://www.motoslave.net/sugarcube/2
[10]:https://opensource.com/article/17/7/managing-creative-assets-planter
[12]:https://opensource.com/sites/default/files/images/life-uploads/add.png (install sugarcube add format)
[14]:https://opensource.com/sites/default/files/images/life-uploads/format.png (story format sugarcube)
[16]:https://opensource.com/sites/default/files/images/life-uploads/dir.png (Creating directories in Twine)
[17]:https://openclipart.org/detail/10912/earth-globe-oceania
[19]:https://opensource.com/sites/default/files/images/life-uploads/sugarcube.png (view media sugarcube twine)
[20]:https://opensource.com/article/17/10/python-101
[22]:https://opensource.com/sites/default/files/images/life-uploads/complexer_0.png (complex story board)
[23]:https://www.motoslave.net/sugarcube/2/docs/macros.html
[24]:http://www.cbs.com/shows/the-late-show-with-stephen-colbert/escape-from-the-man-sized-cabinet/
[25]:https://twinery.org/wiki/twine2:guide
[26]:http://ifdb.tads.org/
@ -0,0 +1,80 @@
Linux rmdir Command for Beginners (with Examples)
======

So we've already discussed [the rm command][1] that's primarily used for deleting files and directories from the Linux command line. However, there's another, related command line utility that is specifically aimed at removing directories. The tool in question is **rmdir**, and in this tutorial, we will discuss the basics of it using some easy to understand examples.

#### Linux rmdir command

As the name suggests, the rmdir command is focused on removing directories, though only empty ones. Following is its syntax:

```
rmdir [OPTION]... DIRECTORY...
```

And here's how the man page explains it:
```
Remove the DIRECTORY(ies), if they are empty.
```

The following Q&A-styled examples should give you a good idea of how this utility works.
#### Q1. How does rmdir work?

It's pretty straightforward: just pass the directory name as input to the command. For example:

```
rmdir test-dir
```

[![How rmdir works][2]][3]
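A minimal end-to-end check of this behavior (the directory name `demo-dir` is arbitrary):

```shell
mkdir demo-dir     # create an empty directory
rmdir demo-dir     # remove it; this succeeds because it is empty
test ! -d demo-dir && echo "demo-dir removed"
```

This prints `demo-dir removed`, confirming the directory is gone.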
#### Q2. How to make rmdir ignore non-empty directories?

By default, the rmdir command throws an error if you try deleting a non-empty directory. However, if you want, you can suppress this behavior of rmdir using the --ignore-fail-on-non-empty option.

For example:

[![How to make rmdir ignore non-empty directories][4]][5]
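A quick sketch of what the option does (directory names are arbitrary; `--ignore-fail-on-non-empty` is a GNU coreutils option, so this assumes a GNU/Linux system):

```shell
mkdir full-dir
touch full-dir/file.txt                      # make the directory non-empty
rmdir --ignore-fail-on-non-empty full-dir    # exits 0, but leaves the directory alone
test -d full-dir && echo "full-dir still exists"
```

Note that the option only suppresses the error status; it does not make rmdir delete anything.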
#### Q3. How to make rmdir remove parent directories as well?

Just like in the case of [mkdir][6], you can also ask rmdir to perform its operation on parent directories. That means you can delete the parent directories of a directory in one go. This feature is accessible through the -p command line option.

For example, the following command will delete both the 'test' and 'test/test-dir' directories.

```
rmdir -p test/test-dir/
```

**Note**: For this operation to work, the parent directories must not contain anything other than the empty directory being deleted.
|
||||
|
||||
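A quick round trip you can run to verify this behavior (directory names are arbitrary):

```shell
# Create a nested pair of empty directories, then remove both at once
mkdir -p test/test-dir
rmdir -p test/test-dir

# Both levels are gone now
[ -d test ] && echo "test still exists" || echo "test removed"
# → test removed
```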
#### Q4. What is the difference between rmdir and rm -r?

If you remember, you can also delete directories using the rm command by enabling its -r option. So what's the difference between that and rmdir? Well, the answer is that rmdir only works on empty directories - there is no way whatsoever to make rmdir delete non-empty directories.

So rmdir is a useful tool in those situations where you would otherwise need to check whether a directory is empty before deleting it.
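The difference is easy to demonstrate in a terminal (directory and file names are arbitrary):

```shell
mkdir -p project
touch project/data.txt

# rmdir refuses to touch a non-empty directory and exits non-zero
rmdir project 2>/dev/null || echo "rmdir refused"
# → rmdir refused

# rm -r removes the directory along with everything inside it
rm -r project
[ -d project ] || echo "project removed"
# → project removed
```

This built-in refusal is exactly what makes rmdir the safer choice when you only want to clean up directories you believe are already empty.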
#### Conclusion

As you'll agree, rmdir isn't a complex command to understand and use. Plus, it offers only a handful of command line options. We've discussed almost all of them here, so practice the examples mentioned in this article, and you should be good to go. Just in case you need it, [here's the man page][7] for rmdir.
--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-rmdir-command/

作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/linux-rm-command/
[2]:https://www.howtoforge.com/images/command-tutorial/rm-basic-usage1.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/rm-basic-usage1.png
[4]:https://www.howtoforge.com/images/command-tutorial/rmdir-ignore-nonempty.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/rmdir-ignore-nonempty.png
[6]:https://www.howtoforge.com/linux-mkdir-command/
[7]:https://linux.die.net/man/1/rmdir
sources/tech/20180210 How to create AWS ec2 key using Ansible.md (new file, 192 lines)
How to create AWS ec2 key using Ansible
======

I wanted to create an Amazon EC2 key pair using the Ansible tool. I do not want to use the AWS CLI. Is it possible to create an AWS EC2 key using Ansible?

You need to use the ec2_key module of Ansible. This module has a dependency on python-boto version 2.5 or above. boto is nothing but a Python interface to Amazon Web Services using its API. You can use boto for services like Amazon S3, Amazon EC2 and others. In short, you need Ansible installed along with the boto module. Let us see how to install boto and use it with Ansible.
### Step 1 – [Install latest version of Ansible on Ubuntu Linux][1]

You must [configure the PPA on your system to install the latest version of ansible][2]. PPAs (Personal Package Archives) allow developers to upload Ubuntu source packages to be built and published as an apt repository by Launchpad. Type the following [apt-get command][3] or [apt command][4]:

```
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install software-properties-common
```

Next add ppa:ansible/ansible to your system's software sources:

```
$ sudo apt-add-repository ppa:ansible/ansible
```

Update your repos and install ansible:

```
$ sudo apt update
$ sudo apt install ansible
```

Install boto:

```
$ pip3 install boto3
```
#### A note about installing Ansible on CentOS/RHEL 7.x

You [need to set up the EPEL repo on CentOS and RHEL 7.x][5] and install Ansible with the [yum command][6]:

```
$ cd /tmp
$ wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ ls *.rpm
$ sudo yum install epel-release-latest-7.noarch.rpm
$ sudo yum install ansible
```

Install boto:

```
$ pip install boto3
```
### Step 2 – Configure boto

You need to set up AWS credentials/API keys. See the "[AWS Security Credentials][7]" documents on how to create a programmatic API key. Create a directory called ~/.aws using the mkdir command and set up the API keys:

```
$ mkdir -pv ~/.aws/
$ vi ~/.aws/credentials
```

```
[default]
aws_access_key_id = YOUR-ACCESS-KEY-HERE
aws_secret_access_key = YOUR-SECRET-ACCESS-KEY-HERE
```

Also set up the default [AWS region][8]:

```
$ vi ~/.aws/config
```

Sample config:

```
[default]
region = us-west-1
```
Test your boto setup with the API by creating a simple Python program named test-boto.py:

```
#!/usr/bin/python3
# A simple program to test boto and print s3 bucket names
import boto3
t = boto3.resource('s3')
for b in t.buckets.all():
    print(b.name)
```

Run it as follows:

```
$ python3 test-boto.py
```

Sample outputs:

```
nixcraft-images
nixcraft-backups-cbz
nixcraft-backups-forum
```

The output confirms that Python boto is working correctly with the AWS API.
### Step 3 – Create AWS ec2 key using Ansible

Create a playbook named ec2.key.yml as follows:

```
---
- hosts: local
  connection: local
  gather_facts: no
  tasks:

  - name: Create a new EC2 key
    ec2_key:
      name: nixcraft-key
      region: us-west-1
    register: ec2_key_result

  - name: Save private key
    copy: content="{{ ec2_key_result.key.private_key }}" dest="./aws.nixcraft.pem" mode=0600
    when: ec2_key_result.changed
```

Where,
  * ec2_key: – maintains the EC2 key pair.
  * name: nixcraft-key – the name of the key pair.
  * region: us-west-1 – the AWS region to use.
  * register: ec2_key_result – saves the result of the generated key to the ec2_key_result variable.
  * copy: content="{{ ec2_key_result.key.private_key }}" dest="./aws.nixcraft.pem" mode=0600 – writes the contents of ec2_key_result.key.private_key to a file named aws.nixcraft.pem in the current directory and sets the file mode to 0600 (Unix file permissions).
  * when: ec2_key_result.changed – only saves the file when ec2_key_result.changed is true, so we don't overwrite our key file.
You must also create a hosts file as follows:

```
[local]
localhost
```

Run your playbook as follows:

```
$ ansible-playbook -i hosts ec2.key.yml
```

![](https://www.cyberciti.biz/media/new/faq/2018/02/How-to-create-AWS-ec2-key-using-Ansible.jpg)

At the end you should have a private key named aws.nixcraft.pem that you can use with AWS EC2. To view your key, use the [cat command][9]:

```
$ cat aws.nixcraft.pem
```

If you have an EC2 VM, use it as follows:

```
$ ssh -i aws.nixcraft.pem user@ec2-vm-dns-name
```
#### Finding out info about Python data structure variable names such as ec2_key_result.changed and ec2_key_result.key.private_key

You must be wondering where variable names such as ec2_key_result.changed and ec2_key_result.key.private_key come from. Are they defined somewhere? The values are returned from the API calls. Simply run the ansible-playbook command with the -v option to see such info:

```
$ ansible-playbook -v -i hosts ec2.key.yml
```

![](https://www.cyberciti.biz/media/new/faq/2018/02/ansible-verbose-output.jpg)
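To build intuition for what a registered result like ec2_key_result looks like, the sketch below mimics its shape with a plain Python dict - the values are made-up placeholders, not real API output. Jinja2 dotted lookups such as ec2_key_result.key.private_key are just nested key accesses on this kind of structure:

```python
# Hypothetical stand-in for the JSON the ec2_key module returns;
# a real fingerprint and key material will of course differ.
ec2_key_result = {
    "changed": True,
    "key": {
        "name": "nixcraft-key",
        "fingerprint": "9b:16:aa:...:c2",  # shortened placeholder
        "private_key": "-----BEGIN RSA PRIVATE KEY-----\n...",
    },
}

# ec2_key_result.changed in the playbook reads this boolean:
print(ec2_key_result["changed"])        # → True

# ec2_key_result.key.name walks two levels into the structure:
print(ec2_key_result["key"]["name"])    # → nixcraft-key
```

This is why the -v output is so useful: it prints the whole returned structure, so you can see every key path available to your when: conditions and templates.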
### How do I delete a key?

Use the following ec2-key-delete.yml:

```
---
- hosts: local
  connection: local
  gather_facts: no
  tasks:

  - name: Delete an EC2 key
    ec2_key:
      name: nixcraft-key
      region: us-west-1
      # absent means delete the keypair
      state: absent
```

Run it as follows:

```
$ ansible-playbook -i hosts ec2-key-delete.yml
```
### About the author

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and trainer for the Linux operating system/Unix shell scripting. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via the [RSS/XML feed][10]** or [weekly email newsletter][11].
--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-create-aws-ec2-key-using-ansible/

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/faq/how-to-install-and-configure-latest-version-of-ansible-on-ubuntu-linux/
[2]:https://www.cyberciti.biz/faq/ubuntu-sudo-add-apt-repository-command-not-found-error/
[3]:https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html (See Linux/Unix apt-get command examples for more info)
[4]:https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/ (See Linux/Unix apt command examples for more info)
[5]:https://www.cyberciti.biz/faq/installing-rhel-epel-repo-on-centos-redhat-7-x/
[6]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
[7]:https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
[8]:https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
[9]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info)
[10]:https://www.cyberciti.biz/atom/atom.xml
[11]:https://www.cyberciti.biz/subscribe-to-weekly-linux-unix-newsletter-for-sysadmin/