mirror of https://github.com/LCTT/TranslateProject.git
Merge branch 'LCTT-master' (Conflicting files)
commit 7bce641c2b
@ -0,0 +1,199 @@
|
||||
如何在 Linux 系统查询机器最近重启时间
|
||||
======
|
||||
|
||||
在你的 Linux 或类 UNIX 系统中,如何查询系统上次重新启动的日期和时间?又该怎样显示系统关机的日期和时间?`last` 命令可以按照时间从近到远的顺序,列出指定用户、终端和主机名的登录会话,也可以列出指定日期和时间登录的用户。输出的每一行都包括用户名、会话终端、主机名、会话开始和结束的时间,以及会话持续的时间。要查看 Linux 或类 UNIX 系统重启和关机的时间和日期,可以使用下面的命令。
|
||||
|
||||
- `last` 命令
|
||||
- `who` 命令
|
||||
|
||||
|
||||
### 使用 who 命令来查看系统重新启动的时间/日期
|
||||
|
||||
你需要在终端使用 [who][1] 命令来打印有哪些人登录了系统,`who` 命令同时也会显示上次系统启动的时间。要查看系统上次重启的日期和时间,运行:
|
||||
|
||||
```
|
||||
$ who -b
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
system boot 2017-06-20 17:41
|
||||
```
|
||||
|
||||
使用 `last` 命令来查询最近登录到系统的用户和系统重启的时间和日期。输入:
|
||||
|
||||
```
|
||||
$ last reboot | less
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
[![Fig.01: last command in action][2]][2]
|
||||
|
||||
或者,尝试输入:
|
||||
|
||||
```
|
||||
$ last reboot | head -1
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
reboot system boot 4.9.0-3-amd64 Sat Jul 15 19:19 still running
|
||||
```
|
||||
|
||||
`last` 命令通过查看文件 `/var/log/wtmp` 来显示自 wtmp 文件被创建时的所有登录(和登出)的用户。每当系统重新启动时,这个伪用户 `reboot` 就会登录。因此,`last reboot` 命令将会显示自该日志文件被创建以来的所有重启信息。
|
||||
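如果 `wtmp` 日志已经被轮转(logrotate)归档,当前文件里可能只剩下最近的记录。下面是一个示意性的例子(假设系统上存在轮转出来的 `/var/log/wtmp.1` 文件,实际文件名因发行版的日志轮转配置而异),用 `-f` 选项让 `last` 从指定的日志文件中读取:

```
$ last -f /var/log/wtmp.1 reboot
```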
|
||||
### 查看系统上次关机的时间和日期
|
||||
|
||||
可以使用下面的命令来显示上次关机的日期和时间:
|
||||
|
||||
```
|
||||
$ last -x|grep shutdown | head -1
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
shutdown system down 2.6.15.4 Sun Apr 30 13:31 - 15:08 (01:37)
|
||||
```
|
||||
|
||||
命令中,
|
||||
|
||||
* `-x`:显示系统关机和运行等级改变信息
|
||||
|
||||
|
||||
这里是 `last` 命令的其它的一些选项:
|
||||
|
||||
```
|
||||
$ last
|
||||
$ last -x
|
||||
$ last -x reboot
|
||||
$ last -x shutdown
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
![Fig.01: How to view last Linux System Reboot Date/Time ][3]
|
||||
|
||||
### 查看系统正常的运行时间
|
||||
|
||||
评论区的读者建议的另一个命令如下:
|
||||
|
||||
```
|
||||
$ uptime -s
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
2017-06-20 17:41:51
|
||||
```
|
||||
|
||||
### OS X/Unix/FreeBSD 查看最近重启和关机时间的命令示例
|
||||
|
||||
在终端输入下面的命令:
|
||||
|
||||
```
|
||||
$ last reboot
|
||||
```
|
||||
|
||||
在 OS X 示例输出结果如下:
|
||||
|
||||
```
|
||||
reboot ~ Fri Dec 18 23:58
|
||||
reboot ~ Mon Dec 14 09:54
|
||||
reboot ~ Wed Dec 9 23:21
|
||||
reboot ~ Tue Nov 17 21:52
|
||||
reboot ~ Tue Nov 17 06:01
|
||||
reboot ~ Wed Nov 11 12:14
|
||||
reboot ~ Sat Oct 31 13:40
|
||||
reboot ~ Wed Oct 28 15:56
|
||||
reboot ~ Wed Oct 28 11:35
|
||||
reboot ~ Tue Oct 27 00:00
|
||||
reboot ~ Sun Oct 18 17:28
|
||||
reboot ~ Sun Oct 18 17:11
|
||||
reboot ~ Mon Oct 5 09:35
|
||||
reboot ~ Sat Oct 3 18:57
|
||||
|
||||
|
||||
wtmp begins Sat Oct 3 18:57
|
||||
```
|
||||
|
||||
查看关机日期和时间,输入:
|
||||
|
||||
```
|
||||
$ last shutdown
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
shutdown ~ Fri Dec 18 23:57
|
||||
shutdown ~ Mon Dec 14 09:53
|
||||
shutdown ~ Wed Dec 9 23:20
|
||||
shutdown ~ Tue Nov 17 14:24
|
||||
shutdown ~ Mon Nov 16 21:15
|
||||
shutdown ~ Tue Nov 10 13:15
|
||||
shutdown ~ Sat Oct 31 13:40
|
||||
shutdown ~ Wed Oct 28 03:10
|
||||
shutdown ~ Sun Oct 18 17:27
|
||||
shutdown ~ Mon Oct 5 09:23
|
||||
|
||||
|
||||
wtmp begins Sat Oct 3 18:57
|
||||
```
|
||||
|
||||
### 如何查看是谁重启和关闭机器?
|
||||
|
||||
你需要[启用 psacct 服务然后运行下面的命令][4]来查看执行过的命令(包括用户名)。在终端输入 [lastcomm][5] 命令查看信息:
|
||||
|
||||
```
|
||||
# lastcomm userNameHere
|
||||
# lastcomm commandNameHere
|
||||
# lastcomm | more
|
||||
# lastcomm reboot
|
||||
# lastcomm shutdown
|
||||
### 或者查看重启和关机时间
|
||||
# lastcomm | egrep 'reboot|shutdown'
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
reboot S X root pts/0 0.00 secs Sun Dec 27 23:49
|
||||
shutdown S root pts/1 0.00 secs Sun Dec 27 23:45
|
||||
```
|
||||
|
||||
我们可以看到 root 用户在当地时间 12 月 27 日星期日 23:49 通过 pts/0 重新启动了机器。
|
||||
|
||||
### 参见
|
||||
|
||||
* 更多信息可以查看 man 手册(`man last`)和参考文章 [如何在 Linux 服务器上使用 tuptime 命令查看系统历史运行时间及其统计信息][6]。
|
||||
|
||||
### 关于作者
|
||||
|
||||
作者是 nixCraft 的创立者,同时也是一名经验丰富的系统管理员,还是 Linux/类 Unix 操作系统 shell 脚本的培训师。他曾与全球各行各业的客户合作过,包括 IT、教育、国防和空间研究以及非营利部门等等。你可以在 [Twitter][7]、[Facebook][8]、[Google+][9] 关注他。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/linux-last-reboot-time-and-date-find-out.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/unix-linux-who-command-examples-syntax-usage/ "See Linux/Unix who command examples for more info"
|
||||
[2]:https://www.cyberciti.biz/tips/wp-content/uploads/2006/04/last-reboot.jpg
|
||||
[3]:https://www.cyberciti.biz/media/new/tips/2006/04/check-last-time-system-was-rebooted.jpg
|
||||
[4]:https://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
|
||||
[5]:https://www.cyberciti.biz/faq/linux-unix-lastcomm-command-examples-usage-syntax/ "See Linux/Unix lastcomm command examples for more info"
|
||||
[6]:https://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/
|
||||
[7]:https://twitter.com/nixcraft
|
||||
[8]:https://facebook.com/nixcraft
|
||||
[9]:https://plus.google.com/+CybercitiBiz
|
@ -0,0 +1,119 @@
|
||||
如何在 Linux/Unix 的 Bash 中打开或关闭 ls 命令颜色显示
|
||||
======
|
||||
|
||||
如何在 Linux 或类 Unix 操作系统上的 bash shell 中打开或关闭文件名称颜色(`ls` 命令颜色)?
|
||||
|
||||
大多数现代 Linux 发行版和 Unix 系统都有一个定义了文件名称颜色的别名。然后,`ls` 命令负责在屏幕上显示文件、目录和其他文件系统对象的颜色。
|
||||
|
||||
默认情况下,文件类型不会用颜色区分。你需要在 Linux 上将 `--color` 选项传递给 `ls` 命令。如果你正在使用基于 OS X 或 BSD 的系统,请将 `-G` 选项传递给 `ls` 命令。打开或关闭颜色的语法如下。
|
||||
|
||||
### 如何关闭 ls 命令的颜色
|
||||
|
||||
输入以下命令:
|
||||
|
||||
```
|
||||
$ ls --color=none
|
||||
```
|
||||
|
||||
或者用 `unalias` 命令删除别名:
|
||||
|
||||
```
|
||||
$ unalias ls
|
||||
```
|
||||
|
||||
请注意,大多数发行版都定义了下面这样的 bash shell 别名,让 `ls` 命令显示颜色。可以组合使用 [alias 命令][1]和 [grep 命令][2] 来查看它:
|
||||
|
||||
```
|
||||
$ alias | grep ls
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
```
|
||||
alias l='ls -CF'
|
||||
alias la='ls -A'
|
||||
alias ll='ls -alF'
|
||||
alias ls='ls --color=auto'
|
||||
```
|
||||
|
||||
### 如何打开 ls 命令的颜色
|
||||
|
||||
使用以下任何一个命令:
|
||||
|
||||
```
|
||||
$ ls --color=auto
|
||||
$ ls --color=tty
|
||||
```
|
||||
|
||||
如果你想要的话,[可以定义 bash shell 别名][3]:
|
||||
|
||||
```
|
||||
alias ls='ls --color=auto'
|
||||
```
|
||||
|
||||
你可以在 `~/.bash_profile` 或 [~/.bashrc 文件][4] 中添加或删除 `ls` 别名。使用文本编辑器(如 vi)编辑文件:
|
||||
|
||||
```
|
||||
$ vi ~/.bashrc
|
||||
```
|
||||
|
||||
追加下面的代码:
|
||||
|
||||
```
|
||||
# my ls command aliases #
|
||||
alias ls='ls --color=auto'
|
||||
```
|
||||
|
||||
[在 Vi/Vim 文本编辑器中保存并关闭文件即可][5]。
|
||||
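保存后,改动并不会立即影响当前已打开的 shell。下面是一个小示例(假设你使用的是 bash,并且把别名添加到了 `~/.bashrc` 中),用 `source` 命令重新加载配置,让新的别名立即生效:

```
$ source ~/.bashrc
$ ls
```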
|
||||
### 关于 *BSD/macOS/Apple OS X 中 ls 命令的注意点
|
||||
|
||||
将 `-G` 选项传递给 `ls` 命令以在 {Free、Net、Open} BSD 或 macOS 和 Apple OS X Unix 操作系统家族上启用彩色输出:
|
||||
|
||||
```
|
||||
$ ls -G
|
||||
```
|
||||
|
||||
示例输出:
|
||||
|
||||
[![How to enable colorized output for the ls command in Mac OS X Terminal][6]][7]
|
||||
|
||||
*如何在 Mac OS X 终端中为 ls 命令启用彩色输出*
|
||||
|
||||
### 如何临时跳过 ls 命令彩色输出?
|
||||
|
||||
你可以使用以下任何一种语法[暂时禁用 bash shell 别名][8]:
|
||||
|
||||
```
|
||||
\ls
|
||||
/bin/ls
|
||||
command ls
|
||||
'ls'
|
||||
```
|
||||
|
||||
#### 关于作者
|
||||
|
||||
作者是 nixCraft 的创建者,是一名经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及 IT、教育、国防和太空研究以及非营利部门等多个行业合作。在 [Twitter][9]、[Facebook][10]、[Google+][11] 上关注他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/how-to-turn-on-or-off-colors-in-bash/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html (See Linux/Unix alias command examples for more info)
|
||||
[2]:https://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/ (See Linux/Unix grep command examples for more info)
|
||||
[3]:https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html
|
||||
[4]:https://bash.cyberciti.biz/guide/~/.bashrc
|
||||
[5]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
|
||||
[6]:https://www.cyberciti.biz/media/new/faq/2016/01/color-ls-for-Mac-OS-X.jpg
|
||||
[7]:https://www.cyberciti.biz/faq/apple-mac-osx-terminal-color-ls-output-option/
|
||||
[8]:https://www.cyberciti.biz/faq/bash-shell-temporarily-disable-an-alias/
|
||||
[9]:https://twitter.com/nixcraft
|
||||
[10]:https://facebook.com/nixcraft
|
||||
[11]:https://plus.google.com/+CybercitiBiz
|
@ -0,0 +1,158 @@
|
||||
在 Linux 上检测 IDE/SATA SSD 硬盘的传输速度
|
||||
======
|
||||
|
||||
你知道你的硬盘在 Linux 下传输有多快吗?不打开电脑的机箱或者机柜,你知道它运行在 SATA I(150 MB/s)、SATA II(300 MB/s)还是 SATA III(600 MB/s,即 6.0 Gb/s)模式下吗?
|
||||
|
||||
你可以使用 `hdparm` 和 `dd` 命令来检测你的硬盘速度。`hdparm` 为 Linux 系统的 ATA/IDE/SATA 设备驱动程序子系统所支持的各种硬盘 ioctl 提供了命令行接口。有些选项只有在较新的内核上才能正常工作(请确保安装了最新的内核)。我也推荐使用最新内核的源代码头文件来编译 `hdparm` 命令。
|
||||
|
||||
### 如何使用 hdparm 命令来检测硬盘的传输速度
|
||||
|
||||
以 root 管理员权限登录并执行命令:
|
||||
|
||||
```
|
||||
$ sudo hdparm -tT /dev/sda
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
$ sudo hdparm -tT /dev/hda
|
||||
```
|
||||
|
||||
输出:
|
||||
|
||||
```
|
||||
/dev/sda:
|
||||
Timing cached reads: 7864 MB in 2.00 seconds = 3935.41 MB/sec
|
||||
Timing buffered disk reads: 204 MB in 3.00 seconds = 67.98 MB/sec
|
||||
```
|
||||
|
||||
为了检测得更精准,这个操作应该**重复 2-3 次**。这显示了无需访问磁盘、直接从 Linux 缓冲区缓存中读取的速度。这个测量实际上是被测系统的处理器、高速缓存和存储器吞吐量的指标。这是一个 [for 循环的例子][1],连续运行测试 3 次:
|
||||
|
||||
```
|
||||
for i in 1 2 3; do hdparm -tT /dev/hda; done
|
||||
```
|
||||
|
||||
这里,
|
||||
|
||||
* `-t` :执行设备读取时序
|
||||
* `-T` :执行缓存读取时间
|
||||
* `/dev/sda` :硬盘设备文件
|
||||
|
||||
|
||||
要 [找出 SATA 硬盘的连接速度][2] ,请输入:
|
||||
|
||||
```
|
||||
sudo hdparm -I /dev/sda | grep -i speed
|
||||
```
|
||||
|
||||
输出:
|
||||
|
||||
```
|
||||
* Gen1 signaling speed (1.5Gb/s)
|
||||
* Gen2 signaling speed (3.0Gb/s)
|
||||
* Gen3 signaling speed (6.0Gb/s)
|
||||
|
||||
```
|
||||
|
||||
以上输出表明我的硬盘可以使用 1.5 Gb/s、3.0 Gb/s 或 6.0 Gb/s 的速度。请注意,你的 BIOS/主板必须支持 SATA-II/III 才行:
|
||||
|
||||
```
|
||||
$ dmesg | grep -i sata | grep 'link up'
|
||||
```
|
||||
|
||||
[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]
|
||||
|
||||
### dd 命令
|
||||
|
||||
你使用 `dd` 命令也可以获取到相应的速度信息:
|
||||
|
||||
```
|
||||
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
|
||||
rm /tmp/output.img
|
||||
```
|
||||
|
||||
输出:
|
||||
|
||||
```
|
||||
262144+0 records in
|
||||
262144+0 records out
|
||||
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, 90.8 MB/s
|
||||
```
|
||||
|
||||
下面是 [推荐的 dd 命令参数][4]:
|
||||
|
||||
```
|
||||
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
|
||||
|
||||
## GNU dd syntax ##
|
||||
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
|
||||
|
||||
## OR alternate syntax for GNU/dd ##
|
||||
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
|
||||
```
|
||||
|
||||
这是上面第三条命令的输出结果:
|
||||
|
||||
```
|
||||
1+0 records in
|
||||
1+0 records out
|
||||
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
|
||||
```
|
||||
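另外需要注意,重复读取同一个文件时,Linux 的页缓存会使读取速度看起来偏高。下面是一个示意性的做法(假设你以 root 身份操作),在读测试之前先清空页缓存,让结果更接近磁盘的真实读取速度:

```
## 先把脏页写回磁盘,再清空页缓存、目录项和 inode 缓存 ##
# sync
# echo 3 > /proc/sys/vm/drop_caches
```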
|
||||
### “磁盘与存储” - GUI 工具
|
||||
|
||||
你还可以使用位于“系统 > 管理 > 磁盘实用程序”菜单中的磁盘实用程序。请注意,在最新版本的 GNOME 中,它简称为“磁盘”。
|
||||
|
||||
#### 如何使用 Linux 上的“磁盘”测试我的硬盘的性能?
|
||||
|
||||
要测试硬盘的速度:
|
||||
|
||||
1. 从“活动概览”中打开“磁盘”(按键盘上的 super 键并键入“disks”)
|
||||
2. 从“左侧窗格”的列表中选择“磁盘”
|
||||
3. 选择菜单按钮并从菜单中选择“测试磁盘性能……”
|
||||
4. 单击“开始性能测试……”并根据需要调整传输速率和访问时间参数。
|
||||
5. 选择“开始性能测试”来测试从磁盘读取数据的速度。如果需要管理员权限,请按提示输入密码。
|
||||
|
||||
以上操作的快速视频演示:
|
||||
|
||||
https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4
|
||||
|
||||
#### 只读 Benchmark (安全模式下)
|
||||
|
||||
然后,选择 > 只读:
|
||||
|
||||
![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]
|
||||
|
||||
上述选项不会销毁任何数据。
|
||||
|
||||
#### 读写的 Benchmark(所有数据将丢失,所以要小心)
|
||||
|
||||
访问“系统 > 管理 > 磁盘实用程序”菜单,单击“性能测试”,再单击“开始读/写性能测试”按钮:
|
||||
|
||||
![Fig.02:Linux Measuring read rate, write rate and access time][6]
|
||||
|
||||
### 作者
|
||||
|
||||
作者是 nixCraft 的创造者,是经验丰富的系统管理员,也是 Linux 操作系统/Unix shell 脚本的培训师。他曾与全球客户以及 IT、教育、国防和空间研究以及非营利部门等多个行业合作。可以在 [Twitter][7]、[Facebook][8]、[Google+][9] 上关注他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[MonkeyDEcho](https://github.com/MonkeyDEcho)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
|
||||
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
|
||||
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
|
||||
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
|
||||
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
|
||||
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
|
||||
[7]:https://twitter.com/nixcraft
|
||||
[8]:https://facebook.com/nixcraft
|
||||
[9]:https://plus.google.com/+CybercitiBiz
|
@ -1,30 +1,29 @@
|
||||
如何列出所有的 Bash Shell 内置命令
|
||||
======
|
||||
|
||||
内置命令包含在 bash shell 本身里面。我该如何在 Linux / Apple OS X / *BSD / Unix 类操作系统列出所有的内置 bash 命令,而不用去读大篇的 bash 操作说明页?
|
||||
|
||||
shell 内置命令就是一个命令或一个函数,从 shell 中调用,它直接在 shell 中执行。 bash shell 直接执行该命令而无需调用其他程序。你可以使用 `help` 命令查看 Bash 内置命令的信息。以下是几种不同类型的内置命令。
|
||||
|
||||
### 内置命令的类型
|
||||
|
||||
1. Bourne Shell 内置命令:内置命令继承自 Bourne Shell。
|
||||
2. Bash 内置命令:特定于 Bash 的内置命令表。
|
||||
3. 修改 Shell 行为:修改 shell 属性和可选行为的内置命令。
|
||||
4. 特别的内置命令:由 POSIX 特别分类的内置命令。
|
||||
|
||||
### 如何查看所有 bash 内置命令
|
||||
|
||||
输入以下命令:
|
||||
|
||||
|
||||
```
|
||||
$ help
|
||||
$ help | less
|
||||
$ help | grep read
|
||||
```
|
||||
|
||||
样例输出:
|
||||
|
||||
```
|
||||
GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu)
|
||||
These shell commands are defined internally. Type `help' to see this list.
|
||||
@ -74,20 +73,32 @@ A star (*) next to a name means that the command is disabled.
|
||||
help [-dms] [pattern ...] { COMMANDS ; }
|
||||
```
|
||||
|
||||
另外一种选择是使用下列命令:
|
||||
|
||||
```
|
||||
compgen -b
|
||||
compgen -b | more
|
||||
```
|
||||
|
||||
### 查看 Bash 的内置命令信息
|
||||
|
||||
运行以下命令来获得详细信息:
|
||||
|
||||
```
|
||||
help command
|
||||
help read
|
||||
```
|
||||
要仅得到所有带简短描述的内置命令的列表,执行如下:
|
||||
|
||||
```
|
||||
$ help -d
|
||||
```
|
||||
|
||||
### 查找内置命令的语法和其他选项
|
||||
|
||||
使用下列语法来了解内置命令的更多信息:
|
||||
|
||||
```
|
||||
help name
|
||||
help cd
|
||||
@ -97,7 +108,8 @@ help read
|
||||
help :
|
||||
```
|
||||
|
||||
样例输出:
|
||||
|
||||
```
|
||||
:: :
|
||||
Null command.
|
||||
@ -108,9 +120,10 @@ Sample outputs:
|
||||
Always succeeds
|
||||
```
|
||||
|
||||
### 找出一个命令是内部的(内置)还是外部的
|
||||
|
||||
使用 `type` 命令或 `command` 命令:
|
||||
|
||||
```
|
||||
type -a command-name-here
|
||||
type -a cd
|
||||
@ -119,13 +132,14 @@ type -a :
|
||||
type -a ls
|
||||
```
|
||||
|
||||
或者:
|
||||
|
||||
```
|
||||
type -a cd uname : ls uname
|
||||
```
|
||||
|
||||
样例输出:
|
||||
|
||||
```
|
||||
cd is a shell builtin
|
||||
uname is /bin/uname
|
||||
@ -140,7 +154,8 @@ l ()
|
||||
|
||||
```
|
||||
|
||||
或者:
|
||||
|
||||
```
|
||||
command -V ls
|
||||
command -V cd
|
||||
@ -149,17 +164,17 @@ command -V foo
|
||||
|
||||
[![View list bash built-ins command info on Linux or Unix][1]][1]
|
||||
|
||||
### 关于作者
|
||||
|
||||
作者是 nixCraft 网站的发起人和经验丰富的系统管理员,以及 Linux 操作系统/Unix shell 脚本编程培训师。他与全球客户以及包括 IT、教育、国防和空间研究以及非营利部门在内的各个行业合作。可以在 [Twitter][2]、[Facebook][3]、 [Google+][4] 上关注他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/linux-unix-bash-shell-list-all-builtin-commands/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[KarenMrzhang](https://github.com/KarenMrzhang)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,108 @@
|
||||
Manjaro Gaming:当 Manjaro 的才华遇上 Linux 游戏
|
||||
======
|
||||
|
||||
[![见见 Manjaro Gaming, 一个专门为游戏者设计的 Linux 发行版,带有 Manjaro 的所有才能。][1]][1]
|
||||
|
||||
[在 Linux 上玩转游戏][2]?没错,这是非常可行的,而且现在有一个正在为游戏人群打造的新 Linux 发行版。
|
||||
|
||||
Manjaro Gaming 是一个专门为游戏人群设计的、带有 Manjaro 所有才能的发行版。之前用过 Manjaro Linux 的人一定知道,为什么这对于游戏人群来说是一个如此好的消息。
|
||||
|
||||
[Manjaro][3] 是一个 Linux 发行版,它基于最流行的 Linux 发行版之一 —— [Arch Linux][4]。 Arch Linux 因它的前沿性所带来的轻量、强大、高度定制和最新的体验而闻名于世。尽管这些都非常赞,但是也正是因为 Arch Linux 提倡这种 DIY (do it yourself)方式,导致一个主要的缺点,那就是用户想用好它,需要处理一定的技术问题。
|
||||
|
||||
Manjaro 把这些要求全都剥离开去,让 Arch 对新手更亲切,同时也为老手提供了 Arch 所有的高端与强大功能。总之,Manjaro 是一个用户友好型的 Linux 发行版,工作起来行云流水。
|
||||
|
||||
Manjaro 之所以强大并且极其适合游戏,原因如下:
|
||||
|
||||
* Manjaro 自动检测计算机的硬件(例如,显卡)
|
||||
* 自动安装必要的驱动和软件(例如,显示驱动)
|
||||
* 预安装播放媒体文件的编码器
|
||||
* 专用的软件库提供完整测试过的稳定软件包。
|
||||
|
||||
Manjaro Gaming 打包了 Manjaro 的所有强大特性以及各种小工具和软件包,以使得在 Linux 上做游戏即顺畅又享受。
|
||||
|
||||
![Manjaro Gaming 内部][5]
|
||||
|
||||
### 优化
|
||||
|
||||
Manjaro Gaming 做了一些优化:
|
||||
|
||||
* Manjaro Gaming 使用高度定制化的 XFCE 桌面环境,拥有一个黑暗风格主题。
|
||||
* 禁用睡眠模式,防止用手柄玩游戏或者观看长过场动画时计算机自动休眠。
|
||||
|
||||
### 软件
|
||||
|
||||
维持 Manjaro 工作起来行云流水的传统,Manjaro Gaming 打包了各种开源软件包,提供游戏人群经常需要用到的功能。其中一部分软件有:
|
||||
|
||||
* [KdenLIVE][6]:用于编辑游戏视频的视频编辑软件
|
||||
* [Mumble][7]:给游戏人群使用的语音聊天软件
|
||||
* [OBS Studio][8]:用于录制视频或在 [Twitch][9] 上直播游戏用的软件
|
||||
* [OpenShot][10]:Linux 上强大的视频编辑器
|
||||
* [PlayOnLinux][11]:使用 [Wine][12] 作为后端,在 Linux 上运行 Windows 游戏的软件
|
||||
* [Shutter][13]:多种功能的截图工具
|
||||
|
||||
### 模拟器
|
||||
|
||||
Manjaro Gaming 自带很多的游戏模拟器:
|
||||
|
||||
* [DeSmuME][14]:Nintendo DS 任天堂 DS 模拟器
|
||||
* [Dolphin Emulator][15]:GameCube 和 Wii 模拟器
|
||||
* [DOSBox][16]:DOS 游戏模拟器
|
||||
* [FCEUX][17]:任天堂娱乐系统(NES)、 红白机(FC)和 FC 磁盘系统(FDS)模拟器
|
||||
* Gens/GS:世嘉模拟器
|
||||
* [PCSXR][18]:PlayStation 模拟器
|
||||
* [PCSX2][19]:Playstation 2 模拟器
|
||||
* [PPSSPP][20]:PSP 模拟器
|
||||
* [Stella][21]:Atari 2600 VCS (雅达利)模拟器
|
||||
* [VBA-M][22]:Gameboy 和 GameboyAdvance 模拟器
|
||||
* [Yabause][23]:世嘉土星模拟器
|
||||
* [ZSNES][24]:超级任天堂模拟器
|
||||
|
||||
### 其它
|
||||
|
||||
还有一些终端插件 —— Color、ILoveCandy 和 Screenfetch。也包括带有 Retro Conky(LCTT 译注:复古 Conky)风格的 [Conky 管理器][25]。
|
||||
|
||||
注意:上面提到的所有功能并没有全部包含在 Manjaro Gaming 的现行发行版中(版本 16.03)。部分功能计划将在下一版本中纳入 —— Manjaro Gaming 16.06(LCTT 译注:本文发表于 2016 年 5 月)。
|
||||
|
||||
### 下载
|
||||
|
||||
Manjaro Gaming 16.06 将会是 Manjaro Gaming 的第一个正式版本。如果你现在就有兴趣尝试,你可以在 Sourceforge 的[项目页面][26]中下载。去那里然后下载它的 ISO 文件吧。
|
||||
|
||||
你觉得 Gaming Linux 发行版怎么样?想尝试吗?告诉我们!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/manjaro-gaming-linux/
|
||||
|
||||
作者:[Munif Tanjim][a]
|
||||
译者:[XLCYun](https://github.com/XLCYun)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/munif/
|
||||
[1]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming.jpg
|
||||
[2]:https://linux.cn/article-7316-1.html
|
||||
[3]:https://manjaro.github.io/
|
||||
[4]:https://www.archlinux.org/
|
||||
[5]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming-Inside-1024x576.png
|
||||
[6]:https://kdenlive.org/
|
||||
[7]:https://www.mumble.info
|
||||
[8]:https://obsproject.com/
|
||||
[9]:https://www.twitch.tv/
|
||||
[10]:http://www.openshot.org/
|
||||
[11]:https://www.playonlinux.com
|
||||
[12]:https://www.winehq.org/
|
||||
[13]:http://shutter-project.org/
|
||||
[14]:http://desmume.org/
|
||||
[15]:https://dolphin-emu.org
|
||||
[16]:https://www.dosbox.com/
|
||||
[17]:http://www.fceux.com/
|
||||
[18]:https://pcsxr.codeplex.com
|
||||
[19]:http://pcsx2.net/
|
||||
[20]:http://www.ppsspp.org/
|
||||
[21]:http://stella.sourceforge.net/
|
||||
[22]:http://vba-m.com/
|
||||
[23]:https://yabause.org/
|
||||
[24]:http://www.zsnes.com/
|
||||
[25]:https://itsfoss.com/conky-gui-ubuntu-1304/
|
||||
[26]:https://sourceforge.net/projects/mgame/
|
@ -0,0 +1,96 @@
|
||||
使用 iftop 命令监控网络带宽
|
||||
======
|
||||
|
||||
系统管理员需要监控 IT 基础设施来确保一切正常运行。我们既需要监控硬件(也就是内存、硬盘和 CPU 等)的性能,也必须监控我们的网络。我们需要确保网络不被过度使用,否则我们的程序、网站可能无法正常工作。在本教程中,我们将学习使用网络监控工具 `iftop`。
|
||||
|
||||
(**推荐阅读**:[使用 Nagios 进行资源监控][1]、[用于检查系统信息的工具][2]、[要监控的重要日志][3])
|
||||
|
||||
`iftop` 是网络监控工具,它提供实时带宽监控。 `iftop` 测量进出各个套接字连接的总数据量,即它捕获通过网络适配器收到或发出的数据包,然后将这些数据相加以得到使用的带宽。
|
||||
|
||||
### 在 Debian/Ubuntu 上安装
|
||||
|
||||
iftop 存在于 Debian/Ubuntu 的默认仓库中,可以使用下面的命令安装:
|
||||
|
||||
```
|
||||
$ sudo apt-get install iftop
|
||||
```
|
||||
|
||||
### 使用 yum 在 RHEL/Centos 上安装
|
||||
|
||||
要在 CentOS 或 RHEL 上安装 iftop,我们需要启用 EPEL 仓库。要启用仓库,请在终端上运行以下命令:
|
||||
|
||||
**RHEL/CentOS 7:**
|
||||
|
||||
```
|
||||
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
|
||||
```
|
||||
|
||||
**RHEL/CentOS 6(64 位):**
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
**RHEL/CentOS 6 (32 位):**
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
EPEL 仓库安装完成后,我们可以用下面的命令安装 `iftop`:
|
||||
|
||||
```
|
||||
$ yum install iftop
|
||||
```
|
||||
|
||||
这将在你的系统上安装 `iftop`。我们现在将用它来监控我们的网络。
|
||||
|
||||
### 使用 iftop
|
||||
|
||||
可以打开终端窗口,并输入下面的命令使用 `iftop`:
|
||||
|
||||
```
|
||||
$ iftop
|
||||
```
|
||||
|
||||
![network monitoring][5]
|
||||
|
||||
现在你将看到计算机上发生的网络活动。你也可以使用:
|
||||
|
||||
```
|
||||
$ iftop -n
|
||||
```
|
||||
|
||||
这将在屏幕上显示网络信息,但使用 `-n`,则不会显示与 IP 地址相关的名称,只会显示 IP 地址。这个选项能节省一些将 IP 地址解析为名称的带宽。
|
||||
|
||||
我们也可以看到 `iftop` 可以使用的所有命令。运行 `iftop` 后,按下键盘上的 `h` 查看 `iftop` 可以使用的所有命令。
|
||||
|
||||
![network monitoring][7]
|
||||
|
||||
要监控特定的网络接口,我们可以在 `iftop` 后加上接口名:
|
||||
|
||||
```
|
||||
$ iftop -i enp0s3
|
||||
```
|
||||
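如果只想关注某个网段的流量,还可以用 `-F` 选项按 IPv4 网络过滤。下面是一个示意性的例子(这里的 `192.168.1.0/24` 只是一个假设的本地网段,请换成你自己的网段):

```
$ sudo iftop -F 192.168.1.0/24
```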
|
||||
如上所述,你可以使用帮助来查看 `iftop` 的更多选项。上面提到的这些例子,应该足以让你上手监控网络了。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/monitoring-network-bandwidth-iftop-command/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/installing-configuring-nagios-server/
|
||||
[2]:http://linuxtechlab.com/commands-system-hardware-info/
|
||||
[3]:http://linuxtechlab.com/important-logs-monitor-identify-issues/
|
||||
[4]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=661%2C424
|
||||
[5]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-1.jpg?resize=661%2C424
|
||||
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=663%2C416
|
||||
[7]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-help.jpg?resize=663%2C416
|
published/20170511 Working with VI editor - The Basics.md
@ -0,0 +1,100 @@
|
||||
使用 Vi/Vim 编辑器:基础篇
|
||||
=========
|
||||
|
||||
VI 编辑器是一个基于命令行的、功能强大的文本编辑器,最早为 Unix 系统开发,后来也被移植到许多的 Unix 和 Linux 发行版上。
|
||||
|
||||
在 Linux 上还存在着另一个 VI 编辑器的高阶版本 —— VIM(也被称作 VI IMproved)。VIM 只是在 VI 已经很强的功能上添加了更多的功能,这些功能有:
|
||||
|
||||
- 支持更多 Linux 发行版,
|
||||
- 支持多种编程语言,包括 python、c++、perl 等语言的代码块折叠,语法高亮,
|
||||
- 支持通过多种网络协议(包括 http、ssh 等)编辑文件(见列表后的示例),
|
||||
- 支持编辑压缩归档中的文件,
|
||||
- 支持分屏同时编辑多个文件。
|
||||
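下面是一个示意性的例子,演示上面提到的通过网络协议编辑文件的功能(由 VIM 自带的 netrw 插件提供;假设远程主机 `host` 上存在用户 `user`,并且已配置好 SSH 访问):

```
$ vim scp://user@host//etc/hostname
```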
|
||||
接下来我们会讨论 VI/VIM 的命令以及选项。本文出于教学的目的,我们使用 VI 来举例,但所有的命令都可以被用于 VIM。首先我们先介绍 VI 编辑器的两种模式。
|
||||
|
||||
### 命令模式
|
||||
|
||||
命令模式下,我们可以执行保存文件、在 VI 内运行命令、复制/剪切/粘贴操作,以及查找/替换等任务。当我们处于插入模式时,可以按下 `Escape`(`Esc`)键返回命令模式。
|
||||
|
||||
### 插入模式
|
||||
|
||||
在插入模式下,我们可以键入文件内容。在命令模式下按下 `i` 进入插入模式。
|
||||
|
||||
### 创建文件
|
||||
|
||||
我们可以通过下述命令建立一个文件(LCTT 译注:如果该文件存在,则编辑已有文件):
|
||||
|
||||
```
|
||||
$ vi filename
|
||||
```
|
||||
|
||||
文件被创建或者打开后,我们首先处于命令模式,需要进入插入模式才能在文件中输入内容。通过前文,我们已经大致了解了这两种模式。
|
||||
|
||||
### 退出 Vi
|
||||
|
||||
如果是想从插入模式中退出,我们首先需要按下 `Esc` 键进入命令模式。接下来我们可以根据不同的需要分别使用两种命令退出 Vi。
|
||||
|
||||
1. 不保存退出 - 在命令模式中输入 `:q!`
|
||||
2. 保存并退出 - 在命令模式中输入 `:wq`
|
||||
|
||||
### 移动光标
|
||||
|
||||
下面我们来讨论下那些在命令模式中移动光标的命令和选项:
|
||||
|
||||
1. `k` 将光标上移一行
|
||||
2. `j` 将光标下移一行
|
||||
3. `h` 将光标左移一个字母
|
||||
4. `l` 将光标右移一个字母
|
||||
|
||||
注意:如果你想通过一个命令上移或下移多行,或者左移、右移多个字母,你可以使用 `4k` 或者 `5l`,这两条命令会分别上移 4 行或者右移 5 个字母。
|
||||
1. `0` 将光标移动到该行行首
|
||||
2. `$` 将光标移动到该行行尾
|
||||
3. `nG` 将光标移动到第 n 行
|
||||
4. `G` 将光标移动到文件的最后一行
|
||||
5. `{` 将光标移动到上一段
|
||||
6. `}` 将光标移动到下一段
|
||||
|
||||
除此之外还有一些命令可以用于控制光标的移动,但上述列出的这些命令应该就能应付日常工作所需。
|
||||
|
||||
### 编辑文本
|
||||
|
||||
这部分列出的命令用于从命令模式进入插入模式,以编辑当前文件:
|
||||
|
||||
|
||||
1. `i` 在当前光标位置之前插入内容
|
||||
2. `I` 在光标所在行的行首插入内容
|
||||
3. `a` 在当前光标位置之后插入内容
|
||||
4. `A` 在光标所在行的行尾插入内容
|
||||
5. `o` 在当前光标所在行之后添加一行
|
||||
6. `O` 在当前光标所在行之前添加一行
|
||||
|
||||
|
||||
### 删除文本
|
||||
|
||||
|
||||
以下的这些命令都只能在命令模式下使用。如果你正处于插入模式,首先需要按下 `Esc` 返回命令模式:
|
||||
|
||||
1. `dd` 删除光标所在的整行内容,可以在 `dd` 前增加数字,比如 `2dd` 可以删除从光标所在行开始的两行
|
||||
2. `d$` 删除从光标所在位置直到行尾
|
||||
3. `d^` 删除从光标所在位置直到行首
|
||||
4. `dw` 删除从光标所在位置直到下一个词开始的所有内容
|
||||
|
||||
### 复制与粘贴
|
||||
|
||||
1. `yy` 复制当前行,在 `yy` 前添加数字可以复制多行
|
||||
2. `p` 在光标之后粘贴复制行
|
||||
3. `P` 在光标之前粘贴复制行
|
||||
|
||||
上述就是可以在 VI/VIM 编辑器上使用的一些基本命令。在未来的教程中还会继续教授一些更高级的命令。如果有任何疑问和建议,请在下方评论区留言。
|
||||
|
||||
---------
|
||||
via: http://linuxtechlab.com/working-vi-editor-basics/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[ljgibbslf](https://github.com/ljgibbslf)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 LCTT 原创编译,Linux中国 荣誉推出
|
||||
|
||||
[a]: http://linuxtechlab.com/author/shsuain/
|
@ -1,10 +1,11 @@
|
||||
我在 Twitch 平台直播编程的经验
|
||||
============================================================
|
||||
|
||||
去年 7 月我进行了第一次直播。不像大多数人那样在 Twitch 上进行游戏直播,我想直播的内容是我利用个人时间进行的开源工作。我对 NodeJS 硬件库有一定的研究(其中大部分是靠我自学的)。考虑到我已经在 Twitch 上有了一个直播间,为什么不再建一个更小更专业的直播间,比如 <ruby>由 JavaScript 驱动的硬件<rt>JavaScript powered hardware</rt></ruby> ;) 我注册了 [我自己的频道][1] ,从那以后我就开始定期直播。
|
||||
|
||||
我当然不是第一个这么做的人。[Handmade Hero][2] 是我最早看到的几个在线直播编程的程序员之一。很快这种直播方式被 Vlambeer 发扬光大,他在 Twitch 的 [Nuclear Throne live][3] 直播间进行直播。我对 Vlambeer 尤其着迷。
|
||||
|
||||
我的朋友 [Nolan Lawson][4] 让我 _真正开始做_ 这件事,而不只是单纯地 _想要做_ 。我看了他 [在周末直播开源工作][5] ,做得棒极了。他解释了他当时做的每一件事。是的,每一件事,包括回复 GitHub 上的 <ruby>问题<rt>issues</rt></ruby> ,鉴别 bug ,在 <ruby>分支<rt>branches</rt></ruby> 中调试程序,你知道的。这令我着迷,因为 Nolan 使他的开源库得到了广泛的使用。他的开源生活和我的完全不一样。
|
||||
|
||||
你甚至可以看到我在他视频下的评论:
|
||||
|
||||
@ -14,27 +15,27 @@
|
||||
|
||||
那个星期六我极少的几个听众给了我很大的鼓舞,因此我坚持了下去。现在我有了超过一千个听众,他们中的一些人形成了一个可爱的小团体,他们会定期观看我的直播,我称呼他们为 “noopkat 家庭” 。
|
||||
|
||||
我们很开心。我想称呼这个即时编程部分为“多玩家在线组队编程”。我真的被他们每个人的热情和才能触动了。一次,一个团体成员指出我的 Arduino 开发板不能随同我的软件工作,因为板子上的芯片丢了。这真是最有趣的时刻之一。
|
||||
|
||||
我经常暂停直播,检查我的收件箱,看看有没有人对我提及过但没有时间完成的工作发起 <ruby>拉取请求<rt>pull request</rt></ruby> 。感谢我 Twitch 社区对我的帮助和鼓励。
|
||||
|
||||
我很想聊聊 Twitch 直播给我带来的好处,但它的内容太多了,我应该会在我下一篇博客里介绍。我在这里想要分享的,是我学习的关于如何自己实现直播编程的课程。最近几个开发者问我怎么开始自己的直播,因此我在这里想大家展示我给他们的建议!
|
||||
|
||||
首先,我在这里贴出一个给过我很大帮助的教程 [“Streaming and Finding Success on Twitch”][7] 。它专注于 Twitch 与游戏直播,但也有很多和我们要做的东西相关的部分。我建议首先阅读这个教程,然后再考虑一些建立直播频道的细节(比如如何选择设备和软件)。
|
||||
|
||||
下面我列出我自己的配置。这些配置是从我多次的错误经验中总结出来的,其中要感谢我的直播同行的智慧与建议。(对,你们知道就是你们!)
|
||||
|
||||
### 软件
|
||||
|
||||
有很多免费的直播软件。我用的是 [Open Broadcaster Software (OBS)][8] 。它适用于大多数的平台。我觉得它十分直观且易于入门,但掌握其他的进阶功能则需要一段时间的学习。学好它你会获得很多好处!这是今天我直播时 OBS 的桌面截图:
|
||||
|
||||

|
||||
|
||||
你直播时需要在不同的“<ruby>场景<rt>scenes</rt></ruby>”中进行切换。一个“场景”是多个“<ruby>素材<rt>sources</rt></ruby>”通过堆叠和组合产生的集合。一个“素材”可以是照相机、麦克风、你的桌面、网页、动态文本、图片等等。OBS 是一个很强大的软件。
|
||||
|
||||
最上方的桌面场景是我编程的环境,我直播的时候主要停留在这里。我使用 iTerm 和 vim ,同时打开一个可以切换的浏览器窗口来查阅文献或在 GitHub 上分类检索资料。
|
||||
|
||||
底部的黑色长方形是我的网络摄像头,人们可以通过这种更个人化的连接方式来观看我工作。
|
||||
|
||||
我的场景中有一些“标签”,很多都与状态或者顶栏信息有关。顶栏只是添加了个性化信息,它在直播时是一个很好的连续性素材。这是我在 [GIMP][9] 里制作的图片,在你的场景里它会作为一个素材来加载。一些标签是从文本文件里添加的动态内容(例如最新粉丝)。另一个标签是一个 [custom one I made][10] ,它可以展示我直播的房间的动态温度与湿度。
|
||||
|
||||
@ -62,7 +63,7 @@
|
||||
|
||||
### 硬件
|
||||
|
||||
我从使用便宜的器材开始,当我意识到我会长期坚持直播之后,才将它们逐渐换成更好的。开始的时候尽量使用你现有的器材,即使是只用电脑内置的摄像头与麦克风。
|
||||
|
||||
现在我使用 Logitech Pro C920 网络摄像头,和一个固定有支架的 Blue Yeti 麦克风。花费是值得的。我直播的质量完全不同了。
|
||||
|
||||
@ -116,7 +117,7 @@
|
||||
|
||||
当你即将开始的时候,你会感觉很奇怪,不适应。你会在人们看着你写代码的时候感到紧张。这很正常!尽管我之前有过公共演说的经历,我一开始的时候还是感到陌生而不适应。我感觉我无处可藏,这令我害怕。我想:“大家可能都觉得我的代码很糟糕,我是一个糟糕的开发者。”这是一个困扰了我 _整个职业生涯_ 的想法,对我来说不新鲜了。我知道,正是带着这些想法,我才会在发布到 GitHub 之前仔细地把代码再检查一遍,以维护我作为开发者的声誉。
|
||||
|
||||
我从 Twitch 直播中发现了很多关于我代码风格的东西。我知道我的风格绝对是“先让它跑起来,然后再考虑可读性,然后再考虑运行速度”。我不再在前一天晚上提前排练好直播的内容(一开始的三、四次直播我都是这么做的),所以我在 Twitch 上写的代码是相当粗糙的,我还得保证它们运行起来没问题。当我不看别人的聊天和讨论的时候,我可以写出我最好的代码,这样是没问题的。但我总会忘记我使用过无数遍的方法的名字,而且每次直播的时候都会犯“愚蠢的”错误。一般来说,这不是一个让你能达到你最好状态的生产环境。
|
||||
|
||||
我的 Twitch 社区从来不会因为这个苛求我,反而是他们帮了我很多。他们理解我正同时做着几件事,而且真的给了很多务实的意见和建议。有时是他们帮我找到了解决方法,有时是我要向他们解释为什么他们的建议不适合解决这个问题。这真的很像一般意义的组队编程!
|
||||
|
||||
@ -128,7 +129,7 @@
|
||||
|
||||
如果你周日想要加入我的直播,你可以 [订阅我的 Twitch 频道][13] :)
|
||||
|
||||
最后我想说一下,我自己十分感谢 [Mattias Johansson][14] 在我早期开始直播的时候给我的建议和鼓励。他的 [FunFunFunction YouTube channel][15] 也是一个令人激动的定期直播频道。
|
||||
|
||||
另:许多人问过我的键盘和其他工作设备是什么样的, [这是我使用的器材的完整列表][16] 。感谢关注!
|
||||
|
||||
@ -136,9 +137,9 @@
|
||||
|
||||
via: https://medium.freecodecamp.org/lessons-from-my-first-year-of-live-coding-on-twitch-41a32e2f41c1
|
||||
|
||||
作者:[Suz Hinton][a]
|
||||
译者:[lonaparte](https://github.com/lonaparte)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,312 @@
|
||||
使用 Ansible 让你的系统管理自动化
|
||||
======
|
||||
|
||||
> 精进你的系统管理能力和 Linux 技能,学习如何设置工具来简化多台机器的管理。
|
||||
|
||||

|
||||
|
||||
你是否想精进你的系统管理能力和 Linux 技能?也许你的本地局域网上跑了一些东西,而你又想让生活更轻松一点,那该怎么办呢?在本文中,我会向你演示如何设置工具来简化管理多台机器。
|
||||
|
||||
远程管理工具有很多,SaltStack、Puppet、Chef,以及 Ansible 都是很流行的选择。在本文中,我将重点放在 Ansible 上并会解释它是如何帮到你的,不管你是有 5 台还是 1000 台虚拟机。
|
||||
|
||||
让我们从多机(不管这些机器是虚拟的还是物理的)的基本管理开始。我假设你知道要做什么,有基础的 Linux 管理技能(至少要有能找出执行每个任务具体步骤的能力)。我会向你演示如何使用这一工具,而是否使用它由你自己决定。
|
||||
|
||||
### 什么是 Ansible?
|
||||
|
||||
Ansible 的网站上将之解释为 “一个超级简单的 IT 自动化引擎,可以自动进行云供给、配置管理、应用部署、服务内部编排,以及其他很多 IT 需求。” 通过在一个集中的位置定义好服务器集合,Ansible 可以在多个服务器上执行相同的任务。
|
||||
|
||||
如果你对 Bash 的 `for` 循环很熟悉,你会发现 Ansible 操作跟这很类似。区别在于 Ansible 是<ruby>幂等的<rt>idempotent</rt></ruby>。通俗来说就是 Ansible 一般只有在确实会发生改变时才执行所请求的动作。比如,假设你执行一个 Bash 的 `for` 循环来为多个机器创建用户,像这样子:
|
||||
|
||||
```
|
||||
for server in serverA serverB serverC; do ssh ${server} "useradd myuser"; done
|
||||
```
|
||||
|
||||
这会在 serverA、serverB 以及 serverC 上创建 myuser 用户;然而不管这个用户是否存在,每次运行这个 `for` 循环时都会执行 `useradd` 命令。一个幂等的系统会首先检查用户是否存在,只有在不存在的情况下才会去创建它。当然,这个例子很简单,但是幂等工具的好处将会随着时间的推移变得越发明显。
|
||||
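作为对比,下面是一个示意性的 Ansible 临时命令(假设后文介绍的库存清单和 SSH 访问都已经配置好),它使用 `user` 模块以幂等的方式完成同样的事情:用户不存在时才创建,已存在时不做任何改动:

```
ansible all -m user -a "name=myuser state=present"
```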
|
||||
#### Ansible 是如何工作的?
|
||||
|
||||
Ansible 会将 Ansible playbooks 转换成通过 SSH 运行的命令,这在管理类 UNIX 环境时有很多优势:
|
||||
|
||||
1. 绝大多数类 UNIX 机器默认都开了 SSH。
|
||||
2. 依赖 SSH 意味着远程主机不需要有代理。
|
||||
3. 大多数情况下都无需安装额外的软件,Ansible 需要 2.6 或更新版本的 Python。而绝大多数 Linux 发行版默认都安装了这一版本(或者更新版本)的 Python。
|
||||
4. Ansible 无需主节点。他可以在任何安装有 Ansible 并能通过 SSH 访问的主机上运行。
|
||||
5. 虽然可以在 cron 中运行 Ansible,但默认情况下,Ansible 只会在你明确要求的情况下运行。
|
||||
|
||||
#### 配置 SSH 密钥认证
|
||||
|
||||
使用 Ansible 的一种常用方法是配置无需密码的 SSH 密钥登录以方便管理。(可以使用 Ansible Vault 来为密码等敏感信息提供保护,但这不在本文的讨论范围之内)。现在只需要使用下面命令来生成一个 SSH 密钥,如示例 1 所示。
|
||||
|
||||
```
|
||||
[09:44 user ~]$ ssh-keygen
|
||||
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Created directory '/home/user/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TpMyzf4qGqXmx3aqZijVv7vO9zGnVXsh6dPbXAZ+LUQ user@user-fedora
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
| E               |
| o . ..          |
| . + S o+.       |
| . .o * . .+ooo  |
| . .+o o o oo+.* |
|..ooo* o.* .*+   |
| . o+*BO.o+ .o   |
+----[SHA256]-----+
|
||||
```
|
||||
|
||||
*示例 1 :生成一个 SSH 密钥*
|
||||
|
||||
|
||||
在示例 1 中,直接按下回车键来接受默认值。任何非特权用户都能生成 SSH 密钥,也能安装到远程系统中任何用户的 SSH 的 `authorized_keys` 文件中。生成密钥后,还需要将之拷贝到远程主机上去,运行下面命令:
|
||||
|
||||
```
|
||||
ssh-copy-id root@servera
|
||||
```
|
||||
|
||||
注意:运行 Ansible 本身无需 root 权限;然而如果你使用非 root 用户,你_需要_为要执行的任务配置合适的 sudo 权限。
|
||||
|
||||
输入 servera 的 root 密码,这条命令会将你的 SSH 密钥安装到远程主机上去。安装好 SSH 密钥后,再通过 SSH 登录远程主机就不再需要输入 root 密码了。
|
||||
|
||||
### 安装 Ansible
|
||||
|
||||
只需要在示例 1 中生成 SSH 密钥的那台主机上安装 Ansible。若你使用的是 Fedora,输入下面命令:
|
||||
|
||||
```
|
||||
sudo dnf install ansible -y
|
||||
```
|
||||
|
||||
若运行的是 CentOS,你需要为 EPEL 仓库配置额外的包:
|
||||
|
||||
```
|
||||
sudo yum install epel-release -y
|
||||
```
|
||||
|
||||
然后再使用 yum 来安装 Ansible:
|
||||
|
||||
```
|
||||
sudo yum install ansible -y
|
||||
```
|
||||
|
||||
对于基于 Ubuntu 的系统,可以从 PPA 上安装 Ansible:
|
||||
|
||||
```
|
||||
sudo apt-get install software-properties-common -y
|
||||
sudo apt-add-repository ppa:ansible/ansible
|
||||
sudo apt-get update
|
||||
sudo apt-get install ansible -y
|
||||
```
|
||||
|
||||
若你使用的是 macOS,那么推荐通过 Python PIP 来安装:
|
||||
|
||||
```
|
||||
sudo pip install ansible
|
||||
```
|
||||
|
||||
对于其他发行版,请参见 [Ansible 安装文档 ][2]。
|
||||
|
||||
### Ansible Inventory
|
||||
|
||||
Ansible 使用一个 INI 风格的文件来追踪要管理的服务器,这种文件被称之为<ruby>库存清单<rt>Inventory</rt></ruby>。默认情况下该文件位于 `/etc/ansible/hosts`。本文中,我使用示例 2 中所示的 Ansible 库存清单来对所需的主机进行操作(为了简洁起见已经进行了裁剪):
|
||||
|
||||
```
|
||||
[arch]
|
||||
nextcloud
|
||||
prometheus
|
||||
desktop1
|
||||
desktop2
|
||||
vm-host15
|
||||
|
||||
[fedora]
|
||||
netflix
|
||||
|
||||
[centos]
|
||||
conan
|
||||
confluence
|
||||
7-repo
|
||||
vm-server1
|
||||
gitlab
|
||||
|
||||
[ubuntu]
|
||||
trusty-mirror
|
||||
nwn
|
||||
kids-tv
|
||||
media-centre
|
||||
nas
|
||||
|
||||
[satellite]
|
||||
satellite
|
||||
|
||||
[ocp]
|
||||
lb00
|
||||
ocp_dns
|
||||
master01
|
||||
app01
|
||||
infra01
|
||||
```
|
||||
|
||||
*示例 2 : Ansible 主机文件*
|
||||
|
||||
每个分组由中括号中的组名标识(像这样 `[group1]`),组名是应用于一组服务器的任意名称。一台服务器可以存在于多个组中,没有任何问题。在这个案例中,我有根据操作系统进行的分组(`arch`、`ubuntu`、`centos`、`fedora`),也有根据服务器功能进行的分组(`ocp`、`satellite`)。Ansible 主机文件可以处理比这复杂得多的情况。详细内容,请参阅 [库存清单文档][3]。
|
||||
|
||||
### 运行命令
|
||||
|
||||
将你的 SSH 密钥拷贝到库存清单中所有服务器上后,你就可以开始使用 Ansible 了。Ansible 的一项基本功能就是运行特定命令。语法为:
|
||||
|
||||
```
|
||||
ansible <host-pattern> -a "some command"
|
||||
```
|
||||
例如,假设你想升级所有的 CentOS 服务器,可以运行:
|
||||
|
||||
```
|
||||
ansible centos -a 'yum update -y'
|
||||
```
|
||||
|
||||
_注意:不是必须要根据服务器操作系统来进行分组的。我下面会提到,[Ansible Facts][4] 可以用来收集这一信息;然而,若使用 Facts 的话,则运行特定命令会变得很复杂,因此,如果你在管理异构环境的话,那么为了方便起见,我推荐创建一些根据操作系统来划分的组。_
|
||||
|
||||
这会遍历 `centos` 组中的所有服务器并安装所有的更新。一个更加有用的命令应该是 Ansible 的 `ping` 模块了,可以用来验证服务器是否准备好接受命令了:
|
||||
|
||||
```
|
||||
ansible all -m ping
|
||||
```
|
||||
|
||||
这会让 Ansible 尝试通过 SSH 登录库存清单中的所有服务器。在示例 3 中可以看到 `ping` 命令的部分输出结果。
|
||||
|
||||
```
|
||||
nwn | SUCCESS => {
|
||||
"changed":false,
|
||||
"ping":"pong"
|
||||
}
|
||||
media-centre | SUCCESS => {
|
||||
"changed":false,
|
||||
"ping":"pong"
|
||||
}
|
||||
nas | SUCCESS => {
|
||||
"changed":false,
|
||||
"ping":"pong"
|
||||
}
|
||||
kids-tv | SUCCESS => {
|
||||
"changed":false,
|
||||
"ping":"pong"
|
||||
}
|
||||
...
|
||||
```
|
||||
|
||||
*示例 3 :Ansible ping 命令输出*
|
||||
|
||||
运行指定命令的能力有助于完成快速任务(LCTT 译注:应该指的那种一次性任务),但是如果我想在以后也能以同样的方式运行同样的任务那该怎么办呢?Ansible [playbooks][5] 就是用来做这个的。
|
||||
|
||||
### 复杂任务使用 Ansible playbooks
|
||||
|
||||
Ansible <ruby>剧本<rt>playbook</rt></ruby> 就是包含 Ansible 指令的 YAML 格式的文件。我这里不打算讲解类似 Roles 和 Templates 这些比较高深的内容。有兴趣的话,请阅读 [Ansible 文档][6]。
|
||||
|
||||
在前一章节,我推荐你使用 `ssh-copy-id` 命令来传递你的 SSH 密钥;然而,本文关注的是如何以一种一致的、可重复的方式来完成任务。示例 4 演示了一种幂等的实现方式,即使 SSH 密钥已经存在于目标主机上,也能保证正确性。
|
||||
|
||||
```
|
||||
---
- hosts: all
  gather_facts: false
  vars:
    ssh_key: '/root/playbooks/files/laptop_ssh_key'
  tasks:
    - name: copy ssh key
      authorized_key:
        key: "{{ lookup('file', ssh_key) }}"
        user: root
|
||||
```
|
||||
|
||||
*示例 4:Ansible 剧本 “push_ssh_keys.yaml”*
|
||||
|
||||
`- hosts:` 行标识了这个剧本应该在哪个主机组上执行。在这个例子中,它会检查库存清单里的所有主机。
|
||||
|
||||
`gather_facts:` 行指明 Ansible 是否去收集每个主机的详细信息。稍后我会更详细地介绍它。现在为了节省时间,我们设置 `gather_facts` 为 `false`。
|
||||
|
||||
`vars:` 部分,顾名思义,就是用来定义剧本中所用变量的。在示例 4 的这个简短剧本中其实不是必要的,但是按惯例我们还是设置了一个变量。
|
||||
|
||||
最后由 `tasks:` 标注的这个部分,是存放主体指令的地方。每个任务都有一个 `- name:`,Ansible 在运行剧本时会显示这个名字。
|
||||
|
||||
`authorized_key:` 是剧本所使用 Ansible 模块的名字。可以通过命令 `ansible-doc -a` 来查询 Ansible 模块的相关信息; 不过通过网络浏览器查看 [文档 ][7] 可能更方便一些。[authorized_key 模块][8] 有很多很好的例子可以参考。要运行示例 4 中的剧本,只要运行 `ansible-playbook` 命令就行了:
|
||||
|
||||
```
|
||||
ansible-playbook push_ssh_keys.yaml
|
||||
```
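在真正改动远程主机之前,也可以先做一次预演。下面是一个示意性的用法,`--check` 选项会让 Ansible 只报告将要发生的改动,而不会实际执行它们:

```
ansible-playbook push_ssh_keys.yaml --check
```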
|
||||
|
||||
如果是第一次添加 SSH 密钥,SSH 会提示你输入 root 用户的密码。
|
||||
|
||||
现在 SSH 密钥已经传输到服务器中去了,可以来做点有趣的事了。
|
||||
|
||||
### 使用 Ansible 收集信息
|
||||
|
||||
Ansible 能够收集目标系统的各种信息。如果你的主机数量很多,那它会特别的耗时。按我的经验,每台主机大概要花个 1 到 2 秒钟,甚至更长时间;然而有时收集信息是有好处的。考虑下面这个剧本,它会禁止 root 用户通过密码远程登录系统:
|
||||
|
||||
```
|
||||
---
- hosts: all
  gather_facts: true
  tasks:
    - name: Enabling ssh-key only root access
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin without-password'
      notify:
        - restart_sshd
        - restart_ssh

  handlers:
    - name: restart_sshd
      service:
        name: sshd
        state: restarted
        enabled: true
      when: ansible_distribution == 'RedHat'
    - name: restart_ssh
      service:
        name: ssh
        state: restarted
        enabled: true
      when: ansible_distribution == 'Debian'
|
||||
```
|
||||
|
||||
*示例 5:锁定 root 的 SSH 访问*
|
||||
|
||||
在示例 5 中 `sshd_config` 文件的修改是有[条件][9] 的,只有在找到匹配的发行版的情况下才会执行。在这个案例中,基于 Red Hat 的发行版与基于 Debian 的发行版对 SSH 服务的命名是不一样的,这也是使用条件语句的目的所在。虽然也有其他的方法可以达到相同的效果,但这个例子很好演示了 Ansible 信息的作用。若你想查看 Ansible 默认收集的所有信息,可以在本地运行 `setup` 模块:
|
||||
|
||||
```
|
||||
ansible localhost -m setup | less
|
||||
```
|
||||
|
||||
Ansible 收集的所有信息都能用来做判断,就跟示例 4 中 `vars:` 部分所演示的一样。所不同的是,Ansible 信息被看成是**内置**变量,无需由系统管理员定义。
|
||||
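例如,下面这个示意性的临时命令(假设库存清单已经配置好)使用 `setup` 模块的 `filter` 参数,只查看每台主机与发行版相关的信息:

```
ansible all -m setup -a 'filter=ansible_distribution*'
```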
|
||||
### 更进一步
|
||||
|
||||
现在可以开始探索 Ansible 并创建自己的剧本了。Ansible 是一个富有深度、复杂性和灵活性的工具,只靠一篇文章不可能把它讲透。希望本文能够激发你的兴趣,鼓励你去探索 Ansible 的功能。在下一篇文章中,我会再聊聊 `copy`、`systemd`、`service`、`apt`、`yum`、`virt`,以及 `user` 模块。我们可以在剧本中组合使用这些模块,还可以创建一个简单的 Git 服务器来存储这些所有剧本。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/7/automate-sysadmin-ansible
|
||||
|
||||
作者:[Steve Ovens][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/stratusss
|
||||
[1]:https://opensource.com/tags/ansible
|
||||
[2]:http://docs.ansible.com/ansible/intro_installation.html
|
||||
[3]:http://docs.ansible.com/ansible/intro_inventory.html
|
||||
[4]:http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts
|
||||
[5]:http://docs.ansible.com/ansible/playbooks.html
|
||||
[6]:http://docs.ansible.com/ansible/playbooks_roles.html
|
||||
[7]:http://docs.ansible.com/ansible/modules_by_category.html
|
||||
[8]:http://docs.ansible.com/ansible/authorized_key_module.html
|
||||
[9]:http://docs.ansible.com/ansible/lineinfile_module.html
|
published/20171005 Reasons Kubernetes is cool.md
@ -0,0 +1,134 @@
|
||||
为什么 Kubernetes 很酷
|
||||
============================================================
|
||||
|
||||
在我刚开始学习 Kubernetes(大约是一年半以前吧?)时,我真的不明白为什么应该去关注它。
|
||||
|
||||
在我使用 Kubernetes 全职工作了三个多月后,我才逐渐明白了为什么我应该使用它。(我距离成为一个 Kubernetes 专家还很远!)希望这篇文章对你理解 Kubernetes 能做什么会有帮助!
|
||||
|
||||
我将尝试去解释我对 Kubernetes 感兴趣的一些原因,而不去使用“<ruby>云原生<rt>cloud native</rt></ruby>”、“<ruby>编排系统<rt>orchestration</rt></ruby>”、“<ruby>容器<rt>container</rt></ruby>”,或者任何 Kubernetes 专用的术语 :)。我解释的这些观点主要来自一位 Kubernetes 运维/基础设施工程师的视角,因为,我现在的工作就是去配置 Kubernetes 和让它工作得更好。
|
||||
|
||||
我不会去尝试解决一些如 “你应该在你的生产系统中使用 Kubernetes 吗?”这样的问题。那是非常复杂的问题。(不仅是因为“生产系统”根据你的用途而总是有不同的要求)
|
||||
|
||||
### Kubernetes 可以让你无需设置一台新的服务器即可在生产系统中运行代码
|
||||
|
||||
我首次被说教使用 Kubernetes 是与我的伙伴 Kamal 的下面的谈话:
|
||||
|
||||
大致是这样的:
|
||||
|
||||
* Kamal: 使用 Kubernetes 你可以通过一条命令就能设置一台新的服务器。
|
||||
* Julia: 我觉得不太可能吧。
|
||||
* Kamal: 像这样,你写一个配置文件,然后应用它,这时候,你就在生产系统中运行了一个 HTTP 服务。
|
||||
* Julia: 但是,现在我需要去创建一个新的 AWS 实例,明确地写一个 Puppet 清单,设置服务发现,配置负载均衡,配置我们的部署软件,并且确保 DNS 正常工作,如果没有什么问题的话,至少在 4 小时后才能投入使用。
|
||||
* Kamal: 是的,使用 Kubernetes 你不需要做那么多事情,你可以在 5 分钟内设置一台新的 HTTP 服务,并且它将自动运行。只要你的集群中有空闲的资源它就能正常工作!
|
||||
* Julia: 这儿一定是一个“坑”。
|
||||
|
||||
这里有一个“坑”:设置一个生产级的 Kubernetes 集群(以我的经验)确实并不容易。(查看 [Kubernetes 艰难之旅][3],了解开始使用时有哪些复杂的东西。)但是,我们现在并不深入讨论它。
|
||||
|
||||
因此,Kubernetes 第一个很酷的事情是,它可能使那些想在生产系统中部署新开发的软件的方式变得更容易。那是很酷的事,而且它真的是这样,因此,一旦你使用一个运作中的 Kubernetes 集群,你真的可以仅使用一个配置文件就在生产系统中设置一台 HTTP 服务(在 5 分钟内运行这个应用程序,设置一个负载均衡,给它一个 DNS 名字,等等)。看起来真的很有趣。
|
||||
|
||||
### 对于运行在生产系统中的代码,Kubernetes 可以提供更好的可见性和可管理性
|
||||
|
||||
在我看来,在理解 etcd 之前,你可能不会理解 Kubernetes 的。因此,让我们先讨论 etcd!
|
||||
|
||||
想像一下,如果现在我这样问你,“告诉我你运行在生产系统中的每个应用程序,它运行在哪台主机上?它是否状态很好?是否为它分配了一个 DNS 名字?”我可能需要到很多不同的地方去查询,并且需要花很长的时间才能回答这些问题。而在 Kubernetes 中,我可以很确定地说,仅查询一个 API 就可以搞定它们。
|
||||
|
||||
在 Kubernetes 中,你的集群的所有状态——运行中的应用程序(“pod”)、节点、DNS 名字、cron 任务等等——都保存在一个单一的数据库中(etcd)。每个 Kubernetes 组件是无状态的,并且基本是通过下列方式工作的:
|
||||
|
||||
* 从 etcd 中读取状态(比如,“分配给节点 1 的 pod 列表”)
|
||||
* 产生变化(比如,“在节点 1 上运行 pod A”)
|
||||
* 更新 etcd 中的状态(比如,“设置 pod A 的状态为 ‘running’”)
|
||||
|
||||
这意味着,如果你想去回答诸如“在那个可用区中有多少台运行着 nginx 的 pod?”这样的问题时,你可以通过查询一个统一的 API(Kubernetes API)去回答它。并且,其它每个 Kubernetes 组件也都是通过这同一个 API 来进行访问的。
|
||||
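举个例子,下面是一个示意性的查询(假设本机已经安装了 `kubectl` 并配置好了对集群的访问,并且 nginx 的 pod 都带有假设的 `app=nginx` 标签):

```
$ kubectl get pods --all-namespaces -l app=nginx -o wide
```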
|
||||
这也意味着,你可以很容易地去管理每个运行在 Kubernetes 中的任何东西。比如说,如果你想要:
|
||||
|
||||
* 部署实现一个复杂的定制的部署策略(部署一个东西,等待 2 分钟,部署 5 个以上,等待 3.7 分钟,等等)
|
||||
* 每当向 GitHub 推送一个分支时,自动[启动一个新的 web 服务器][1]
|
||||
* 监视所有你的运行的应用程序,确保它们有一个合理的内存使用限制。
|
||||
|
||||
要做到这些,你只需要写一个与 Kubernetes API 通讯的程序(即“控制器”)就可以了。
|
||||
|
||||
另一个关于 Kubernetes API 的令人激动的事情是,你不会局限于 Kubernetes 所提供的现有功能!如果对于你要部署/创建/监视的软件有你自己的方案,那么,你可以使用 Kubernetes API 去写一些代码去达到你的目的!它可以让你做到你想做的任何事情。
|
||||
|
||||
### 即便每个 Kubernetes 组件都“挂了”,你的代码将仍然保持运行
|
||||
|
||||
关于 Kubernetes 我(在各种博客文章中 :))承诺的一件事情是,“如果 Kubernetes API 服务和其它组件‘挂了’也没事,你的代码将一直保持运行状态”。我认为理论上这听起来很酷,但是我不确定它是否真是这样的。
|
||||
|
||||
到目前为止,这似乎是真的!
|
||||
|
||||
我已经断开了一些正在运行的 etcd,发生了这些情况:
|
||||
|
||||
1. 所有的代码继续保持运行状态
|
||||
2. 不能做 _新的_ 事情(你不能部署新的代码或者生成变更,cron 作业将停止工作)
|
||||
3. 当它恢复时,集群将赶上这期间它错过的内容
|
||||
|
||||
这样做意味着如果 etcd 宕掉,并且你的应用程序的其中之一崩溃或者发生其它事情,在 etcd 恢复之前,它不能够恢复。
|
||||
|
||||
### Kubernetes 的设计对 bug 很有弹性
|
||||
|
||||
与任何软件一样,Kubernetes 也会有 bug。例如,到目前为止,我们的集群控制管理器有内存泄漏,并且,调度器经常崩溃。bug 当然不好,但是,我发现 Kubernetes 的设计可以帮助减轻它的许多核心组件中的错误的影响。
|
||||
|
||||
如果你重启动任何组件,将会发生:
|
||||
|
||||
* 从 etcd 中读取所有的与它相关的状态
|
||||
* 基于那些状态(调度 pod、回收完成的 pod、调度 cron 作业、按需部署等等),它会去做那些它认为必须要做的事情
|
||||
|
||||
因为,所有的组件并不会在内存中保持状态,你在任何时候都可以重启它们,这可以帮助你减轻各种 bug 的影响。
|
||||
|
||||
例如,如果在你的控制管理器中有内存泄露。因为,控制管理器是无状态的,你可以每小时定期去重启它,或者,在感觉到可能导致任何不一致的问题发生时重启它。又或者,在调度器中遇到了一个 bug,它有时忘记了某个 pod,从来不去调度它们。你可以每隔 10 分钟来重启调度器来缓减这种情况。(我们并不会这么做,而是去修复这个 bug,但是,你_可以这样做_ :))
|
||||
|
||||
因此,我觉得即使在它的核心组件中有 bug,我仍然可以信任 Kubernetes 的设计可以让我确保集群状态的一致性。并且,总在来说,随着时间的推移软件质量会提高。唯一你必须去操作的有状态的东西就是 etcd。
|
||||
|
||||
不用过多地讨论“状态”这个东西 —— 而我认为在 Kubernetes 中很酷的一件事情是,唯一需要去做备份/恢复计划的东西是 etcd (除非为你的 pod 使用了持久化存储的卷)。我认为这样可以使 Kubernetes 运维比你想的更容易一些。
|
||||
|
||||
### 在 Kubernetes 之上实现新的分布式系统是非常容易的
|
||||
|
||||
假设你想去实现一个分布式 cron 作业调度系统!从零开始做工作量非常大。但是,在 Kubernetes 里面实现一个分布式 cron 作业调度系统是非常容易的!(仍然没那么简单,毕竟它是一个分布式系统)
|
||||
|
||||
我第一次读到 Kubernetes 的 cron 作业控制器的代码时,我对它是如此的简单感到由衷高兴。去读读看,其主要的逻辑大约是 400 行的 Go 代码。去读它吧! => [cronjob_controller.go][4] <=
|
||||
|
||||
cron 作业控制器基本上做的是:
|
||||
|
||||
* 每 10 秒钟:
|
||||
* 列出所有已存在的 cron 作业
|
||||
* 检查是否有需要现在去运行的任务
|
||||
* 如果有,创建一个新的作业对象去调度,并通过其它的 Kubernetes 控制器实际运行它
|
||||
* 清理已完成的作业
|
||||
* 重复以上工作
|
||||
|
||||
Kubernetes 模型是很受限制的(它有定义在 etcd 中的资源模式,控制器读取这个资源并更新 etcd),我认为这种相关的固有的/受限制的模型,可以使它更容易地在 Kubernetes 框架中开发你自己的分布式系统。
|
||||
|
||||
Kamal 给我说的是 “Kubernetes 是一个写你自己的分布式系统的很好的平台” ,而不是“ Kubernetes 是一个你可以使用的分布式系统”,并且,我觉得它真的很有意思。他做了一个 [为你推送到 GitHub 的每个分支运行一个 HTTP 服务的系统][5] 的原型。这花了他一个周末的时间,大约 800 行 Go 代码,我认为它真不可思议!
|
||||
|
||||
### Kubernetes 可以使你做一些非常神奇的事情(但并不容易)
|
||||
|
||||
我一开始就说 “kubernetes 可以让你做一些很神奇的事情,你可以用一个配置文件来做这么多的基础设施,它太神奇了”。这是真的!
|
||||
|
||||
为什么说 “Kubernetes 并不容易”呢?是因为 Kubernetes 有很多部分,学习怎么去成功地运营一个高可用的 Kubernetes 集群要做很多的工作。就像我发现它给我了许多抽象的东西,我需要去理解这些抽象的东西才能调试问题和正确地配置它们。我喜欢学习新东西,因此,它并不会使我发狂或者生气,但是我认为了解这一点很重要 :)
|
||||
|
||||
对于 “我不能仅依靠抽象概念” 的一个具体的例子是,我努力学习了许多 [Linux 上网络是如何工作的][6],才让我对设置 Kubernetes 网络稍有信心,这比我以前学过的关于网络的知识要多很多。这种方式很有意思但是非常费时间。在以后的某个时间,我或许写更多的关于设置 Kubernetes 网络的困难/有趣的事情。
|
||||
|
||||
或者,为了成功设置我的 Kubernetes CA,我写了一篇 [2000 字的博客文章][7],述及了我不得不学习 Kubernetes 不同方式的 CA 的各种细节。
|
||||
|
||||
我觉得,像 GKE (Google 的 Kubernetes 产品) 这样的一些监管的 Kubernetes 的系统可能更简单,因为,他们为你做了许多的决定,但是,我没有尝试过它们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2017/10/05/reasons-kubernetes-is-cool/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jvns.ca/about
|
||||
[1]:https://github.com/kamalmarhubi/kubereview
|
||||
[2]:https://jvns.ca/categories/kubernetes
|
||||
[3]:https://github.com/kelseyhightower/kubernetes-the-hard-way
|
||||
[4]:https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/pkg/controller/cronjob/cronjob_controller.go
|
||||
[5]:https://github.com/kamalmarhubi/kubereview
|
||||
[6]:https://jvns.ca/blog/2016/12/22/container-networking/
|
||||
[7]:https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/
|
||||
|
||||
|
@ -1,13 +1,15 @@
|
||||
使用 TLS 加密保护 VNC 服务器的简单指南
|
||||
======
|
||||
|
||||
在本教程中,我们将学习安装 VNC 服务器并使用 TLS 加密保护 VNC 会话。
|
||||
|
||||
此方法已经在 CentOS 6&7 上测试过了,但是也可以在其它的版本/操作系统上运行(RHEL、Scientific Linux 等)。
|
||||
|
||||
**(推荐阅读:[保护 SSH 会话终极指南][1])**
|
||||
|
||||
### 安装 VNC 服务器
|
||||
|
||||
在机器上安装 VNC 服务器之前,请确保我们有一个可用的 GUI(图形用户界面)。如果机器上还没有安装 GUI,我们可以通过执行以下命令来安装:
|
||||
|
||||
```
|
||||
yum groupinstall "GNOME Desktop"
|
||||
@ -38,7 +40,7 @@ yum groupinstall "GNOME Desktop"
|
||||
现在我们需要编辑 VNC 配置文件:
|
||||
|
||||
```
|
||||
# vim /etc/sysconfig/vncservers
|
||||
```
|
||||
|
||||
并添加下面这几行:
|
||||
@ -63,7 +65,7 @@ VNCSERVERARGS[1]= "-geometry 1024×768″
|
||||
|
||||
#### CentOS 7
|
||||
|
||||
在 CentOS 7 上,`/etc/sysconfig/vncservers` 已经改为 `/lib/systemd/system/vncserver@.service`。我们将使用这个配置文件作为参考,所以创建一个文件的副本,
|
||||
|
||||
```
|
||||
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
|
||||
@ -85,8 +87,8 @@ PIDFile=/home/vncuser/.vnc/%H%i.pid
|
||||
保存文件并退出。接下来重启服务并在启动时启用它:
|
||||
|
||||
```
|
||||
# systemctl restart vncserver@:1.service
|
||||
# systemctl enable vncserver@:1.service
|
||||
```
|
||||
|
||||
现在我们已经设置好了 VNC 服务器,并且可以使用 VNC 服务器的 IP 地址从客户机连接到它。但是,在此之前,我们将使用 TLS 加密保护我们的连接。
|
||||
@ -105,7 +107,9 @@ systemctl enable[[email protected]][2]:1.service
|
||||
|
||||
现在,我们可以使用客户机上的 VNC 浏览器访问服务器,使用以下命令以安全连接启动 vnc 浏览器:
|
||||
|
||||
```
|
||||
# vncviewer -SecurityTypes=VeNCrypt,TLSVnc 192.168.1.45:1
|
||||
```
|
||||
|
||||
这里,192.168.1.45 是 VNC 服务器的 IP 地址。
|
||||
|
||||
@ -115,14 +119,13 @@ systemctl enable[[email protected]][2]:1.service
|
||||
|
||||
这篇教程就完了,欢迎随时使用下面的评论栏提交你的建议或疑问。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/secure-vnc-server-tls-encryption/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -3,31 +3,33 @@
|
||||
|
||||

|
||||
|
||||
在本教程中,我们将讨论如何在 Arch Linux 中设置日语环境。在其他类 Unix 操作系统中,设置日文布局并不是什么大不了的事情。你可以从设置中轻松选择日文键盘布局。然而,在 Arch Linux 下有点困难,ArchWiki 中没有合适的文档。如果你正在使用 Arch Linux 和/或其衍生产品如 Antergos、Manajaro Linux,请遵循本指南以在 Arch Linux 及其衍生系统中使用日语。
|
||||
|
||||
### 在 Arch Linux 中设置日语环境
|
||||
|
||||
首先,为了正确查看日语字符,先安装必要的日语字体:
|
||||
|
||||
首先,为了正确查看日语 ASCII 格式,先安装必要的日语字体:
|
||||
```
|
||||
sudo pacman -S adobe-source-han-sans-jp-fonts otf-ipafont
|
||||
```
|
||||
```
|
||||
pacaur -S ttf-monapo
|
||||
```
|
||||
|
||||
如果你尚未安装 `pacaur`,请参阅[此链接][1]。
|
||||
|
||||
确保你在 `/etc/locale.gen` 中注释掉了(添加 `#` 注释)下面的行。
|
||||
|
||||
```
|
||||
#ja_JP.UTF-8
|
||||
```
|
||||
|
||||
然后,安装 iBus 和 ibus-anthy。对于那些想知道原因的,iBus 是类 Unix 系统的输入法(IM)框架,而 ibus-anthy 是 iBus 的日语输入法。
|
||||
|
||||
```
|
||||
sudo pacman -S ibus ibus-anthy
|
||||
```
|
||||
|
||||
在 `~/.xprofile` 中添加以下几行(如果不存在,创建一个):
|
||||
|
||||
```
|
||||
# Settings for Japanese input
|
||||
export GTK_IM_MODULE='ibus'
|
||||
@ -38,51 +40,49 @@ export XMODIFIERS=@im='ibus'
|
||||
ibus-daemon -drx
|
||||
```
|
||||
|
||||
|
||||
`~/.xprofile` 允许我们在 X 用户会话开始时且在窗口管理器启动之前执行命令。
|
||||
|
||||
保存并关闭文件。重启 Arch Linux 系统以使更改生效。
|
||||
|
||||
登录到系统后,右键单击任务栏中的 iBus 图标,然后选择 “Preferences”。如果不存在,请从终端运行以下命令来启动 iBus 并打开偏好设置窗口。
|
||||
|
||||
```
|
||||
ibus-setup
|
||||
```
|
||||
|
||||
选择 “Yes” 来启动 iBus。你会看到一个像下面的页面。点击 Ok 关闭它。
|
||||
|
||||
![][3]
|
||||
|
||||
现在,你将看到 iBus 偏好设置窗口。进入 “Input Method” 选项卡,然后单击 “Add” 按钮。
|
||||
|
||||
![][4]
|
||||
|
||||
在列表中选择 “Japanese”:
|
||||
|
||||
![][5]
|
||||
|
||||
然后,选择 “Anthy” 并点击添加:
|
||||
|
||||
![][6]
|
||||
|
||||
就是这样了。你现在将在输入法栏看到 “Japanese - Anthy”:
|
||||
|
||||
![][7]
|
||||
|
||||
根据你的需求在偏好设置中更改日语输入法的选项(点击 “Japanese-Anthy” -> “Preferences”)。
|
||||
|
||||
|
||||
|
||||
![][8]
|
||||
|
||||
你还可以在键盘绑定中编辑默认的快捷键。完成所有更改后,点击应用并确定。就是这样。从任务栏中的 iBus 图标中选择日语,或者按下 `SUPER + 空格` 键(LCTT 译注:SUPER 键通常为 `Command` 或 `Window` 键)来在日语和英语(或者系统中的其他默认语言)之间切换。你可以从 iBus 首选项窗口更改键盘快捷键。
|
||||
|
||||
现在你知道如何在 Arch Linux 及其衍生版中使用日语了。如果你发现我们的指南很有用,那么请您在社交、专业网络上分享,并支持我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
|
||||
|
||||
作者:[][a]
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[Locez](https://github.com/locez)
|
||||
|
||||
@ -91,9 +91,9 @@ via: https://www.ostechnix.com/setup-japanese-language-environment-arch-linux/
|
||||
[a]:https://www.ostechnix.com
|
||||
[1]:https://www.ostechnix.com/install-pacaur-arch-linux/
|
||||
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus.png ()
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences.png ()
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2017/11/Choose-Japanese.png ()
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2017/11/Japanese-Anthy.png ()
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences-1.png ()
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus-anthy.png ()
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus.png
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences.png
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2017/11/Choose-Japanese.png
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2017/11/Japanese-Anthy.png
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2017/11/iBus-preferences-1.png
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2017/11/ibus-anthy.png
|
@ -1,27 +1,26 @@
|
||||
一步一步学习如何在 MariaDB 中配置主从复制
|
||||
循序渐进学习如何在 MariaDB 中配置主从复制
|
||||
======
|
||||
在我们前面的教程中,我们已经学习了 [**如何安装和配置 MariaDB**][1],也学习了 [**管理 MariaDB 的一些基础命令**][2]。现在我们来学习,如何在 MariaDB 服务器上配置一个主从复制。
|
||||
|
||||
复制是用于为我们的数据库去创建多个副本,这些副本可以在其它数据库上用于运行查询,像一些非常繁重的查询可能会影响主数据库服务器的性能,或者我们可以使用它来做数据冗余,或者兼具以上两个目的。我们可以将这个过程自动化,即主服务器到从服务器的复制过程自动进行。执行备份而不影响在主服务器上的写操作。
|
||||
在我们前面的教程中,我们已经学习了 [如何安装和配置 MariaDB][1],也学习了 [管理 MariaDB 的一些基础命令][2]。现在我们来学习,如何在 MariaDB 服务器上配置一个主从复制。
|
||||
|
||||
复制用于为我们的数据库创建多个副本,这些副本可以用于在其它数据库上运行查询,比如一些可能会影响主数据库服务器性能的非常繁重的查询;也可以用它来做数据冗余,或者兼具以上两种目的。我们可以将这个过程自动化,即让主服务器到从服务器的复制过程自动进行,这样执行备份时也不会影响主服务器上的写操作。
|
||||
|
||||
因此,我们现在去配置我们的主-从复制,它需要两台安装了 MariaDB 的机器。它们的 IP 地址如下:
|
||||
|
||||
**主服务器 -** 192.168.1.120 **主机名** master.ltechlab.com
|
||||
- **主服务器 -** 192.168.1.120 **主机名 -** master.ltechlab.com
|
||||
- **从服务器 -** 192.168.1.130 **主机名 -** slave.ltechlab.com
|
||||
|
||||
**从服务器 -** 192.168.1.130 **主机名 -** slave.ltechlab.com
|
||||
MariaDB 安装到这些机器上之后,我们继续进行本教程。如果你需要安装和配置 MariaDB 的教程,请查看[**这个教程**][1]。
|
||||
|
||||
MariaDB 安装到这些机器上之后,我们继续进行本教程。如果你需要安装和配置 MariaDB 的教程,请查看[**这个教程**][1]。
|
||||
### 第 1 步 - 主服务器配置
|
||||
|
||||
|
||||
### **第 1 步 - 主服务器配置**
|
||||
|
||||
我们现在进入到 MariaDB 中的一个命名为 ' **important '** 的数据库,它将被复制到我们的从服务器。为开始这个过程,我们编辑名为 ' **/etc/my.cnf** ' 的文件,它是 MariaDB 的配置文件。
|
||||
我们现在进入到 MariaDB 中的一个命名为 `important` 的数据库,它将被复制到我们的从服务器。为开始这个过程,我们编辑名为 `/etc/my.cnf` 的文件,它是 MariaDB 的配置文件。
|
||||
|
||||
```
|
||||
$ vi /etc/my.cnf
|
||||
```
|
||||
|
||||
在这个文件中找到 [mysqld] 节,然后输入如下内容:
|
||||
在这个文件中找到 `[mysqld]` 节,然后输入如下内容:
|
||||
|
||||
```
|
||||
[mysqld]
|
||||
@ -43,7 +42,7 @@ $ systemctl restart mariadb
|
||||
$ mysql -u root -p
|
||||
```
|
||||
|
||||
在它上面创建一个命名为 'slaveuser' 的为主从复制使用的新用户,然后运行如下的命令为它分配所需要的权限:
|
||||
在它上面创建一个命名为 `slaveuser` 的为主从复制使用的新用户,然后运行如下的命令为它分配所需要的权限:
|
||||
|
||||
```
|
||||
STOP SLAVE;
|
||||
@ -53,19 +52,19 @@ FLUSH TABLES WITH READ LOCK;
|
||||
SHOW MASTER STATUS;
|
||||
```
|
||||
|
||||
**注意: ** 我们配置主从复制需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS ** 的值,它可以通过 'show master status' 来获得,因此,你一定要确保你记下了它们的值。
|
||||
**注意:** 我们配置主从复制需要 `MASTER_LOG_FILE` 和 `MASTER_LOG_POS` 的值,它可以通过 `show master status` 来获得,因此,你一定要确保你记下了它们的值。
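其输出类似下面这样(其中的日志文件名和位置值为假设的示例,具体数值因环境而异):

```
MariaDB [(none)]> SHOW MASTER STATUS;
+--------------------+----------+--------------+------------------+
| File               | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+--------------------+----------+--------------+------------------+
| mariadb-bin.000001 |      615 | important    |                  |
+--------------------+----------+--------------+------------------+
```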
|
||||
|
||||
这些命令运行完成之后,输入 'exit' 退出这个会话。
|
||||
这些命令运行完成之后,输入 `exit` 退出这个会话。
|
||||
|
||||
### 第 2 步 - 创建一个数据库备份,并将它移动到从服务器上
|
||||
|
||||
现在,我们需要去为我们的数据库 'important' 创建一个备份,可以使用 'mysqldump' 命令去备份。
|
||||
现在,我们需要去为我们的数据库 `important` 创建一个备份,可以使用 `mysqldump` 命令去备份。
|
||||
|
||||
```
|
||||
$ mysqldump -u root -p important > important_backup.sql
|
||||
```
|
||||
|
||||
备份完成后,我们需要重新登陆到 MariaDB 数据库,并解锁我们的表。
|
||||
备份完成后,我们需要重新登录到 MariaDB 数据库,并解锁我们的表。
|
||||
|
||||
```
|
||||
$ mysql -u root -p
|
||||
@ -78,7 +77,7 @@ $ UNLOCK TABLES;
|
||||
|
||||
### 第 3 步:配置从服务器
|
||||
|
||||
我们再次去编辑 '/etc/my.cnf' 文件,找到配置文件中的 [mysqld] 节,然后输入如下内容:
|
||||
我们再次去编辑(从服务器上的) `/etc/my.cnf` 文件,找到配置文件中的 `[mysqld]` 节,然后输入如下内容:
|
||||
|
||||
```
|
||||
[mysqld]
|
||||
@ -93,7 +92,7 @@ replicate-do-db=important
|
||||
$ mysql -u root -p < /data/important_backup.sql
|
||||
```
|
||||
|
||||
当这个恢复过程结束之后,我们将通过登入到从服务器上的 MariaDB,为数据库 'important' 上的用户 'slaveuser' 授权。
|
||||
当这个恢复过程结束之后,我们将通过登入到从服务器上的 MariaDB,为数据库 `important` 上的用户 `slaveuser` 授权。
|
||||
|
||||
```
|
||||
$ mysql -u root -p
|
||||
@ -110,9 +109,9 @@ FLUSH PRIVILEGES;
|
||||
$ systemctl restart mariadb
|
||||
```
|
||||
|
||||
### **第 4 步:启动复制**
|
||||
### 第 4 步:启动复制
|
||||
|
||||
记住,我们需要 **MASTER_LOG_FILE 和 MASTER_LOG_POS** 变量的值,它可以通过在主服务器上运行 'SHOW MASTER STATUS' 获得。现在登入到从服务器上的 MariaDB,然后通过运行下列命令,告诉我们的从服务器它应该去哪里找主服务器。
|
||||
记住,我们需要 `MASTER_LOG_FILE` 和 `MASTER_LOG_POS` 变量的值,它可以通过在主服务器上运行 `SHOW MASTER STATUS` 获得。现在登入到从服务器上的 MariaDB,然后通过运行下列命令,告诉我们的从服务器它应该去哪里找主服务器。
|
||||
|
||||
```
|
||||
STOP SLAVE;
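-- 下面的 CHANGE MASTER TO 是一个补充示例:MASTER_LOG_FILE 和 MASTER_LOG_POS
-- 为假设值,请替换为主服务器上 SHOW MASTER STATUS 的实际输出,密码亦请自行替换
CHANGE MASTER TO
  MASTER_HOST='192.168.1.120',
  MASTER_USER='slaveuser',
  MASTER_PASSWORD='<你的密码>',
  MASTER_LOG_FILE='mariadb-bin.000001',
  MASTER_LOG_POS=615;
START SLAVE;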
|
||||
@ -131,13 +130,13 @@ SHOW SLAVE STATUS\G;
|
||||
$ mysql -u root -p
|
||||
```
|
||||
|
||||
选择数据库为 'important':
|
||||
选择数据库为 `important`:
|
||||
|
||||
```
|
||||
use important;
|
||||
```
|
||||
|
||||
在这个数据库上创建一个名为 ‘test’ 的表:
|
||||
在这个数据库上创建一个名为 `test` 的表:
|
||||
|
||||
```
|
||||
create table test (c int);
|
||||
@ -175,10 +174,10 @@ via: http://linuxtechlab.com/creating-master-slave-replication-mariadb/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/installing-configuring-mariadb-rhelcentos/
|
||||
[2]:http://linuxtechlab.com/mariadb-administration-commands-beginners/
|
||||
[1]:https://linux.cn/article-8320-1.html
|
||||
[2]:https://linux.cn/article-9306-1.html
|
@ -1,12 +1,12 @@
|
||||
用 mod 保护您的网站免受应用层 DOS 攻击
|
||||
用 Apache 服务器模块保护您的网站免受应用层 DOS 攻击
|
||||
======
|
||||
|
||||
有多种恶意攻击网站的方法,比较复杂的方法要涉及数据库和编程方面的技术知识。一个更简单的方法被称为“拒绝服务”或“DOS”攻击。这个攻击方法的名字来源于它的意图:使普通客户或网站访问者的正常服务请求被拒绝。
|
||||
有多种可以导致网站下线的攻击方法,比较复杂的方法要涉及数据库和编程方面的技术知识。一个更简单的方法被称为“<ruby>拒绝服务<rt>Denial Of Service</rt></ruby>”(DOS)攻击。这个攻击方法的名字来源于它的意图:使普通客户或网站访问者的正常服务请求被拒绝。
|
||||
|
||||
一般来说,有两种形式的 DOS 攻击:
|
||||
|
||||
1. OSI 模型的三、四层,即网络层攻击
|
||||
2. OSI 模型的七层,即应用层攻击
|
||||
1. OSI 模型的三、四层,即网络层攻击
|
||||
2. OSI 模型的七层,即应用层攻击
|
||||
|
||||
第一种类型的 DOS 攻击——网络层,发生于当大量的垃圾流量流向网页服务器时。当垃圾流量超过网络的处理能力时,网站就会宕机。
|
||||
|
||||
@ -14,174 +14,172 @@
|
||||
|
||||
本文将着眼于缓解应用层攻击,因为减轻网络层攻击需要大量的可用带宽和上游提供商的合作,这通常不是通过配置网络服务器就可以做到的。
|
||||
|
||||
通过配置普通的网页服务器,可以保护网页免受应用层攻击,至少是适度的防护。防止这种形式的攻击是非常重要的,因为 [Cloudflare][1] 最近 [报道][2] 了网络层攻击的数量正在减少,而应用层攻击的数量则在增加。
|
||||
通过配置普通的网页服务器,可以保护网页免受应用层攻击,至少是适度的防护。防止这种形式的攻击是非常重要的,因为 [Cloudflare][1] 最近 [报告称][2] 网络层攻击的数量正在减少,而应用层攻击的数量则在增加。
|
||||
|
||||
本文将根据 [zdziarski 的博客][4] 来解释如何使用 Apache2 的模块 [mod_evasive][3]。
|
||||
本文将介绍如何使用 [zdziarski][4] 开发的 Apache2 的模块 [mod_evasive][3]。
|
||||
|
||||
另外,mod_evasive 会阻止攻击者试图通过尝试数百个组合来猜测用户名和密码,即暴力攻击。
|
||||
另外,mod_evasive 会阻止攻击者通过尝试数百个用户名和密码的组合来进行猜测(即暴力攻击)的企图。
|
||||
|
||||
Mod_evasive 会记录来自每个 IP 地址的请求的数量。当这个数字超过相应 IP 地址的几个阈值之一时,会出现一个错误页面。错误页面所需的资源要比一个能够响应合法访问的在线网站少得多。
|
||||
mod_evasive 会记录来自每个 IP 地址的请求的数量。当这个数字超过相应 IP 地址的几个阈值之一时,会出现一个错误页面。错误页面所需的资源要比一个能够响应合法访问的在线网站少得多。
|
||||
|
||||
### 在 Ubuntu 16.04 上安装 mod_evasive
|
||||
|
||||
Ubuntu 16.04 默认的软件库中包含了 mod_evasive,名称为“libapache2-mod-evasive”。您可以使用 `apt-get` 来完成安装:
|
||||
Ubuntu 16.04 默认的软件库中包含了 mod_evasive,名称为 “libapache2-mod-evasive”。您可以使用 `apt-get` 来完成安装:
|
||||
|
||||
```
|
||||
apt-get update
|
||||
apt-get upgrade
|
||||
apt-get install libapache2-mod-evasive
|
||||
|
||||
```
|
||||
|
||||
现在我们需要配置 mod_evasive。
|
||||
|
||||
它的配置文件位于 `/etc/apache2/mods-available/evasive.conf`。默认情况下,所有模块的设置在安装后都会被注释掉。因此,在修改配置文件之前,模块不会干扰到网站流量。
|
||||
|
||||
```
|
||||
<IfModule mod_evasive20.c>
|
||||
#DOSHashTableSize 3097
|
||||
#DOSPageCount 2
|
||||
#DOSSiteCount 50
|
||||
#DOSPageInterval 1
|
||||
#DOSSiteInterval 1
|
||||
#DOSBlockingPeriod 10
|
||||
|
||||
#DOSEmailNotify you@yourdomain.com
|
||||
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
|
||||
#DOSLogDir "/var/log/mod_evasive"
|
||||
<IfModule mod_evasive20.c>
|
||||
#DOSHashTableSize 3097
|
||||
#DOSPageCount 2
|
||||
#DOSSiteCount 50
|
||||
#DOSPageInterval 1
|
||||
#DOSSiteInterval 1
|
||||
#DOSBlockingPeriod 10
|
||||
|
||||
#DOSEmailNotify you@yourdomain.com
|
||||
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
|
||||
#DOSLogDir "/var/log/mod_evasive"
|
||||
</IfModule>
|
||||
|
||||
```
|
||||
|
||||
第一部分的参数的含义如下:
|
||||
|
||||
* **DOSHashTableSize** - 正在访问网站的 IP 地址列表及其请求数。
|
||||
* **DOSPageCount** - 在一定的时间间隔内,每个的页面的请求次数。时间间隔由 DOSPageInterval 定义。
|
||||
* **DOSPageInterval** - mod_evasive 统计页面请求次数的时间间隔。
|
||||
* **DOSSiteCount** - 与 DOSPageCount 相同,但统计的是网站内任何页面的来自相同 IP 地址的请求数量。
|
||||
* **DOSSiteInterval** - mod_evasive 统计网站请求次数的时间间隔。
|
||||
* **DOSBlockingPeriod** - 某个 IP 地址被加入黑名单的时长(以秒为单位)。
|
||||
|
||||
* `DOSHashTableSize` - 存放各个 IP 地址及其请求数的哈希表的大小。
|
||||
* `DOSPageCount` - 在一定的时间间隔内,每个页面的请求次数。时间间隔由 DOSPageInterval 定义。
|
||||
* `DOSPageInterval` - mod_evasive 统计页面请求次数的时间间隔。
|
||||
* `DOSSiteCount` - 与 `DOSPageCount` 相同,但统计的是来自相同 IP 地址对网站内任何页面的请求数量。
|
||||
* `DOSSiteInterval` - mod_evasive 统计网站请求次数的时间间隔。
|
||||
* `DOSBlockingPeriod` - 某个 IP 地址被加入黑名单的时长(以秒为单位)。
|
||||
|
||||
如果使用上面显示的默认配置,则在如下情况下,一个 IP 地址会被加入黑名单:
|
||||
|
||||
* 每秒请求同一页面超过两次。
|
||||
* 每秒请求 50 个以上不同页面。
|
||||
|
||||
|
||||
如果某个 IP 地址超过了这些阈值,则被加入黑名单 10 秒钟。
|
||||
|
||||
这看起来可能不算久,但是,mod_evasive 将一直监视页面请求,包括在黑名单中的 IP 地址,并重置其加入黑名单的起始时间。只要一个 IP 地址一直尝试使用 DOS 攻击该网站,它将始终在黑名单中。
|
||||
|
||||
其余的参数是:
|
||||
|
||||
* **DOSEmailNotify** - 用于接收 DOS 攻击信息和 IP 地址黑名单的电子邮件地址。
|
||||
* **DOSSystemCommand** - 检测到 DOS 攻击时运行的命令。
|
||||
* **DOSLogDir** - 用于存放 mod_evasive 的临时文件的目录。
|
||||
|
||||
* `DOSEmailNotify` - 用于接收 DOS 攻击信息和 IP 地址黑名单的电子邮件地址。
|
||||
* `DOSSystemCommand` - 检测到 DOS 攻击时运行的命令。
|
||||
* `DOSLogDir` - 用于存放 mod_evasive 的临时文件的目录。
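`DOSSystemCommand` 中的 `%s` 会被替换为触发封禁的 IP 地址。下面是一个假设性的示例(并非本文采用的配置,仅作演示),用 `iptables` 直接封禁该 IP;注意 Apache 的运行用户需要有执行该命令的 sudo 权限:

```
DOSSystemCommand "sudo /sbin/iptables -A INPUT -s %s -j DROP"
```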
|
||||
|
||||
### 配置 mod_evasive
|
||||
|
||||
默认的配置是一个很好的开始,因为它的黑名单里不该有任何合法的用户。取消配置文件中的所有参数(DOSSystemCommand 除外)的注释,如下所示:
|
||||
默认的配置是一个很好的开始,因为它不会阻塞任何合法的用户。取消配置文件中的所有参数(`DOSSystemCommand` 除外)的注释,如下所示:
|
||||
|
||||
```
|
||||
<IfModule mod_evasive20.c>
|
||||
DOSHashTableSize 3097
|
||||
DOSPageCount 2
|
||||
DOSSiteCount 50
|
||||
DOSPageInterval 1
|
||||
DOSSiteInterval 1
|
||||
DOSBlockingPeriod 10
|
||||
|
||||
DOSEmailNotify JohnW@example.com
|
||||
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
|
||||
DOSLogDir "/var/log/mod_evasive"
|
||||
<IfModule mod_evasive20.c>
|
||||
DOSHashTableSize 3097
|
||||
DOSPageCount 2
|
||||
DOSSiteCount 50
|
||||
DOSPageInterval 1
|
||||
DOSSiteInterval 1
|
||||
DOSBlockingPeriod 10
|
||||
|
||||
DOSEmailNotify JohnW@example.com
|
||||
#DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"
|
||||
DOSLogDir "/var/log/mod_evasive"
|
||||
</IfModule>
|
||||
|
||||
```
|
||||
|
||||
必须要创建日志目录并且要赋予其与 apache 进程相同的所有者。这里创建的目录是 `/var/log/mod_evasive` ,并且在 Ubuntu 上将该目录的所有者和组设置为 `www-data` ,与 Apache 服务器相同:
|
||||
必须要创建日志目录并且要赋予其与 apache 进程相同的所有者。这里创建的目录是 `/var/log/mod_evasive` ,并且在 Ubuntu 上将该目录的所有者和组设置为 `www-data` ,与 Apache 服务器相同:
|
||||
|
||||
```
|
||||
mkdir /var/log/mod_evasive
|
||||
chown www-data:www-data /var/log/mod_evasive
|
||||
|
||||
```
|
||||
|
||||
在编辑了 Apache 的配置之后,特别是在正在运行的网站上,在重新启动或重新加载之前,最好检查一下语法,因为语法错误将影响 Apache 的启动从而使网站宕机。
|
||||
|
||||
Apache 包含一个辅助命令,是一个配置语法检查器。只需运行以下命令来检查您的语法:
|
||||
|
||||
```
|
||||
apachectl configtest
|
||||
|
||||
```
|
||||
|
||||
如果您的配置是正确的,会得到如下结果:
|
||||
|
||||
```
|
||||
Syntax OK
|
||||
|
||||
```
|
||||
|
||||
但是,如果出现问题,您会被告知在哪部分发生了什么错误,例如:
|
||||
|
||||
```
|
||||
AH00526: Syntax error on line 6 of /etc/apache2/mods-enabled/evasive.conf:
|
||||
DOSSiteInterval takes one argument, Set site interval
|
||||
Action 'configtest' failed.
|
||||
The Apache error log may have more information.
|
||||
|
||||
```
|
||||
|
||||
如果您的配置通过了 configtest 的测试,那么这个模块可以安全地被启用并且 Apache 可以重新加载:
|
||||
|
||||
```
|
||||
a2enmod evasive
|
||||
systemctl reload apache2.service
|
||||
|
||||
```
|
||||
|
||||
Mod_evasive 现在已配置好并正在运行了。
|
||||
mod_evasive 现在已配置好并正在运行了。
|
||||
|
||||
### 测试
|
||||
|
||||
为了测试 mod_evasive,我们只需要向服务器提出足够的网页访问请求,以使其超出阈值,并记录来自 Apache 的响应代码。
|
||||
|
||||
一个正常并成功的页面请求将收到如下响应:
|
||||
|
||||
```
|
||||
HTTP/1.1 200 OK
|
||||
|
||||
```
|
||||
|
||||
但是,被 mod_evasive 拒绝的将返回以下内容:
|
||||
|
||||
```
|
||||
HTTP/1.1 403 Forbidden
|
||||
|
||||
```
|
||||
|
||||
以下脚本会尽可能迅速地向本地主机(127.0.0.1,localhost)的 80 端口发送 HTTP 请求,并打印出每个请求的响应代码。
|
||||
|
||||
你所要做的就是把下面的 bash 脚本复制到一个文件中,例如 `mod_evasive_test.sh`:
|
||||
|
||||
```
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
for i in {1..50}; do
|
||||
curl -s -I 127.0.0.1 | head -n 1
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
for i in {1..50}; do
|
||||
curl -s -I 127.0.0.1 | head -n 1
|
||||
done
|
||||
|
||||
```
|
||||
|
||||
这个脚本的部分含义如下:
|
||||
|
||||
* curl - 这是一个发出网络请求的命令。
|
||||
* -s - 隐藏进度表。
|
||||
* -I - 仅显示响应头部信息。
|
||||
* head - 打印文件的第一部分。
|
||||
* -n 1 - 只显示第一行。
|
||||
* `curl` - 这是一个发出网络请求的命令。
|
||||
* `-s` - 隐藏进度表。
|
||||
* `-I` - 仅显示响应头部信息。
|
||||
* `head` - 打印文件的第一部分。
|
||||
* `-n 1` - 只显示第一行。
|
||||
|
||||
然后赋予其执行权限:
|
||||
|
||||
```
|
||||
chmod 755 mod_evasive_test.sh
|
||||
|
||||
```
|
||||
|
||||
在启用 mod_evasive **之前**,脚本运行时,将会看到 50 行“HTTP / 1.1 200 OK”的返回值。
|
||||
在启用 mod_evasive **之前**,脚本运行时,将会看到 50 行 “HTTP/1.1 200 OK” 的返回值。
|
||||
|
||||
但是,启用 mod_evasive 后,您将看到以下内容:
|
||||
|
||||
```
|
||||
HTTP/1.1 200 OK
|
||||
HTTP/1.1 200 OK
|
||||
@ -191,13 +189,11 @@ HTTP/1.1 403 Forbidden
|
||||
HTTP/1.1 403 Forbidden
|
||||
HTTP/1.1 403 Forbidden
|
||||
...
|
||||
|
||||
```
|
||||
|
||||
前两个请求被允许,但是在同一秒内第三个请求发出时,mod_evasive 拒绝了任何进一步的请求。您还将收到一封电子邮件(邮件地址在选项 `DOSEmailNotify` 中设置),通知您有 DOS 攻击被检测到。
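按照 mod_evasive 的惯常行为,被封禁的 IP 还会在 `DOSLogDir` 指定的目录中留下一个名为 `dos-<IP>` 的锁文件,可以据此确认封禁记录(下面的输出为假设的示例):

```
$ ls /var/log/mod_evasive
dos-127.0.0.1
```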
|
||||
|
||||
Mod_evasive 现在已经在保护您的网站啦!
|
||||
|
||||
mod_evasive 现在已经在保护您的网站啦!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -205,7 +201,7 @@ via: https://bash-prompt.net/guides/mod_proxy/
|
||||
|
||||
作者:[Elliot Cooper][a]
|
||||
译者:[jessie-pang](https://github.com/jessie-pang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,201 @@
|
||||
如何统计 Linux 中文件和文件夹/目录的数量
|
||||
======
|
||||
|
||||
嗨,伙计们,今天我们再次带来一系列可以在多方面帮助到你的复杂命令。通过这些命令,你可以统计当前目录中的文件和目录数量、进行递归计数,以及统计特定用户创建的文件列表等。
|
||||
|
||||
在本教程中,我们将向您展示如何使用多个命令,并使用 `ls`、`egrep`、`wc` 和 `find` 命令执行一些高级操作。 下面的命令将可用在多个方面。
|
||||
|
||||
为了实验,我打算总共创建 7 个文件和 2 个文件夹(5 个常规文件和 2 个隐藏文件)。 下面的 `tree` 命令的输出清楚的展示了文件和文件夹列表。
|
||||
|
||||
```
|
||||
# tree -a /opt
|
||||
/opt
|
||||
├── magi
|
||||
│ └── 2g
|
||||
│ ├── test5.txt
|
||||
│ └── .test6.txt
|
||||
├── test1.txt
|
||||
├── test2.txt
|
||||
├── test3.txt
|
||||
├── .test4.txt
|
||||
└── test.txt
|
||||
|
||||
2 directories, 7 files
|
||||
```
|
||||
|
||||
### 示例-1
|
||||
|
||||
统计当前目录的文件(不包括隐藏文件)。 运行以下命令以确定当前目录中有多少个文件,并且不计算点文件(LCTT 译注:点文件即以“.” 开头的文件,它们在 Linux 默认是隐藏的)。
|
||||
|
||||
```
|
||||
# ls -l . | egrep -c '^-'
|
||||
4
|
||||
```
|
||||
|
||||
**细节:**
|
||||
|
||||
* `ls` : 列出目录内容
|
||||
* `-l` : 使用长列表格式
|
||||
* `.` : 列出有关文件的信息(默认为当前目录)
|
||||
* `|` : 将一个程序的输出发送到另一个程序进行进一步处理的控制操作符
|
||||
* `egrep` : 打印符合模式的行
|
||||
* `-c` : 通用输出控制
|
||||
* `'^-'` : 以“-”开头的行(`ls -l` 列出长列表时,行首的 “-” 代表普通文件)
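顺便一提,当文件名中包含换行符等特殊字符时,解析 `ls` 的输出可能不可靠。一个更稳妥的替代方案是改用 `find`(补充示例,`! -name ".*"` 用于排除点文件):

```
# find . -maxdepth 1 -type f ! -name ".*" | wc -l
4
```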
|
||||
|
||||
### 示例-2
|
||||
|
||||
统计当前目录中包含隐藏文件在内的文件数,即把当前目录中的点文件也计算在内。
|
||||
|
||||
```
|
||||
# ls -la . | egrep -c '^-'
|
||||
5
|
||||
```
|
||||
|
||||
### 示例-3
|
||||
|
||||
运行以下命令来计数当前目录的文件和文件夹。 它会计算所有的文件和目录。
|
||||
|
||||
```
|
||||
# ls -l | wc -l
|
||||
5
|
||||
```
|
||||
|
||||
**细节:**
|
||||
|
||||
* `ls` : 列出目录内容
|
||||
* `-l` : 使用长列表格式
|
||||
* `|` : 将一个程序的输出发送到另一个程序进行进一步处理的控制操作符
|
||||
* `wc` : 这是一个统计每个文件的换行符、单词和字节数的命令
|
||||
* `-l` : 输出换行符的数量
|
||||
|
||||
### 示例-4
|
||||
|
||||
统计当前目录包含隐藏文件和目录在内的文件和文件夹。
|
||||
|
||||
```
|
||||
# ls -la | wc -l
|
||||
8
|
||||
```
|
||||
|
||||
### 示例-5
|
||||
|
||||
递归计算当前目录的文件,包括隐藏文件。
|
||||
|
||||
```
|
||||
# find . -type f | wc -l
|
||||
7
|
||||
```
|
||||
|
||||
**细节 :**
|
||||
|
||||
* `find` : 搜索目录结构中的文件
|
||||
* `-type` : 文件类型
|
||||
* `f` : 常规文件
|
||||
* `wc` : 这是一个统计每个文件的换行符、单词和字节数的命令
|
||||
* `-l` : 输出换行符的数量
|
||||
|
||||
### 示例-6
|
||||
|
||||
使用 `tree` 命令输出目录和文件数(不包括隐藏文件)。
|
||||
|
||||
```
|
||||
# tree | tail -1
|
||||
2 directories, 5 files
|
||||
```
|
||||
|
||||
### 示例-7
|
||||
|
||||
使用包含隐藏文件的 `tree` 命令输出目录和文件计数。
|
||||
|
||||
```
|
||||
# tree -a | tail -1
|
||||
2 directories, 7 files
|
||||
```
|
||||
|
||||
### 示例-8
|
||||
|
||||
运行下面的命令递归计算包含隐藏目录在内的目录数。
|
||||
|
||||
```
|
||||
# find . -type d | wc -l
|
||||
3
|
||||
```
|
||||
|
||||
### 示例-9
|
||||
|
||||
根据文件扩展名计数文件数量。 这里我们要计算 `.txt` 文件。
|
||||
|
||||
```
|
||||
# find . -name "*.txt" | wc -l
|
||||
7
|
||||
```
|
||||
|
||||
### 示例-10
|
||||
|
||||
组合使用 `echo` 命令和 `wc` 命令统计当前目录中的所有文件。 `4` 表示当前目录中的文件数量。
|
||||
|
||||
```
|
||||
# echo *.* | wc
|
||||
1 4 39
|
||||
```
|
||||
|
||||
### 示例-11
|
||||
|
||||
组合使用 `echo` 命令和 `wc` 命令来统计当前目录中的所有目录。 第二个 `1` 表示当前目录中的目录数量。
|
||||
|
||||
```
|
||||
# echo */ | wc
|
||||
1 1 6
|
||||
```
|
||||
|
||||
### 示例-12
|
||||
|
||||
组合使用 `echo` 命令和 `wc` 命令来统计当前目录中的所有文件和目录。 `5` 表示当前目录中的目录和文件的数量。
|
||||
|
||||
```
|
||||
# echo * | wc
|
||||
1 5 44
|
||||
```
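`wc` 的输出依次为行数、单词数和字节数。如果只想得到数量本身,可以让 `wc` 只输出单词数(补充示例):

```
# echo * | wc -w
5
```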
|
||||
|
||||
### 示例-13
|
||||
|
||||
统计系统(整个系统)中的文件数。
|
||||
|
||||
```
|
||||
# find / -type f | wc -l
|
||||
69769
|
||||
```
|
||||
|
||||
### 示例-14
|
||||
|
||||
统计系统(整个系统)中的文件夹数。
|
||||
|
||||
```
|
||||
# find / -type d | wc -l
|
||||
8819
|
||||
```
|
||||
|
||||
### 示例-15
|
||||
|
||||
运行以下命令来计算系统(整个系统)中的文件、文件夹、硬链接和符号链接数。
|
||||
|
||||
```
|
||||
# find / -type d -exec echo dirs \; -o -type l -exec echo symlinks \; -o -type f -links +1 -exec echo hardlinks \; -o -type f -exec echo files \; | sort | uniq -c
|
||||
8779 dirs
|
||||
69343 files
|
||||
20 hardlinks
|
||||
11646 symlinks
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-count-the-number-of-files-and-folders-directories-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/magesh/
|
||||
[1]:https://www.2daygeek.com/empty-a-file-delete-contents-lines-from-a-file-remove-matching-string-from-a-file-remove-empty-blank-lines-from-a-file/
|
70
published/20171215 Linux Vs Unix.md
Normal file
70
published/20171215 Linux Vs Unix.md
Normal file
@ -0,0 +1,70 @@
|
||||
Linux 与 Unix 之差异
|
||||
==============
|
||||
|
||||
[][1]
|
||||
|
||||
在计算机时代,相当一部分的人错误地认为 **Unix** 和 **Linux** 操作系统是一样的。然而,事实恰好相反。让我们仔细看看。
|
||||
|
||||
### 什么是 Unix?
|
||||
|
||||
[][2]
|
||||
|
||||
在 IT 领域,以操作系统而为人所知的 Unix,是 1969 年 AT&T 公司在美国新泽西所开发的(目前它的商标权由国际开放标准组织所拥有)。大多数的操作系统都受到了 Unix 的启发,而 Unix 也受到了未完成的 Multics 系统的启发。Unix 的另一版本是来自贝尔实验室的 Plan 9。
|
||||
|
||||
#### Unix 被用于哪里?
|
||||
|
||||
作为一个操作系统,Unix 大多被用在服务器、工作站,现在也有用在个人计算机上。它在创建互联网、计算机网络或客户端/服务器模型方面发挥着非常重要的作用。
|
||||
|
||||
#### Unix 系统的特点
|
||||
|
||||
* 支持多任务
|
||||
* 相比 Multics 操作更加简单
|
||||
* 所有数据以纯文本形式存储
|
||||
* 采用单一根文件的树状存储
|
||||
* 能够同时访问多用户账户
|
||||
|
||||
#### Unix 操作系统的组成
|
||||
|
||||
**a)** <ruby>单内核<rt>monolithic kernel</rt></ruby>,负责低级操作以及由用户发起的操作,与内核的通信通过系统调用进行。
|
||||
**b)** 系统工具
|
||||
**c)** 其他应用程序
|
||||
|
||||
### 什么是 Linux?
|
||||
|
||||
[][4]
|
||||
|
||||
这是一个基于 Unix 操作系统原理的开源操作系统。正如开源的含义一样,它是一个可以自由下载的系统。它也可以通过编辑、添加及扩充其源代码而定制该系统。这是它最大的好处之一,而不像今天的其它操作系统(Windows、Mac OS X 等)需要付费。Unix 系统不是创建新系统的唯一模版,另外一个重要的因素是 MINIX 系统;不像 Linux,此版本被其缔造者(Andrew Tanenbaum)用于商业系统。
|
||||
|
||||
Linux 由 Linus Torvalds 开发于 1991 年,这是一个其作为个人兴趣的操作系统。为什么 Linux 借鉴 Unix 的一个主要原因是因为其简洁性。Linux 第一个官方版本(0.01)发布于 1991 年 9 月 17 日。虽然这个系统并不是很完美和完善,但 Linus 对它产生很大的兴趣,并在几天内,Linus 发出了一些关于 Linux 源代码扩展以及其他想法的电子邮件。
|
||||
|
||||
#### Linux 的特点
|
||||
|
||||
Linux 的基石是其内核,它借鉴了 Unix 的基本特点,并遵循 **POSIX** 标准和<ruby>单一 UNIX 规范<rt>Single UNIX Specification</rt></ruby>。该操作系统的官方名字取自 **Linus**,名称尾部的 “x” 则与 **Unix 系统**相呼应。
|
||||
|
||||
#### 主要功能
|
||||
|
||||
* 同时运行多任务(多任务)
|
||||
* 程序可以包含一个或多个进程(多用途系统),且每个进程可能有一个或多个线程。
|
||||
* 多用户,因此它可以运行多个用户程序。
|
||||
* 个人帐户受适当授权的保护。
|
||||
* 因此账户准确地定义了系统控制权。
|
||||
|
||||
**企鹅 Tux** 的 Logo 作者是 Larry Ewing,他选择这个企鹅作为他的开源 **Linux 操作系统**的吉祥物。**Linus Torvalds** 最初提出的这个新操作系统的名字为 “Freax”,即 “自由(free)” + “奇异(freak)” + x(UNIX 系统)的结合字;而 “Linux” 这个名字则来自存放它首个版本的 FTP 服务器。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxandubuntu.com/home/linux-vs-unix
|
||||
|
||||
作者:[linuxandubuntu][a]
|
||||
译者:[HardworkFish](https://github.com/HardworkFish)
|
||||
校对:[imquanquan](https://github.com/imquanquan), [wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxandubuntu.com
|
||||
[1]:http://www.linuxandubuntu.com/home/linux-vs-unix
|
||||
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/unix_orig.png
|
||||
[3]:http://www.unix.org/what_is_unix.html
|
||||
[4]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux_orig.png
|
||||
[5]:https://www.linux.com
|
@ -0,0 +1,140 @@
|
||||
为 Linux 初学者讲解 wc 命令
|
||||
======
|
||||
|
||||
在命令行工作时,有时您可能想要知道一个文件中的单词数量、字节数、甚至换行数量。如果您正在寻找这样做的工具,您会很高兴地知道,在 Linux 中,存在一个命令行实用程序,它被称为 `wc` ,它为您完成所有这些工作。在本文中,我们将通过简单易懂的例子来讨论这个工具。
|
||||
|
||||
但是在我们开始之前,值得一提的是,本教程中提供的所有示例都在 Ubuntu 16.04 上进行了测试。
|
||||
|
||||
### Linux wc 命令
|
||||
|
||||
`wc` 命令打印每个输入文件的新行、单词和字节数。以下是该命令行工具的语法:
|
||||
|
||||
```
|
||||
wc [OPTION]... [FILE]...
|
||||
```
|
||||
|
||||
以下是 `wc` 的 man 文档的解释:
|
||||
|
||||
```
|
||||
为每个文件打印新行、单词和字节数,如果指定多于一个文件,也列出总的行数。单词是由空格分隔的非零长度的字符序列。如果没有指定文件,或当文件为 `-`,则读取标准输入。
|
||||
```
|
||||
|
||||
下面的 Q&A 样式的示例将会让您更好地了解 `wc` 命令的基本用法。
|
||||
|
||||
注意:在所有示例中我们将使用一个名为 `file.txt` 的文件作为输入文件。以下是该文件包含的内容:
|
||||
|
||||
```
|
||||
hi
|
||||
hello
|
||||
how are you
|
||||
thanks.
|
||||
```
|
||||
|
||||
### Q1. 如何打印字节数
|
||||
|
||||
使用 `-c` 命令行选项打印字节数。
|
||||
|
||||
```
|
||||
wc -c file.txt
|
||||
```
|
||||
|
||||
下面是这个命令在我们的系统上产生的输出:
|
||||
|
||||
[![如何打印字节数][1]][2]
|
||||
|
||||
文件包含 29 个字节。
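这个字节数可以手工验证:把每一行的字符数加上行尾的换行符即可(补充演算,用 shell 算术展示):

```
$ echo $((3 + 6 + 12 + 8))    # "hi\n"=3,"hello\n"=6,"how are you\n"=12,"thanks.\n"=8
29
```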
|
||||
|
||||
### Q2. 如何打印字符数
|
||||
|
||||
要打印字符数,请使用 `-m` 命令行选项。
|
||||
|
||||
```
|
||||
wc -m file.txt
|
||||
```
|
||||
|
||||
下面是这个命令在我们的系统上产生的输出:
|
||||
|
||||
[![如何打印字符数][3]][4]
|
||||
|
||||
文件包含 29 个字符。
|
||||
|
||||
### Q3. 如何打印换行数
|
||||
|
||||
使用 `-l` 命令选项来打印文件中的新行数:
|
||||
|
||||
```
|
||||
wc -l file.txt
|
||||
```
|
||||
|
||||
这里是我们的例子的输出:
|
||||
|
||||
[![如何打印换行数][5]][6]
|
||||
|
||||
### Q4. 如何打印单词数
|
||||
|
||||
要打印文件中的单词数量,请使用 `-w` 命令选项。
|
||||
|
||||
```
|
||||
wc -w file.txt
|
||||
```
|
||||
|
||||
在我们的例子中命令的输出如下:
|
||||
|
||||
[![如何打印字数][7]][8]
|
||||
|
||||
这显示文件中有 6 个单词。
|
||||
|
||||
### Q5. 如何打印最长行的显示宽度或长度
|
||||
|
||||
如果您想要打印输入文件中最长行的长度,请使用 `-L` 命令行选项。
|
||||
|
||||
```
|
||||
wc -L file.txt
|
||||
```
|
||||
|
||||
下面是在我们的案例中命令产生的结果:
|
||||
|
||||
[![如何打印最长行的显示宽度或长度][9]][10]
|
||||
|
||||
所以文件中最长的行长度是 11。
|
||||
|
||||
### Q6. 如何从文件读取输入文件名
|
||||
|
||||
如果您有多个文件名,并且您希望 `wc` 从一个文件中读取它们,那么使用 `--files0-from` 选项。
|
||||
|
||||
```
|
||||
wc --files0-from=names.txt
|
||||
```
|
||||
|
||||
[![如何从文件读取输入文件名][11]][12]
|
||||
|
||||
如你所见 `wc` 命令,在这个例子中,输出了文件 `file.txt` 的行、单词和字符计数。文件名为 `file.txt` 的文件在 `name.txt` 文件中提及。值得一提的是,要成功地使用这个选项,文件中的文件名应该用 NUL 终止——您可以通过键入`Ctrl + v` 然后按 `Ctrl + Shift + @` 来生成这个字符。
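生成这样一个以 NUL 结尾的文件名列表,也可以直接用 `printf` 完成(补充示例,`\0` 即 NUL 字符):

```
$ printf 'file.txt\0' > names.txt
$ wc --files0-from=names.txt
```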
|
||||
|
||||
### 结论
|
||||
|
||||
您会发现,无论是理解还是使用,`wc` 都是一个简单的命令。我们已经介绍了几乎所有的命令行选项,所以只要练习了这里介绍的内容,您就可以随时在日常工作中使用该工具了。想了解更多关于 `wc` 的信息,请参考它的 [man 文档][13]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-wc-command-explained-for-beginners-6-examples/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[stevenzdg988](https://github.com/stevenzdg988)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-c-option.png
|
||||
[2]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-c-option.png
|
||||
[3]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-m-option.png
|
||||
[4]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-m-option.png
|
||||
[5]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-l-option.png
|
||||
[6]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-l-option.png
|
||||
[7]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-w-option.png
|
||||
[8]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-w-option.png
|
||||
[9]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-L-option.png
|
||||
[10]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-L-option.png
|
||||
[11]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/wc-file0-from-option.png
|
||||
[12]:https://www.howtoforge.com/images/usage_of_pfsense_to_block_dos_attack_/big/wc-file0-from-option.png
|
||||
[13]:https://linux.die.net/man/1/wc
|
@ -1,40 +1,41 @@
|
||||
cURL VS wget:根据两者的差异和使用习惯,你应该选用哪一个?
|
||||
cURL 与 wget:你应该选用哪一个?
|
||||
======
|
||||
|
||||

|
||||
|
||||
当想要直接通过 Linux 命令行下载文件,马上就能想到两个工具:‘wget’和‘cURL’。它们有很多共享的特征,可以很轻易的完成一些相同的任务。
|
||||
当想要直接通过 Linux 命令行下载文件,马上就能想到两个工具:wget 和 cURL。它们有很多一样的特征,可以很轻易的完成一些相同的任务。
|
||||
|
||||
虽然它们有一些相似的特征,但它们并不是完全一样。这两个程序适用与不同的场合,在特定场合下,都拥有各自的特性。
|
||||
|
||||
### cURL vs wget: 相似之处
|
||||
### cURL vs wget: 相似之处
|
||||
|
||||
wget 和 cURL 都可以下载内容。它们的内核就是这么设计的。它们都可以向互联网发送请求并返回请求项。这可以是文件、图片或者是其他诸如网站的原始 HTML 之类。
|
||||
wget 和 cURL 都可以下载内容。它们的核心就是这么设计的。它们都可以向互联网发送请求并返回请求项。这可以是文件、图片或者是其他诸如网站的原始 HTML 之类。
|
||||
|
||||
这两个程序都可以进行 HTTP POST 请求。这意味着它们都可以向网站发送数据,比如说填充表单什么的。
|
||||
|
||||
由于这两者都是命令行工具,它们都被设计成脚本程序。wget 和 cURL 都可以写进你的 [Bash 脚本][1] ,自动与新内容交互,下载所需内容。
|
||||
由于这两者都是命令行工具,它们都被设计成可脚本化。wget 和 cURL 都可以写进你的 [Bash 脚本][1] ,自动与新内容交互,下载所需内容。
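例如,用两者分别下载同一个文件(URL 为假设的示例;`curl` 的 `-O` 选项表示按远程文件名保存):

```
wget https://example.com/file.tar.gz
curl -O https://example.com/file.tar.gz
```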
|
||||
|
||||
### wget 的优势
|
||||
|
||||
![wget download][2]
|
||||
|
||||
wget 简单直接。这意味着你能享受它超凡的下载速度。wget 是一个独立的程序,无需额外的资源库,更不会做出格的事情。
|
||||
wget 简单直接。这意味着你能享受它超凡的下载速度。wget 是一个独立的程序,无需额外的资源库,更不会做其范畴之外的事情。
|
||||
|
||||
wget 是专业的直接下载程序,支持递归下载。同时,它也允许你在网页或是 FTP 目录下载任何事物。
|
||||
wget 是专业的直接下载程序,支持递归下载。同时,它也允许你下载网页中或是 FTP 目录中的任何内容。
|
||||
|
||||
wget 拥有智能的默认项。他规定了很多在常规浏览器里的事物处理方式,比如 cookies 和重定向,这都不需要额外的配置。可以说,wget 简直就是无需说明,开罐即食!
|
||||
wget 拥有智能的默认设置。它规定了很多在常规浏览器里的事物处理方式,比如 cookies 和重定向,这都不需要额外的配置。可以说,wget 简直就是无需说明,开罐即食!
|
||||
|
||||
### cURL 优势
|
||||
|
||||
![cURL Download][3]
|
||||
|
||||
cURL是一个多功能工具。当然,他可以下载网络内容,但同时它也能做更多别的事情。
|
||||
cURL是一个多功能工具。当然,它可以下载网络内容,但同时它也能做更多别的事情。
|
||||
|
||||
cURL 技术支持库是:libcurl。这就意味着你可以基于 cURL 编写整个程序,允许你在 libcurl 库中基于图形环境下载程序,访问它所有的功能。
|
||||
cURL 的底层是 libcurl 库。这就意味着你可以基于 libcurl 编写整个程序,比如编写运行在图形环境下的下载程序,并访问它的所有功能。
|
||||
|
||||
cURL 宽泛的网络协议支持可能是其最大的卖点。cURL 支持访问 HTTP 和 HTTPS 协议,能够处理 FTP 传送。它支持 LDAP 协议,甚至支持 Samba 分享。实际上,你还可以用 cURL 收发邮件。
|
||||
cURL 宽泛的网络协议支持可能是其最大的卖点。cURL 支持访问 HTTP 和 HTTPS 协议,能够处理 FTP 传输。它支持 LDAP 协议,甚至支持 Samba 分享。实际上,你还可以用 cURL 收发邮件。
|
||||
|
||||
cURL 也有一些简洁的安全特性。cURL 支持安装许多 SSL/TLS 库,也支持通过网络代理访问,包括 SOCKS。这意味着,你可以越过 Tor. 使用cURL。
|
||||
cURL 也有一些简洁的安全特性。cURL 支持安装许多 SSL/TLS 库,也支持通过网络代理访问,包括 SOCKS。这意味着,你可以经由 Tor 来使用 cURL。
|
||||
|
||||
cURL 同样支持让数据发送变得更容易的 gzip 压缩技术。
|
||||
|
||||
@ -42,15 +43,15 @@ cURL 同样支持让数据发送变得更容易的 gzip 压缩技术。
|
||||
|
||||
那你应该使用 cURL 还是使用 wget?这个比较得看实际用途。如果你想快速下载并且没有担心参数标识的需求,那你应该使用轻便有效的 wget。如果你想做一些更复杂的使用,直觉告诉你,你应该选择 cURL。
|
||||
|
||||
cURL 支持你做很多事情。你可以把 cURL想象成一个精简的命令行网页浏览器。它支持几乎你能想到的所有协议,可以交互访问几乎所有在线内容。唯一和浏览器不同的是,cURL 不能显示接收到的相应信息。
|
||||
cURL 支持你做很多事情。你可以把 cURL 想象成一个精简的命令行网页浏览器。它支持几乎你能想到的所有协议,可以交互访问几乎所有在线内容。唯一和浏览器不同的是,cURL 不会渲染接收到的响应信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/curl-vs-wget/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[译者ID](https://github.com/CYLeft)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -3,11 +3,12 @@
|
||||
|
||||
![Learn xfs commands with examples][1]
|
||||
|
||||
在我们另一篇文章中,我带您领略了一下[什么事 xfs,xfs 的相关特性等内容 ][2]。本文我们来看一些常用的 xfs 管理命令。我们将会通过几个例子来讲解如何创建 xfs 文件系统,如何对 xfs 文件系统进行扩容,如何检测并修复 xfs 文件系统。
|
||||
在我们另一篇文章中,我带您领略了一下[什么是 xfs,xfs 的相关特性等内容][2]。本文我们来看一些常用的 xfs 管理命令。我们将会通过几个例子来讲解如何创建 xfs 文件系统,如何对 xfs 文件系统进行扩容,如何检测并修复 xfs 文件系统。
|
||||
|
||||
### 创建 XFS 文件系统
|
||||
|
||||
`mkfs.xfs` 命令用来创建 xfs 文件系统。无需任何特别的参数,其输出如下:
|
||||
|
||||
```
|
||||
root@kerneltalks # mkfs.xfs /dev/xvdf
|
||||
meta-data=/dev/xvdf isize=512 agcount=4, agsize=1310720 blks
|
||||
@ -25,7 +26,7 @@ realtime =none extsz=4096 blocks=0, rtextents=0
|
||||
|
||||
### 调整 XFS 文件系统容量
|
||||
|
||||
你职能对 XFS 进行扩容而不能缩容。我们使用 `xfs_growfs` 来进行扩容。你需要使用 `-D` 参数指定挂载点的新容量。`-D` 接受一个数字的参数,指定文件系统块的数量。若你没有提供 `-D` 参数,则 `xfs_growfs` 会将文件系统扩到最大。
|
||||
你只能对 XFS 进行扩容而不能缩容。我们使用 `xfs_growfs` 来进行扩容。你需要使用 `-D` 参数指定挂载点的新容量。`-D` 接受一个数字的参数,指定文件系统块的数量。若你没有提供 `-D` 参数,则 `xfs_growfs` 会将文件系统扩到最大。
|
||||
|
||||
```
|
||||
root@kerneltalks # xfs_growfs /dev/xvdf -D 256
|
||||
@ -41,7 +42,7 @@ realtime =none extsz=4096 blocks=0, rtextents=0
|
||||
data size 256 too small, old size is 2883584
|
||||
```
|
||||
|
||||
观察上面的输出中的最后一行。由于我分配的容量要小于现在的容量。它告诉你不能缩减 XFS 文件系统。你只能对他进行扩展。
|
||||
观察上面的输出中的最后一行。由于我分配的容量要小于现在的容量。它告诉你不能缩减 XFS 文件系统。你只能对它进行扩展。
|
||||
|
||||
```
|
||||
root@kerneltalks # xfs_growfs /dev/xvdf -D 2883840
|
||||
@ -59,7 +60,7 @@ data blocks changed from 2883584 to 2883840
|
||||
|
||||
现在我多分配了 256 个块(即 1MB)的空间,而且也成功地扩增了容量。
|
||||
|
||||
**1GB 块的计算方式:**
|
||||
**所需块数的计算方式:**
|
||||
|
||||
当前文件系统 bsize 为 4096,即块大小为 4KB。这里要增加 1MB,也就是 256 个块,因此在当前块数 2883584 上加上 256 得到 2883840,于是我为 `-D` 传递了参数 2883840。(若要增加 1GB,则需要 262144 个块。)
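换算过程可以用 shell 算术快速验证(补充演算):

```
$ echo $((256 * 4096))       # 256 个块 × 4096 字节 = 1048576 字节,即 1MB
1048576
$ echo $((2883584 + 256))    # 新的总块数
2883840
```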
|
||||
|
||||
@ -76,7 +77,9 @@ xfs_repair: /dev/xvdf contains a mounted and writable filesystem
|
||||
|
||||
fatal error -- couldn't initialize XFS library
|
||||
```
|
||||
|
||||
卸载后运行检查命令。
|
||||
|
||||
```
|
||||
root@kerneltalks # xfs_repair -n /dev/xvdf
|
||||
Phase 1 - find and verify superblock...
|
||||
@ -184,10 +187,10 @@ via: https://kerneltalks.com/commands/xfs-file-system-commands-with-examples/
|
||||
|
||||
作者:[kerneltalks][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://kerneltalks.com
|
||||
[1]:https://c3.kerneltalks.com/wp-content/uploads/2018/01/xfs-commands.png
|
||||
[1]:https://a3.kerneltalks.com/wp-content/uploads/2018/01/xfs-commands.png
|
||||
[2]:https://kerneltalks.com/disk-management/xfs-filesystem-in-linux/
|
106
published/20180103 Creating an Offline YUM repository for LAN.md
Normal file
106
published/20180103 Creating an Offline YUM repository for LAN.md
Normal file
@ -0,0 +1,106 @@
|
||||
创建局域网内的离线 Yum 仓库
|
||||
======
|
||||
|
||||
在早先的教程中,我们讨论了[如何使用 ISO 镜像和在线 Yum 仓库的方式来创建自己的 Yum 仓库 ][1]。创建自己的 Yum 仓库是一个不错的想法,但若网络中只有 2-3 台 Linux 机器那就没啥必要了。不过若你的网络中有大量的 Linux 服务器,而且这些服务器还需要定时进行升级,或者你有大量服务器无法直接访问互联网,那么创建自己的 Yum 仓库就很有必要了。
|
||||
|
||||
当我们有大量的 Linux 服务器,而每个服务器都直接从互联网上升级系统时,数据消耗会很可观。为了节省数据量,我们可以创建个离线 Yum 源并将之分享到本地网络中。网络中的其他 Linux 机器就可以直接从本地 Yum 上获取系统更新,从而节省数据量,而且传输速度也会很好。
|
||||
|
||||
我们可以使用下面两种方法来分享 Yum 仓库:
|
||||
|
||||
* 使用 Web 服务器(Apache)
|
||||
* 使用 FTP 服务器(VSFTPD)
|
||||
|
||||
在开始讲解这两个方法之前,我们需要先根据[之前的教程][1]创建一个 Yum 仓库。
|
||||
|
||||
### 使用 Web 服务器
|
||||
|
||||
首先在 Yum 服务器上安装 Web 服务器(Apache),我们假设服务器 IP 是 `192.168.1.100`。我们已经在这台系统上配置好了 Yum 仓库,现在我们来使用 `yum` 命令安装 Apache Web 服务器,
|
||||
|
||||
```
|
||||
$ yum install httpd
|
||||
```
|
||||
|
||||
下一步,拷贝所有的 rpm 包到默认的 Apache 根目录下,即 `/var/www/html`,由于我们已经将包都拷贝到了 `/YUM` 下,我们也可以创建一个软连接来从 `/var/www/html` 指向 `/YUM`。
|
||||
|
||||
```
|
||||
$ ln -s /YUM /var/www/html/CentOS
|
||||
```
|
||||
|
||||
重启 Web 服务器应用改变:
|
||||
|
||||
```
|
||||
$ systemctl restart httpd
|
||||
```
|
||||
|
||||
#### 配置客户端机器
|
||||
|
||||
服务端的配置就完成了,现在需要配置下客户端来从我们创建的离线 Yum 中获取升级包,这里假设客户端 IP 为 `192.168.1.101`。
|
||||
|
||||
在 `/etc/yum.repos.d` 目录中创建 `offline-yum.repo` 文件,输入如下信息,
|
||||
|
||||
```
|
||||
$ vi /etc/yum.repos.d/offline-yum.repo
|
||||
```
|
||||
|
||||
```
|
||||
[offline-yum]
name=Local YUM
|
||||
baseurl=http://192.168.1.100/CentOS/7
|
||||
gpgcheck=0
|
||||
enabled=1
|
||||
```
|
||||
|
||||
客户端也配置完了。试一下用 `yum` 来安装/升级软件包来确认仓库是正常工作的。
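例如,可以先列出可用仓库、再试着安装一个软件包来验证(软件包名仅为示例):

```
$ yum repolist
$ yum install htop
```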
|
||||
|
||||
### 使用 FTP 服务器
|
||||
|
||||
在 FTP 上分享 Yum,首先需要安装所需要的软件包,即 vsftpd。
|
||||
|
||||
```
|
||||
$ yum install vsftpd
|
||||
```
|
||||
|
||||
vsftp 的默认根目录为 `/var/ftp/pub`,因此你可以拷贝 rpm 包到这个目录,或者为它创建一个软连接:
|
||||
|
||||
```
|
||||
$ ln -s /YUM /var/ftp/pub/CentOS
|
||||
```
|
||||
|
||||
重启服务应用改变:
|
||||
|
||||
```
|
||||
$ systemctl restart vsftpd
|
||||
```
|
||||
|
||||
#### 配置客户端机器
|
||||
|
||||
像上面一样,在 `/etc/yum.repos.d` 中创建 `offline-yum.repo` 文件,并输入下面信息,
|
||||
|
||||
```
|
||||
$ vi /etc/yum.repos.d/offline-yum.repo
|
||||
```
|
||||
|
||||
```
|
||||
[offline-yum]
|
||||
name=Local YUM
|
||||
baseurl=ftp://192.168.1.100/pub/CentOS/7
|
||||
gpgcheck=0
|
||||
enabled=1
|
||||
```
|
||||
|
||||
现在客户机可以通过 ftp 接收升级了。要配置 vsftpd 服务器为其他 Linux 系统分享文件,请[阅读这篇指南][2]。
|
||||
|
||||
这两种方法都很不错,你可以任意选择其中一种方法。有任何疑问或这想说的话,欢迎在下面留言框中留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/offline-yum-repository-for-lan/
|
||||
|
||||
作者:[Shusain][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:https://linux.cn/article-9296-1.html
|
||||
[2]:http://linuxtechlab.com/ftp-secure-installation-configuration/
|
@ -1,6 +1,7 @@
|
||||
如何更改 Linux 控制台上的字体
|
||||
======
|
||||

|
||||
|
||||

|
||||
|
||||
我尝试尽可能的保持心灵祥和,然而总有一些事情让我意难平,比如控制台字体太小了。记住我的话,朋友,有一天你的眼睛会退化,无法再看清你编码时用的那些细小字体,到那时你就后悔莫及了。
|
||||
|
||||
@ -8,46 +9,48 @@
|
||||
|
||||
### Linux 控制台是个什么鬼?
|
||||
|
||||
首先让我们来澄清一下我们说的到底是个什么东西。当我提到 Linux 控制台,我指的是 TTY1-6,即你从图形环境用 `Ctrl-Alt-F1` 到 `F6` 切换到的虚拟终端。按下 `Ctrl+Alt+F7` 会切回图形环境。(不过这些热键已经不再通用,你的 Linux 发行版可能有不同的键映射。你的 TTY 的数量也可能不同,你图形环境会话也可能不在 `F7`。比如,Fedora 的默认图形会话是 `F2`,它只有一个额外的终端在 `F1`。) 我觉得能同时拥有 X 会话和终端绘画实在是太酷了。
|
||||
首先让我们来澄清一下我们说的到底是个什么东西。当我提到 Linux 控制台,我指的是 TTY1-6,即你从图形环境用 `Ctrl-Alt-F1` 到 `F6` 切换到的虚拟终端。按下 `Ctrl+Alt+F7` 会切回图形环境。(不过这些热键已经不再通用,你的 Linux 发行版可能有不同的键映射。你的 TTY 的数量也可能不同,你图形环境会话也可能不在 `F7`。比如,Fedora 的默认图形会话是 `F2`,它只有一个额外的终端在 `F1`。) 我觉得能同时拥有 X 会话和终端会话实在是太酷了。
|
||||
|
||||
Linux 控制台是内核的一部分,而且并不运行在 X 会话中。它和你在没有图形环境的无头服务器中用的控制台是一样的。我称呼在图形会话中的 X 终端为终端,而将控制台和 X 终端统称为终端模拟器。
|
||||
Linux 控制台是内核的一部分,而且并不运行在 X 会话中。它和你在没有图形环境的<ruby>无头<rt>headless</rt></ruby>服务器中用的控制台是一样的。我称呼在图形会话中的 X 终端为终端,而将控制台和 X 终端统称为终端模拟器。
|
||||
|
||||
但这还没完。Linux 终端从早期的 ANSI 时代开始已经经历了长久的发展,多亏了 Linux framebuffer,它现在支持 Unicode 并且对图形也有了有限的一些支持。而且出现了很多在控制台下运行的多媒体应用,这些我们在以后的文章中会提到。
|
||||
但这还没完。Linux 终端从早期的 ANSI 时代开始已经经历了长久的发展,多亏了 Linux framebuffer,它现在支持 Unicode 并且对图形也有了有限的一些支持。而且出现了很多在控制台下运行的[多媒体应用][4],这些我们在以后的文章中会提到。
|
||||
|
||||
### 控制台截屏
|
||||
|
||||
获取控制台截屏的最简单方法是让控制台跑在虚拟机内部。然后你可以在宿主系统上使用中意的截屏软件来抓取。不过借助 [fbcat][1] 和 [fbgrab][2] 你也可以直接在控制台上截屏。`fbcat` 会创建一个可移植的像素映射格式 (PPM) 图像; 这是一个高度可移植的未压缩图像格式,可以在所有的操作系统上读取,当然你也可以把它转换成任何喜欢的其他格式。`fbgrab` 则是 `fbcat` 的一个封装脚本,用来生成一个 PNG 文件。不同的人写过多个版本的 `fbgrab`。每个版本的选项都有限而且只能创建截取全屏。
|
||||
获取控制台截屏的最简单方法是让控制台跑在虚拟机内部。然后你可以在宿主系统上使用中意的截屏软件来抓取。不过借助 [fbcat][1] 和 [fbgrab][2] 你也可以直接在控制台上截屏。`fbcat` 会创建一个可移植的像素映射格式(PPM)的图像; 这是一个高度可移植的未压缩图像格式,可以在所有的操作系统上读取,当然你也可以把它转换成任何喜欢的其他格式。`fbgrab` 则是 `fbcat` 的一个封装脚本,用来生成一个 PNG 文件。很多人写过多个版本的 `fbgrab`。每个版本的选项都有限而且只能创建截取全屏。
|
||||
|
||||
`fbcat` 的执行需要 root 权限,而且它的输出需要重定向到文件中。你无需指定文件扩展名,只需要输入文件名就行了:
|
||||
|
||||
```
|
||||
$ sudo fbcat > Pictures/myfile
|
||||
|
||||
```
|
||||
|
||||
在 GIMP 中裁剪后,就得到了图 1。
|
||||
|
||||

|
||||
Figure 1:View after cropping。
|
||||
|
||||
*图 1 : 裁剪后查看*
|
||||
|
||||
如果能在左边空白处有一点填充就好了,如果有读者知道如何实现请在留言框中告诉我。
|
||||
|
||||
`fbgrab` 还有一些选项,你可以通过 `man fbgrab` 来查看,这些选项包括对另一个控制台进行截屏,以及延时截屏。在下面的例子中可以看到,`fbgrab` 截屏跟 `fbcat` 截屏类似,只是你无需明确进行输出重定性了:
|
||||
`fbgrab` 还有一些选项,你可以通过 `man fbgrab` 来查看,这些选项包括对另一个控制台进行截屏,以及延时截屏等。在下面的例子中可以看到,`fbgrab` 截屏跟 `fbcat` 截屏类似,只是你无需明确进行输出重定性了:
|
||||
|
||||
```
|
||||
$ sudo fbgrab Pictures/myOtherfile
|
||||
|
||||
```
|
||||
|
||||
### 查找字体
|
||||
|
||||
就我所知,除了查看字体存储目录 `/usr/share/consolefonts/`(Debian/etc。),`/lib/kbd/consolefonts/` (Fedora),`/usr/share/kbd/consolefonts` (openSUSE),外没有其他方法可以列出已安装的字体了。
|
||||
就我所知,除了查看字体存储目录 `/usr/share/consolefonts/`(Debian 等),`/lib/kbd/consolefonts/` (Fedora),`/usr/share/kbd/consolefonts` (openSUSE)外没有其他方法可以列出已安装的字体了。
|
||||
|
||||
### 更改字体
|
||||
|
||||
可读字体不是什么新概念。我们应该尊重以前的经验!可读性是很重要的。可配置性也很重要,然而现如今却不怎么看重了。
|
||||
|
||||
在 Debian/Ubuntu/ 等系统上,可以运行 `sudo dpkg-reconfigure console-setup` 来设置控制台字体,然后在控制台运行 `setupcon` 命令来让变更生效。`setupcon` 属于 `console-setup` 软件包中的一部分。若你的 Linux 发行版中不包含该工具,可以在 [openSUSE][3] 中下载到它。
|
||||
在 Debian/Ubuntu 等系统上,可以运行 `sudo dpkg-reconfigure console-setup` 来设置控制台字体,然后在控制台运行 `setupcon` 命令来让变更生效。`setupcon` 属于 `console-setup` 软件包中的一部分。若你的 Linux 发行版中不包含该工具,可以在 [openSUSE][3] 中下载到它。
|
||||
|
||||
你也可以直接编辑 `/etc/default/console-setup` 文件。下面这个例子中设置字体为 32 点大小的 Terminus Bold 字体,这是我的最爱,并且严格限制控制台宽度为 80 列。
|
||||
|
||||
```
|
||||
ACTIVE_CONSOLES="/dev/tty[1-6]"
|
||||
CHARMAP="UTF-8"
|
||||
@ -55,22 +58,20 @@ CODESET="guess"
|
||||
FONTFACE="TerminusBold"
|
||||
FONTSIZE="16x32"
|
||||
SCREEN_WIDTH="80"
|
||||
|
||||
```
|
||||
|
||||
这里的 FONTFACE 和 FONTSIZE 的值来自于字体的文件名,`TerminusBold32x16.psf.gz`。是的,你需要反转 FONTSIZE 中值的顺序。计算机就是这么搞笑。然后再运行 `setupcon` 来让新配置生效。可以使用 `showconsolefont` 来查看当前所用字体的所有字符集。要查看完整的选项说明请参考 `man console-setup`。
|
||||
这里的 `FONTFACE` 和 `FONTSIZE` 的值来自于字体的文件名 `TerminusBold32x16.psf.gz`。是的,你需要反转 `FONTSIZE` 中值的顺序。计算机就是这么搞笑。然后再运行 `setupcon` 来让新配置生效。可以使用 `showconsolefont` 来查看当前所用字体的所有字符集。要查看完整的选项说明请参考 `man console-setup`。
|
||||
|
||||
### Systemd
|
||||
|
||||
Systemd 与 `console-setup` 不太一样,除了字体之外,你无需安装任何东西。你只需要编辑 `/etc/vconsole.conf` 然后重启就行了。我在 Fedora 和 openSUSE 系统中安装了一些额外的大型号的 Terminus 字体包,因为默认安装的字体最大只有 16 点而我想要的是 32 点。然后将 `/etc/vconsole.conf` 的内容修改为:
|
||||
Systemd 与 `console-setup` 不太一样,除了字体之外,你无需安装任何东西。你只需要编辑 `/etc/vconsole.conf` 然后重启就行了。我在 Fedora 和 openSUSE 系统中安装了一些额外的大字号的 Terminus 字体包,因为默认安装的字体最大只有 16 点而我想要的是 32 点。然后将 `/etc/vconsole.conf` 的内容修改为:
|
||||
|
||||
```
|
||||
KEYMAP="us"
|
||||
FONT="ter-v32b"
|
||||
|
||||
```
|
||||
|
||||
下周我们还将学习一些更加酷的控制台小技巧,以及一些在控制台上运行的多媒体应用。
|
||||
|
||||
下周我们还将学习一些更加酷的控制台小技巧,以及一些在控制台上运行的[多媒体应用][4]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -78,7 +79,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-con
|
||||
|
||||
作者:[Carla Schroder][a]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -86,3 +87,4 @@ via: https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-con
|
||||
[1]:http://jwilk.net/software/fbcat
|
||||
[2]:https://github.com/jwilk/fbcat/blob/master/fbgrab
|
||||
[3]:https://software.opensuse.org/package/console-setup
|
||||
[4]:https://linux.cn/article-9320-1.html
|
100
published/20180106 Meltdown and Spectre Linux Kernel Status.md
Normal file
100
published/20180106 Meltdown and Spectre Linux Kernel Status.md
Normal file
@ -0,0 +1,100 @@
|
||||
Greg:Meltdown 和 Spectre 影响下的 Linux 内核状况
|
||||
============================================================
|
||||
|
||||
现在(LCTT 译注:本文发表于 1 月初),每个人都知道一件关乎电脑安全的“大事”发生了,真见鬼,等[每日邮报报道][1]的时候,你就知道什么是糟糕了...
|
||||
|
||||
不管怎样,除了告诉你这篇写的及其出色的[披露该问题的 Zero 项目的论文][2]之外,我不打算去跟进这个问题已经被报道出来的细节。他们应该现在就直接颁布 2018 年的 [Pwnie][3] 奖,干的太棒了。
|
||||
|
||||
如果你想了解我们如何在内核中解决这些问题的技术细节,你可以保持关注了不起的 [lwn.net][4],他们会把这些细节写成文章。
|
||||
|
||||
此外,这有一条很好的关于[这些公告][5]的摘要,包括了各个厂商的公告。
|
||||
|
||||
至于这些涉及的公司是如何处理这些问题的,这可以说是如何**不**与 Linux 内核社区保持沟通的教科书般的例子。这件事涉及到的人和公司都知道发生了什么,我确定这件事最终会出现,但是目前我需要去关注的是如何修复这些涉及到的问题,然后不去点名指责,不管我有多么的想去这么做。
|
||||
|
||||
### 你现在能做什么
|
||||
|
||||
如果你的 Linux 系统正在运行一个正常的 Linux 发行版,那么升级你的内核。它们都应该已经更新了,然后在接下来的几个星期里保持更新。我们会统计大量在极端情况下出现的 bug ,这里涉及的测试很复杂,包括庞大的受影响的各种各样的系统和工作任务。如果你的 Linux 发行版没有升级内核,我强烈的建议你马上更换一个 Linux 发行版。
|
||||
|
||||
然而有很多的系统因为各种各样的原因(听说它们比起“传统”的企业发行版更多)没有运行“正常的” Linux 发行版。它们依靠长期支持版本(LTS)的内核升级、正常的稳定内核升级,或者内部某人构建的内核。对于这部分人,这里介绍一下你能使用的上游内核目前的混乱状况。
|
||||
|
||||
### Meltdown – x86
|
||||
|
||||
现在,Linus 的内核树包含了我们当前所知的为 x86 架构解决 meltdown 漏洞的所有修复。开启 `CONFIG_PAGE_TABLE_ISOLATION` 这个内核构建选项,然后进行重构和重启,所有的设备应该就安全了。
|
||||
|
||||
然而,Linus 的内核树当前处于 4.15-rc6 这个版本加上一些未完成的补丁。4.15-rc7 版本要明天才会推出,里面的一些补丁会解决一些问题。但是大部分的人不会在一个“正常”的环境里运行 -rc 内核。
|
||||
|
||||
因为这个原因,x86 内核开发者在<ruby>页表隔离<rt>page table isolation</rt></ruby>代码的开发过程中做了一个非常好的工作,好到要反向移植到最新推出的稳定内核 4.14 的话,我们只需要做一些微不足道的工作。这意味着最新的 4.14 版本(本文发表时是 4.14.12 版本),就是你应该运行的版本。4.14.13 会在接下来的几天里推出,这个更新里有一些额外的修复补丁,这些补丁是一些运行 4.14.12 内核且有启动时间问题的系统所需要的(这是一个显而易见的问题,如果它不启动,就把这些补丁加入更新排队中)。
|
||||
|
||||
我个人要感谢 Andy Lutomirski、Thomas Gleixner、Ingo Molnar、 Borislav Petkov、 Dave Hansen、 Peter Zijlstra、 Josh Poimboeuf、 Juergen Gross 和 Linus Torvalds。他们开发出了这些修复补丁,并且为了让我能轻松地使稳定版本能够正常工作,还把这些补丁以一种形式融合到了上游分支里。没有这些工作,我甚至不敢想会发生什么。
|
||||
|
||||
对于老的长期支持内核(LTS),我主要依靠 Hugh Dickins、 Dave Hansen、 Jiri Kosina 和 Borislav Petkov 优秀的工作,来为 4.4 到 4.9 的稳定内核代码树分支带去相同的功能。我同样在追踪讨厌的 bug 和缺失的补丁方面从 Guenter Roeck、 Kees Cook、 Jamie Iles 以及其他很多人那里得到了极大的帮助。我要感谢 David Woodhouse、 Eduardo Valentin、 Laura Abbott 和 Rik van Riel 在反向移植和集成方面的帮助,他们的帮助在许多棘手的地方是必不可少的。
|
||||
|
||||
这些长期支持版本的内核同样有 `CONFIG_PAGE_TABLE_ISOLATION` 这个内核构建选项,你应该开启它来获得全方面的保护。
|
||||
|
||||
从主线版本 4.14 和 4.15 的反向移植是非常不一样的,它们会出现不同的 bug,我们现在知道了一些在工作中遇见的 VDSO 问题。一些特殊的虚拟机安装的时候会报一些奇怪的错,但这是只是现在出现的少数情况,这种情况不应该阻止你进行升级。如果你在这些版本中遇到了问题,请让我们在稳定内核邮件列表中知道这件事。
|
||||
|
||||
如果你依赖于 4.4 和 4.9 或是现在的 4.14 以外的内核代码树分支,并且没有发行版支持你的话,你就太不幸了。比起你当前版本内核包含的上百个已知的漏洞和 bug,缺少补丁去解决 meltdown 问题算是一个小问题了。你现在最需要考虑的就是马上把你的系统升级到最新。
|
||||
|
||||
与此同时,臭骂那些强迫你运行一个已被废弃且不安全的内核版本的人,他们是那些需要知道这是完全不顾后果的行为的人中的一份子。
|
||||
|
||||
### Meltdown – ARM64
|
||||
|
||||
现在 ARM64 为解决 Meltdown 问题而开发的补丁还没有并入 Linus 的代码树,一旦 4.15 在接下来的几周里成功发布,他们就准备[阶段式地并入][6] 4.16-rc1,因为这些补丁还没有在一个 Linus 发布的内核中,我不能把它们反向移植进一个稳定的内核版本里(额……我们有这个[规矩][7]是有原因的)
|
||||
|
||||
由于它们还没有在一个已发布的内核版本中,如果你的系统是用的 ARM64 的芯片(例如 Android ),我建议你选择 [Android 公共内核代码树][8],现在,所有的 ARM64 补丁都并入 [3.18][9]、[4.4][10] 和 [4.9][11] 分支 中。
|
||||
|
||||
我强烈建议你关注这些分支,看随着时间的过去,由于测试了已并入补丁的已发布的上游内核版本,会不会有更多的修复补丁被补充进来,特别是我不知道这些补丁会在什么时候加进稳定的长期支持内核版本里。
|
||||
|
||||
对于 4.4 到 4.9 的长期支持内核版本,这些补丁有很大概率永远不会并入它们,因为需要大量的先决补丁。而所有的这些先决补丁长期以来都一直在 Android 公共内核版本中测试和合并,所以我认为现在对于 ARM 系统来说,仅仅依赖这些内核分支而不是长期支持版本是一个更好的主意。
|
||||
|
||||
同样需要注意的是,我合并所有的长期支持内核版本的更新到这些分支后通常会在一天之内或者这个时间点左右进行发布,所以你无论如何都要关注这些分支,来确保你的 ARM 系统是最新且安全的。
|
||||
|
||||
### Spectre
|
||||
|
||||
现在,事情变得“有趣”了……
|
||||
|
||||
再一次,如果你正在运行一个发行版的内核,一些内核融入了各种各样的声称能缓解目前大部分问题的补丁,你的内核*可能*就被包含在其中。如果你担心这一类的攻击的话,我建议你更新并测试看看。
|
||||
|
||||
对于上游来说,很好,现状就是仍然没有任何的上游代码树分支合并了这些类型的问题相关的修复补丁。有很多的邮件列表在讨论如何去解决这些问题的解决方案,大量的补丁在这些邮件列表中广为流传,但是它们尚处于开发前期,一些补丁系列甚至没有被构建或者应用到任何已知的代码树,这些补丁系列彼此之间相互冲突,这是常见的混乱。
|
||||
|
||||
这是由于 Spectre 问题是最近被内核开发者解决的。我们所有人都在 Meltdown 问题上工作,我们没有精确的 Spectre 问题全部的真实信息,而四处散乱的补丁甚至比公开发布的补丁还要糟糕。
|
||||
|
||||
因为所有的这些原因,我们打算在内核社区里花上几个星期去解决这些问题并把它们合并到上游去。修复补丁会进入到所有内核的各种各样的子系统中,而且在它们被合并后,会集成并在稳定内核的更新中发布,所以再次提醒,无论你使用的是发行版的内核还是长期支持的稳定内核版本,你最好并保持更新到最新版。
|
||||
|
||||
这不是好消息,我知道,但是这就是现实。如果有所安慰的话,似乎没有任何其它的操作系统完全地解决了这些问题,现在整个产业都在同一条船上,我们只需要等待,并让开发者尽快地解决这些问题。
|
||||
|
||||
提出的解决方案并非毫不重要,但是它们中的一些还是非常好的。一些新概念会被创造出来来帮助解决这些问题,Paul Turner 提出的 Retpoline 方法就是其中的一个例子。这将是未来大量研究的一个领域,想出方法去减轻硬件中涉及的潜在问题,希望在它发生前就去预见它。
|
||||
|
||||
### 其他架构的芯片
|
||||
|
||||
现在,我没有看见任何 x86 和 arm64 架构以外的芯片架构的补丁,听说在一些企业发行版中有一些用于其他类型的处理器的补丁,希望他们在这几周里能浮出水面,合并到合适的上游那里。我不知道什么时候会发生,如果你使用着一个特殊的架构,我建议在 arch-specific 邮件列表上问这件事来得到一个直接的回答。
|
||||
|
||||
### 结论
|
||||
|
||||
再次说一遍,更新你的内核,不要耽搁,不要止步。更新会在很长的一段时间里持续地解决这些问题。同样的,稳定和长期支持内核发行版里仍然有很多其它的 bug 和安全问题,它们和问题的类型无关,所以一直保持更新始终是一个好主意。
|
||||
|
||||
现在,有很多非常劳累、坏脾气、缺少睡眠的人,他们通常会生气地让内核开发人员竭尽全力地解决这些问题,即使这些问题完全不是开发人员自己造成的。请关爱这些可怜的程序猿。他们需要爱、支持,我们可以为他们免费提供的他们最爱的饮料,以此来确保我们都可以尽可能快地结束修补系统。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://kroah.com/log/blog/2018/01/06/meltdown-status/
|
||||
|
||||
作者:[Greg Kroah-Hartman][a]
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://kroah.com
|
||||
[1]:http://www.dailymail.co.uk/sciencetech/article-5238789/Intel-says-security-updates-fix-Meltdown-Spectre.html
|
||||
[2]:https://googleprojectzero.blogspot.fr/2018/01/reading-privileged-memory-with-side.html
|
||||
[3]:https://pwnies.com/
|
||||
[4]:https://lwn.net/Articles/743265/
|
||||
[5]:https://lwn.net/Articles/742999/
|
||||
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=kpti
|
||||
[7]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
|
||||
[8]:https://android.googlesource.com/kernel/common/
|
||||
[9]:https://android.googlesource.com/kernel/common/+/android-3.18
|
||||
[10]:https://android.googlesource.com/kernel/common/+/android-4.4
|
||||
[11]:https://android.googlesource.com/kernel/common/+/android-4.9
|
||||
[12]:https://support.google.com/faqs/answer/7625886
|
@ -3,25 +3,25 @@ Linux 终端下的多媒体应用
|
||||
|
||||

|
||||
|
||||
Linux 终端是支持多媒体的,所以你可以在终端里听音乐,看电影,看图片,甚至是阅读 PDF。
|
||||
> Linux 终端是支持多媒体的,所以你可以在终端里听音乐,看电影,看图片,甚至是阅读 PDF。
|
||||
|
||||
在我的上一篇文章里,我们了解到 Linux 终端是可以支持多媒体的。是的,这是真的!你可以使用 Mplayer、fbi 和 fbgs 来实现不打开 X 进程就听音乐、看电影、看照片,甚至阅读 PDF。此外,你还可以通过 CMatrix 来体验黑客帝国(Matrix)风格的屏幕保护。
|
||||
在我的上一篇文章里,我们了解到 Linux 终端是可以支持多媒体的。是的,这是真的!你可以使用 Mplayer、fbi 和 fbgs 来实现不打开 X 会话就听音乐、看电影、看照片,甚至阅读 PDF。此外,你还可以通过 CMatrix 来体验黑客帝国(Matrix)风格的屏幕保护。
|
||||
|
||||
不过你可能需要对系统进行一些修改才能达到前面这些目的。下文的操作都是在 Ubuntu 16.04 上进行的。
|
||||
|
||||
### MPlayer
|
||||
|
||||
你可能会比较熟悉功能丰富的 MPlayer。它支持几乎所有格式的视频与音频,并且能在绝大部分现有的平台上运行,像 Linux,Android,Windows,Mac,Kindle,OS/2 甚至是 AmigaOS。不过,要在你的终端运行 MPlayer 可能需要多做一点工作,这些工作与你使用的 Linux 发行版有关。来,我们先试着播放一个视频:
|
||||
你可能会比较熟悉功能丰富的 MPlayer。它支持几乎所有格式的视频与音频,并且能在绝大部分现有的平台上运行,像 Linux、Android、Windows、Mac、Kindle、OS/2 甚至是 AmigaOS。不过,要在你的终端运行 MPlayer 可能需要多做一点工作,这些工作与你使用的 Linux 发行版有关。来,我们先试着播放一个视频:
|
||||
|
||||
```
|
||||
$ mplayer [视频文件名]
|
||||
```
|
||||
|
||||
如果上面的命令正常执行了,那么很好,接下来你可以把时间放在了解 MPlayer 的常用选项上了,譬如设定视频大小等。但是,有些 Linux 发行版在对帧缓冲(framebuffer)的处理方式上与早期的不同,那么你就需要进行一些额外的设置才能让其正常工作了。下面是在最近的 Ubuntu 发行版上需要做的一些操作。
|
||||
如果上面的命令正常执行了,那么很好,接下来你可以把时间放在了解 MPlayer 的常用选项上了,譬如设定视频大小等。但是,有些 Linux 发行版在对<ruby>帧缓冲<rt>framebuffer</rt></ruby>的处理方式上与早期的不同,那么你就需要进行一些额外的设置才能让其正常工作了。下面是在最近的 Ubuntu 发行版上需要做的一些操作。
|
||||
|
||||
首先,将你自己添加到 video 用户组。
|
||||
首先,将你自己添加到 `video` 用户组。
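把自己加入 `video` 组通常可以用下面的命令完成(补充示例,修改后需要重新登录才能生效):

```
$ sudo usermod -aG video $USER
```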
|
||||
|
||||
其次,确认 `/etc/modprobe.d/blacklist-framebuffer.conf` 文件中包含这样一行:`#blacklist vesafb`。这一行应该默认被注释掉了,如果不是的话,那就手动把它注释掉。此外的其他模块行需要确认没有被注释,这样设置才能保证其他那些模块不会被载入。注:如果你想要对控制帧缓冲(framebuffer)有更深入的了解,可以从针对你的显卡的这些模块里获取更深入的认识。
|
||||
其次,确认 `/etc/modprobe.d/blacklist-framebuffer.conf` 文件中包含这样一行:`#blacklist vesafb`。这一行应该默认被注释掉了,如果不是的话,那就手动把它注释掉。此外的其他模块行需要确认没有被注释,这样设置才能保证其他那些模块不会被载入。注:如果你想要更深入的利用<ruby>帧缓冲<rt>framebuffer</rt></ruby>,这些针对你的显卡的模块可以使你获得更好的性能。
|
||||
|
||||
然后,在 `/etc/initramfs-tools/modules` 的结尾增加两个模块:`vesafb` 和 `fbcon`,并且更新 iniramfs 镜像:
|
||||
|
||||
@ -35,7 +35,7 @@ $ sudo nano /etc/initramfs-tools/modules
|
||||
$ sudo update-initramfs -u
|
||||
```
|
||||
|
||||
[fbcon][1] 是 Linux 帧缓冲(framebuffer)终端,它运行在帧缓冲(framebuffer)之上并为其增加图形功能。而它需要一个帧缓冲(framebuffer)设备,这则是由 `vesafb` 模块来提供的。
|
||||
[fbcon][1] 是 Linux <ruby>帧缓冲<rt>framebuffer</rt></ruby>终端,它运行在<ruby>帧缓冲<rt>framebuffer</rt></ruby>之上并为其增加图形功能。而它需要一个<ruby>帧缓冲<rt>framebuffer</rt></ruby>设备,这则是由 `vesafb` 模块来提供的。
|
||||
|
||||
接下来,你需要修改你的 GRUB2 配置。在 `/etc/default/grub` 中你将会看到类似下面的一行:
|
||||
|
||||
@ -49,7 +49,7 @@ GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
|
||||
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vga=789"
|
||||
```
|
||||
|
||||
重启之后进入你的终端(Ctrl+Alt+F1)(LCTT 译注:在某些发行版中 Ctrl+Alt+F1 默认为图形界面,可以尝试 Ctrl+Alt+F2),然后就可以尝试播放一个视频了。下面的命令指定了 `fbdev2` 为视频输出设备,虽然我还没弄明白如何去选择用哪个输入设备,但是我用它成功过。默认的视频大小是 320x240,在此我给缩放到了 960:
|
||||
重启之后进入你的终端(`Ctrl+Alt+F1`)(LCTT 译注:在某些发行版中 `Ctrl+Alt+F1` 默认为图形界面,可以尝试 `Ctrl+Alt+F2`),然后就可以尝试播放一个视频了。下面的命令指定了 `fbdev2` 为视频输出设备,虽然我还没弄明白如何去选择用哪个输入设备,但是我用它成功过。默认的视频大小是 320x240,在此我给缩放到了 960:
|
||||
|
||||
```
|
||||
$ mplayer -vo fbdev2 -vf scale -zoom -xy 960 AlienSong_mp4.mov
|
||||
@ -69,19 +69,19 @@ MPlayer 可以播放 CD、DVD 以及网络视频流,并且还有一系列的
|
||||
$ fbi 文件名
|
||||
```
|
||||
|
||||
你可以使用方向键来在大图片中移动视野,使用 + 和 - 来缩放,或者使用 r 或 l 来向右或向左旋转 90 度。Escape 键则可以关闭查看的图片。此外,你还可以给 `fbi` 一个文件列表来实现幻灯播放:
|
||||
你可以使用方向键来在大图片中移动视野,使用 `+` 和 `-` 来缩放,或者使用 `r` 或 `l` 来向右或向左旋转 90 度。`Escape` 键则可以关闭查看的图片。此外,你还可以给 `fbi` 一个文件列表来实现幻灯播放:
|
||||
|
||||
```
|
||||
$ fbi --list 文件列表.txt
|
||||
```
|
||||
|
||||
`fbi` 还支持自动缩放。还可以使用 `-a` 选项来控制缩放比例。`--autoup` 和 `--autodown` 则是用于告知 `fbi` 只进行放大或者缩小。要调整图片切换时淡入淡出的时间则可以使用 `--blend [时间]` 来指定一个以毫秒为单位的时间长度。使用 k 和 j 键则可以切换文件列表中的上一张或下一张图片。
|
||||
`fbi` 还支持自动缩放。还可以使用 `-a` 选项来控制缩放比例。`--autoup` 和 `--autodown` 则是用于告知 `fbi` 只进行放大或者缩小。要调整图片切换时淡入淡出的时间则可以使用 `--blend [时间]` 来指定一个以毫秒为单位的时间长度。使用 `k` 和 `j` 键则可以切换文件列表中的上一张或下一张图片。
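把这些选项组合起来的一个示例(沿用上文的文件列表,参数含义见上面的说明):

```
$ fbi -a --blend 500 --list 文件列表.txt
```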
|
||||
|
||||
`fbi` 还提供了命令来为你浏览过的文件创建文件列表,或者将你的命令导出到文件中,以及一系列其它很棒的选项。你可以通过 `man fbi` 来查阅完整的选项列表。
|
||||
|
||||
### CMatrix 终端屏保
|
||||
|
||||
黑客帝国(The Matrix)屏保仍然是我非常喜欢的屏保之一(如图 2),仅次于弹跳牛(bouncing cow)。[CMatrix][3] 可以在终端运行。要运行它只需输入 `cmatrix`,然后可以用 Ctrl+C 来停止运行。执行 `cmatrix -s` 则会启动屏保模式,这样的话,按任意键都会直接退出。`-C` 参数可以设定颜色,譬如绿色(green)、红色(red)、蓝色(blue)、黄色(yellow)、白色(white)、紫色(magenta)、青色(cyan)或者黑色(black)。
|
||||
<ruby>黑客帝国<rt>The Matrix</rt></ruby>屏保仍然是我非常喜欢的屏保之一(如图 2),仅次于<ruby>弹跳牛<rt>bouncing cow</rt></ruby>。[CMatrix][3] 可以在终端运行。要运行它只需输入 `cmatrix`,然后可以用 `Ctrl+C` 来停止运行。执行 `cmatrix -s` 则会启动屏保模式,这样的话,按任意键都会直接退出。`-C` 参数可以设定颜色,譬如绿色(`green`)、红色(`red`)、蓝色(`blue`)、黄色(`yellow`)、白色(`white`)、紫色(`magenta`)、青色(`cyan`)或者黑色(`black`)。
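例如,以屏保模式启动并使用红色字符(选项均来自上文的说明):

```
$ cmatrix -s -C red
```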
|
||||
|
||||

|
||||
|
||||
@ -91,7 +91,7 @@ CMatrix 还支持异步按键,这意味着你可以在它运行的时候改变
|
||||
|
||||
### fbgs PDF 阅读器
|
||||
|
||||
看起来,PDF 文档的流行是普遍且无法阻止的,而且 PDF 比它之前好了很多,譬如超链接、复制粘贴以及更好的文本搜索功能等。`fbgs` 是 `fbida` 包中提供的一个 PDF 阅读器。它可以设置页面大小、分辨率、指定页码以及绝大部分 `fbi` 所提供的选项,当然除了一些在 `man fbgs` 中列举出来的不可用选项。我主要用到的选项是页面大小,你可以选择 `-l`、`xl` 或者 `xxl`:
|
||||
看起来,PDF 文档是普遍流行且无法避免的,而且 PDF 比它之前的功能好了很多,譬如超链接、复制粘贴以及更好的文本搜索功能等。`fbgs` 是 `fbida` 包中提供的一个 PDF 阅读器。它可以设置页面大小、分辨率、指定页码以及绝大部分 `fbi` 所提供的选项,当然除了一些在 `man fbgs` 中列举出来的不可用选项。我主要用到的选项是页面大小,你可以选择 `-l`、`xl` 或者 `xxl`:
|
||||
|
||||
```
|
||||
$ fbgs -xl annoyingpdf.pdf
|
||||
@ -105,7 +105,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/1/multimedia-apps-linux-con
|
||||
|
||||
作者:[Carla Schroder][a]
|
||||
译者:[Yinr](https://github.com/Yinr)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,42 +1,42 @@
|
||||
如何启动进入 Linux 命令行
|
||||
======
|
||||
|
||||

|
||||
|
||||
可能有时候你需要或者不想使用 GUI,也就是没有 X,而是选择命令行启动 [Linux][1]。不管是什么原因,幸运的是,直接启动进入 Linux **命令行** 非常简单。在其他内核选项之后,它需要对引导参数进行简单的更改。此更改将系统引导到指定的运行级别。
|
||||
可能有时候你启动 Linux 时需要或者希望不使用 GUI(图形用户界面),也就是没有 X,而是选择命令行。不管是什么原因,幸运的是,直接启动进入 Linux 命令行 非常简单。它需要在其他内核选项之后对引导参数进行简单的更改。此更改将系统引导到指定的运行级别。
|
||||
|
||||
### 为什么要这样做?
|
||||
|
||||
如果你的系统由于无效配置或者显示管理器损坏或任何可能导致 GUI 无法正常启动的情况而无法运行 Xorg,那么启动到命令行将允许你通过登录到终端进行故障排除(假设你知道要怎么开始),并能做任何你需要做的东西。引导到命令行也是一个很好的熟悉终端的方式,不然,你也可以为了好玩这么做。
|
||||
如果你的系统由于无效配置或者显示管理器损坏或任何可能导致 GUI 无法正常启动的情况而无法运行 Xorg,那么启动到命令行将允许你通过登录到终端进行故障排除(假设你知道要怎么做),并能做任何你需要做的东西。引导到命令行也是一个很好的熟悉终端的方式,不然,你也可以为了好玩这么做。
|
||||
|
||||
### 访问 GRUB 菜单
|
||||
|
||||
在启动时,你需要访问 GRUB 启动菜单。如果在每次启动计算机时菜单未设置为显示,那么可能需要在系统启动之前按住 SHIFT 键。在菜单中,需要选择 [Linux 发行版][2]条目。高亮显示后,按下 “e” 编辑引导参数。
|
||||
在启动时,你需要访问 GRUB 启动菜单。如果在每次启动计算机时菜单未设置为显示,那么可能需要在系统启动之前按住 `SHIFT` 键。在菜单中,需要选择 Linux 发行版条目。高亮显示后该条目,按下 `e` 编辑引导参数。
|
||||
|
||||
[][3]
|
||||
|
||||
较老的 GRUB 版本遵循类似的机制。启动管理器应提供有关如何编辑启动参数的说明。
|
||||
较老的 GRUB 版本遵循类似的机制。启动管理器应提供有关如何编辑启动参数的说明。
|
||||
|
||||
### 指定运行级别
|
||||
|
||||
编辑器将出现,你将看到 GRUB 解析到内核的选项。移动到以 “linux” 开头的行(旧的 GRUB 版本可能是 “kernel”,选择它并按照说明操作)。这指定了解析到内核的参数。在该行的末尾(可能会出现跨越多行,具体取决于分辨率),只需指定要引导的运行级别,即 3(多用户模式,纯文本)。
|
||||
会出现一个编辑器,你将看到 GRUB 会解析给内核的选项。移动到以 `linux` 开头的行(旧的 GRUB 版本可能是 `kernel`,选择它并按照说明操作)。这指定了要解析给内核的参数。在该行的末尾(可能会出现跨越多行,具体取决于你的终端分辨率),只需指定要引导的运行级别,即 `3`(多用户模式,纯文本)。
|
||||
|
||||
[][4]
|
||||
|
||||
按下 Ctrl-X 或 F10 将使用这些参数启动系统。开机和以前一样。唯一改变的是启动的运行级别。
|
||||
|
||||
|
||||
按下 `Ctrl-X` 或 `F10` 将使用这些参数启动系统。开机和以前一样。唯一改变的是启动的运行级别。
|
||||
|
||||
这是启动后的页面:
|
||||
|
||||
[][5]
|
||||
[][5]
|
||||
|
||||
### 运行级别
|
||||
|
||||
你可以指定不同的运行级别,默认运行级别是 5。1 启动到“单用户”模式,它会启动进入 root shell。3 提供了一个多用户命令行系统。
|
||||
你可以指定不同的运行级别,默认运行级别是 `5` (多用户图形界面)。`1` 启动到“单用户”模式,它会启动进入 root shell。`3` 提供了一个多用户命令行系统。
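在使用 systemd 的发行版上,运行级别对应于 target:运行级别 3 对应 `multi-user.target`,5 对应 `graphical.target`(此为补充说明)。例如,要把默认启动目标永久改为命令行,可以运行:

```
$ sudo systemctl set-default multi-user.target
```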
|
||||
|
||||
### 从命令行切换
|
||||
|
||||
在某个时候,你可能想要再次运行显示管理器来使用 GUI,最快的方法是运行这个:
|
||||
在某个时候,你可能想要运行显示管理器来再次使用 GUI,最快的方法是运行这个:
|
||||
|
||||
```
|
||||
$ sudo init 5
|
||||
```
|
||||
@ -49,7 +49,7 @@ via: http://www.linuxandubuntu.com/home/how-to-boot-into-linux-command-line
|
||||
|
||||
作者:[LinuxAndUbuntu][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,62 +1,63 @@
|
||||
在终端显示世界地图
|
||||
MapSCII:在终端显示世界地图
|
||||
======
|
||||
我偶然发现了一个有趣的工具。在终端的世界地图!是的,这太酷了。向 **MapSCII** 问好,这是可在 xterm 兼容终端渲染的盲文和 ASCII 世界地图。它支持 GNU/Linux、Mac OS 和 Windows。我以为这是另一个在 GitHub 上托管的项目。但是我错了!他们做了令人印象深刻的事。我们可以使用我们的鼠标指针在世界地图的任何地方拖拽放大和缩小。其他显著的特性是:
|
||||
|
||||

|
||||
|
||||
我偶然发现了一个有趣的工具。在终端里的世界地图!是的,这太酷了。给 `MapSCII` 打 call,这是可在 xterm 兼容终端上渲染的布莱叶盲文和 ASCII 世界地图。它支持 GNU/Linux、Mac OS 和 Windows。我原以为它只不过是一个在 GitHub 上托管的项目而已,但是我错了!他们做的事令人印象深刻。我们可以使用我们的鼠标指针在世界地图的任何地方拖拽放大和缩小。其他显著的特性是:
|
||||
|
||||
* 发现任何特定地点周围的兴趣点
|
||||
* 高度可定制的图层样式,带有[ Mapbox 样式][1]支持
|
||||
* 连接到任何公共或私有矢量贴片服务器
|
||||
* 高度可定制的图层样式,支持 [Mapbox 样式][1]
|
||||
* 可连接到任何公共或私有的矢量贴片服务器
|
||||
* 或者使用已经提供并已优化的基于 [OSM2VectorTiles][2] 服务器
|
||||
* 离线工作,发现本地 [VectorTile][3]/[MBTiles][4]
|
||||
* 可以离线工作并发现本地的 [VectorTile][3]/[MBTiles][4]
|
||||
* 兼容大多数 Linux 和 OSX 终端
|
||||
* 高度优化算法的流畅体验
|
||||
|
||||
|
||||
|
||||
### 使用 MapSCII 在终端中显示世界地图
|
||||
|
||||
要打开地图,只需从终端运行以下命令:
|
||||
|
||||
```
|
||||
telnet mapscii.me
|
||||
```
|
||||
|
||||
这是我终端上的世界地图。
|
||||
|
||||
[![][5]][6]
|
||||
![][6]
|
||||
|
||||
很酷,是吗?
|
||||
|
||||
要切换到盲文视图,请按 **c**。
|
||||
要切换到布莱叶盲文视图,请按 `c`。
|
||||
|
||||
[![][5]][7]
|
||||
![][7]
|
||||
|
||||
Type **c** again to switch back to the previous format **.**
|
||||
再次输入 **c** 切回以前的格式。
|
||||
再次输入 `c` 切回以前的格式。
|
||||
|
||||
要滚动地图,请使用**向上**、向下**、**向左**、**向右**箭头键。要放大/缩小位置,请使用 **a** 和 **a** 键。另外,你可以使用鼠标的滚轮进行放大或缩小。要退出地图,请按 **q**。
|
||||
要滚动地图,请使用“向上”、“向下”、“向左”、“向右”箭头键。要放大/缩小位置,请使用 `a` 和 `z` 键。另外,你可以使用鼠标的滚轮进行放大或缩小。要退出地图,请按 `q`。
|
||||
|
||||
就像我已经说过的,不要认为这是一个简单的项目。点击地图上的任何位置,然后按 **“a”** 放大。
|
||||
就像我已经说过的,不要认为这是一个简单的项目。点击地图上的任何位置,然后按 `a` 放大。
|
||||
|
||||
放大后,下面是一些示例截图。
|
||||
|
||||
[![][5]][8]
|
||||
![][8]
|
||||
|
||||
我可以放大查看我的国家(印度)的州。
|
||||
|
||||
[![][5]][9]
|
||||
![][9]
|
||||
|
||||
和州内的地区(Tamilnadu):
|
||||
|
||||
[![][5]][10]
|
||||
![][10]
|
||||
|
||||
甚至是地区内的镇 [Taluks][11]:
|
||||
|
||||
[![][5]][12]
|
||||
![][12]
|
||||
|
||||
还有,我完成学业的地方:
|
||||
|
||||
[![][5]][13]
|
||||
![][13]
|
||||
|
||||
即使它只是一个最小的城镇,MapSCII 也能准确地显示出来。 MapSCII 使用 [**OpenStreetMap**][14] 来收集数据。
|
||||
即使它只是一个最小的城镇,MapSCII 也能准确地显示出来。 MapSCII 使用 [OpenStreetMap][14] 来收集数据。
|
||||
|
||||
### 在本地安装 MapSCII
|
||||
|
||||
@ -64,15 +65,16 @@ Type **c** again to switch back to the previous format **.**
|
||||
|
||||
确保你的系统上已经安装了 Node.js。如果还没有,请参阅以下链接。
|
||||
|
||||
[Install NodeJS on Linux][15]
|
||||
- [在 Linux 上安装 NodeJS][15]
|
||||
|
||||
然后,运行以下命令来安装它。
|
||||
|
||||
```
|
||||
sudo npm install -g mapscii
|
||||
|
||||
```
|
||||
|
||||
要启动 MapSCII,请运行:
|
||||
|
||||
```
|
||||
mapscii
|
||||
```
|
||||
@ -81,15 +83,13 @@ mapscii
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/mapscii-world-map-terminal/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -99,13 +99,13 @@ via: https://www.ostechnix.com/mapscii-world-map-terminal/
|
||||
[3]:https://github.com/mapbox/vector-tile-spec
|
||||
[4]:https://github.com/mapbox/mbtiles-spec
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-1-2.png
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-2.png
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-3.png
|
||||
[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-4.png
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-5.png
|
||||
[11]:https://en.wikipedia.org/wiki/Tehsils_of_India
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-6.png
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-7.png
|
||||
[14]:https://www.openstreetmap.org/
|
||||
[15]:https://www.ostechnix.com/install-node-js-linux/
|
69
published/20180122 How to price cryptocurrencies.md
Normal file
@ -0,0 +1,69 @@
|
||||
如何为数字货币定价
|
||||
======
|
||||
|
||||

|
||||
|
||||
预测数字货币价格是一场愚人游戏,然而我想试试。单一数字货币价值的驱动因素目前太多,而且含糊不清,无法根据任何一点进行评估。或许新闻正报道说比特币呈上升趋势,但与此同时,有黑客攻击或是交易所 API 错误,就让比特币的价格下跌。以太坊看起来不够智能?谁知道呢,也许第二天就建立一个新的更智能的 DAO(LCTT 译注:区块链上,基于激励的经济协调机制,以代码作为表达,通过互联网进行传输,允许人们参与数字企业的风险&回报(即分配),代币创建了 DAO 参与者之间的经济链接)将吸引大量参与者。
|
||||
|
||||
那么,您该如何投资呢?又或是更准确的讲,您下注在哪种数字货币上呢?
|
||||
|
||||
理解什么时候买卖和持有数字货币,关键是使用与评估开源项目价值的相关工具。这已经一次又一次地被提及了,但是为了理解当前数字货币的快速发展,您必须回到 Linux 悄然兴起的时候。
|
||||
|
||||
互联网泡沫期间,Linux 出现在大众视野中。那时候,如果您想建立一个 Web 服务器,您必须将一台 Windows 服务器或者 Sun Sparc 工作站运送到一个服务器托管机房,在那里它将为传送 Pets.com(LCTT 译注:一个在线宠物商店,借着互联网繁荣的机会崛起,在互联网泡沫破裂时倒闭)的 HTML 页面而做出努力。与此同时,Linux 就像运行在微软和 Sun 公司平行路径上的货运列车一样,可以让开发人员使用日新月异的操作系统和工具集,快速、轻松地构建一次性项目。相比之下,解决方案提供商的花费比软硬件支出少得多,很快,所有科技巨头的业务由软件转向了服务提供,例如 Sun 公司。
|
||||
|
||||
Linux 的萌发促使整个开源市场蓬勃发展起来。但是有一个关键的问题,您无法从开源软件中赚钱。您可以提供服务,也可以销售使用开源组件的产品,但早期开源者主要是为了提升自我,而不是为了赚钱。
|
||||
|
||||
数字货币几乎完全遵循 Linux 的老路,但数字货币具有一定货币价值。因此,当您在为一个区块链项目工作时,您不是为了公共利益,也不是为了编写自由软件的乐趣,而是您写它的时候期望得到一大笔钱。因此,这掩盖了许多程序员的价值判断。那些给您带来了 Python、PHP、Django 和 Node.js 的人们回来了……而现在他们正在通过编程赚钱。
|
||||
|
||||
### 审查代码库
|
||||
|
||||
今年将是代币销售和数字货币领域大清算的一年。虽然许多公司已经能够摆脱糟糕的或不可用的代码库,但我怀疑开发人员是否能让未来的公司摆脱如此云山雾罩的东西。可以肯定地说,我们可以期待[像这样详细描述 Storj 代码库不足之处的文章成为规范][1],这些评论会让许多所谓的“ICO” 们(LCTT 译注:代币首次发行融资)陷入困境。虽然规模巨大,但 ICO 的资金渠道是有限的,在某种程度上不完整的项目会被更严格的审查。
|
||||
|
||||
这是什么意思呢?这意味着要了解数字货币,您必须像对待创业公司那样对待它。您要看它是否有一个好团队?它是否是一个好产品?是否能运作?会有人想要使用它吗?现在评估整个数字货币的价值还为时过早,但如果我们假设代币将成为未来计算机互相支付的方式,这就让我们摆脱了许多疑问。毕竟,2000 年没有多少人知道 Apache 会在一个竞争激烈的市场上,几乎击败其他所有的 Web 服务器,或者像 Ubuntu 一样常见,以至于可以随时搭建和拆毁它们。
|
||||
|
||||
理解数字货币价值的关键是忽视泡沫、炒作、恐惧、迷惑和怀疑心理,而是关注其真正的效用。您认为有一天您的手机会为另外一个手机付款,比如游戏内的额外费用吗? 您是否认为信用卡系统会在互联网出现后消失?您是否期望有一天,在生活中花费一点点钱,让自己过得更加舒适?然后,通过一切手段,购买并持有您认为将来可能会使您的生活过得更舒适的东西。如果您认为通过 TCP/IP 互联网的方式并不能够改善您的生活(或是您对其没有足够的了解),那么您可能就不会关注这些。纳斯达克总是开放的,至少在银行的营业时间。
|
||||
|
||||
好的,下面是我的预测。
|
||||
|
||||
### 预测
|
||||
|
||||
以下是我在考虑加密货币的“投资”时应该考虑的事项评估。在我们开始之前,先讲下注意事项:
|
||||
|
||||
* 数字货币不是真正的货币投资,而是投资在未来的技术。这就好比:当您购买数字货币时,我们像是在<ruby>进取号星舰<rt>Starship Enterprise</rt></ruby>的甲板上交换“<ruby>星际信用<rt>Galactic Credit</rt></ruby>”(LCTT 译注:《星球大战》电影中的一种货币)一般。 这是数字货币的唯一不可避免的未来。虽然您可以强制将数字货币附加到各种经济模式中,对其抱有乐观的态度,整个平台是技术乌托邦,并假设各种令人兴奋和不可能的事情将在未来几年来到。如果您有多余的现金,您喜欢《星球大战》,那么您就可以投资这些“黄金”。如果您的兄弟告诉您相关信息,结果您用信用卡买了比特币,那么您可能会要度过一段啃馒头的时间。
|
||||
* 不要相信任何人。没有担保,除了提供不是投资建议的免责声明,而且这绝不是对任何特定数字货币的背书,甚至是普遍概念,但我们必须明白,我在这里写的任何东西都可能是错误的。事实上,任何关于数字货币的文章都可能是错误的,任何试图卖给您吹得天花乱坠的代币的人,几乎肯定是骗子。总之,每个人都是错的,每个人都想要得到您的钱,所以要非常、非常小心。
|
||||
* 您也可以持有。如果您是在 18000 美元的价位买的比特币,您最好还是继续持有下去。现在您就好像正处于帕斯卡赌注之中。(LCTT 译注:论述——我不知道上帝是否存在,如果他不存在,作为无神论者没有任何好处,但是如果他存在,作为无神论者我将有很大的坏处。所以,宁愿相信上帝存在)是的,也许您因为数字货币让您赔钱而生气,但也许您只是在为自己的愚蠢和自大而生气,但现在您不妨保持信仰,因为没有什么是必然的,或者您可以承认您是有点过于热切。虽然现在您被惩罚,但要相信有比特币之神在注视着您。最终您需要深吸一口气,同意这一切都相当怪异,并坚持下去。
|
||||
|
||||
现在回过头来评估数字货币。
|
||||
|
||||
**比特币** —— 预计明年的涨幅将超过目前的低点。此外,[世界各地的证券交易委员会和其他联邦机构][2]也会开始以实际行动调整加密货币的买卖。现在银行开玩笑说,他们想要降低数字货币的风险。因此,比特币将成为数字黄金,成为投机者稳定,乏味但充满波动的避风港。尽管所有都不能用作真正的货币,但对于我们所需要的东西来说,这已经足够了,我们也可以期待量子计算的产品去改变最古老,最熟悉的加密货币的面貌。
|
||||
|
||||
**以太坊** —— 只要创造者 Vitalik Buterin 不再继续泼冷水,以太坊在价格上可以维持在上千美元。像一个懊悔的<ruby>维克多·弗兰肯斯坦<rt>Victor Frankenstein</rt></ruby>(LCTT 译注:来自《维克多·弗兰肯斯坦》电影的一名角色),Buterin 倾向于做出惊人的事情,然后在网上诋毁他们,这种自我鞭策在充满泡沫和谎言的空间中实际上是非常有用的。以太坊是最接近我们有用的加密货币,但它本质上仍然是分布式应用。这是一个很有用,很聪明的方法,可以很容易地进行实验,但是,没有人用新的分布式数据存储或应用程序取代旧系统。总之,这是一个非常令人兴奋的技术,但是还没有人知道该用它做什么。
|
||||
|
||||
![][3]
|
||||
|
||||
以太坊的价格将何去何从?它将徘徊在 1000 美元左右,今年可能高达 1500 美元,但这是一个原则性的科技项目,而不是一个保值产品。
|
||||
|
||||
**竞争币**(LCTT 译注:除比特币,以太币之外的所有的数字币,亦称之为山寨币) —— 泡沫的标志之一是当普通人说“我买不起比特币,所以我买了莱特币”这样的话时。这正是我从许多人那里听到的,就好像说“我买不起汉堡,所以我买了一斤木屑,我想孩子们会吃的,对不对?”那您要自担风险。竞争币对于很多人来说是一个风险非常低的游戏,就好比您根据一个算法创造出某个竞争币,在市值达到一定水平时卖出,那么您可以赚取一笔不错的利润。况且,大多数竞争币不会在一夜之间消失。我诚实地推荐以太坊而不是竞争币,但是如果您死磕竞争币,那祝您玩得开心。
|
||||
|
||||
**代币** —— 这是数字货币变得有趣的地方。代币需要研究机构和高校对技术有深入的了解,才能真正地评估。我见过的许多代币都是一场赌博,价格暴涨暴跌。我不会给其命名,但是经验法则是,如果您在公开市场上买了一个代币,那么您可能已经错过了赚钱的机会。在截至 2018 年 1 月的代币销售中,庄家以每个几美分的价格早期买入代币,最后得到上百倍的回报。虽然许多创始人谈论他们的产品的神奇之处和他们的团队的强大,但是就是为了单车变摩托,把价值 4 美分一个的代币升值为 20 美分,再升值成一美元。您将收益乘以数百万,就能看到它的吸引力了。
|
||||
|
||||
答案很简单:找到您喜欢的几个项目并潜藏在他们的社区中。评估团队是否有实力,并且很早就知道该如何进入。将钱投入后就当扔掉几个月或几年。但无法保证,因为代币理念太过超前,以至于无法对其评估。
|
||||
|
||||
您正在阅读这篇文章,是因为您希望在这错综复杂的环境下得到一个方向。没关系,我已经跟许多数字货币创始人交谈,知道现在许多人不知道的事情,并且知道合谋和肮脏交易的准则。因此,像我们这样的人,要慢慢地分批购买,就会开始明白这究竟是怎么一回事,也许会从数字货币投资中获利。当数字货币的潜力被完全发掘,我们会得到一个像 Linux 一样的时代。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://techcrunch.com/2018/01/22/how-to-price-cryptocurrencies/
|
||||
|
||||
作者:[John Biggs][a]
|
||||
译者:[wyxplus](https://github.com/wyxplus)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://techcrunch.com/author/john-biggs/
|
||||
[1]:https://shitcoin.com/storj-not-a-dropbox-killer-1a9f27983d70
|
||||
[2]:http://www.businessinsider.com/bitcoin-price-cryptocurrency-warning-from-sec-cftc-2018-1
|
||||
[3]:https://tctechcrunch2011.files.wordpress.com/2018/01/vitalik-twitter-1312.png?w=525&h=615
|
||||
[4]:https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[5]:https://unsplash.com/search/photos/cash?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,170 @@
|
||||
8 个你不一定全都了解的 rm 命令示例
|
||||
======
|
||||
|
||||
删除文件和复制/移动文件一样,都是很基础的操作。在 Linux 中,有一个专门的命令 `rm`,可用于完成所有删除相关的操作。在本文中,我们将用些容易理解的例子来讨论这个命令的基本使用。
|
||||
|
||||
但在我们开始前,值得指出的是本文所有示例都在 Ubuntu 16.04 LTS 中测试过。
|
||||
|
||||
### Linux rm 命令概述
|
||||
|
||||
通俗的讲,我们可以认为 `rm` 命令是用于删除文件和目录的。下面是此命令的语法:
|
||||
|
||||
```
|
||||
rm [选项]... [要删除的文件/目录]...
|
||||
```
|
||||
|
||||
下面是命令使用说明:
|
||||
|
||||
> GNU 版本的 `rm` 命令手册文档中说:`rm` 删除每个指定的文件,默认情况下不删除目录。
|
||||
|
||||
> 当删除的文件超过三个或者提供了选项 `-r`、`-R` 或 `--recursive`(LCTT 译注:表示递归删除目录中的文件)时,如果给出 `-I`(LCTT 译注:大写的 I)或 `--interactive=once` 选项(LCTT 译注:表示开启交互一次),则 `rm` 命令会提示用户是否继续整个删除操作,如果用户回应不是确认(LCTT 译注:即没有回复 `y`),则整个命令立刻终止。
|
||||
|
||||
> 另外,如果被删除文件是不可写的,标准输入是终端,这时如果没有提供 `-f` 或 `--force` 选项,或者提供了 `-i`(LCTT 译注:小写的 i) 或 `--interactive=always` 选项,`rm` 会提示用户是否要删除此文件,如果用户回应不是确认(LCTT 译注:即没有回复 `y`),则跳过此文件。
|
||||
|
||||
|
||||
下面这些问答式例子会让你更好的理解这个命令的使用。
|
||||
|
||||
### Q1. 如何用 rm 命令删除文件?
|
||||
|
||||
这是非常简单和直观的。你只需要把文件名(如果文件不是在当前目录中,则还需要添加文件路径)传入给 `rm` 命令即可。
|
||||
|
||||
(LCTT 译注:可以用空格隔开传入多个文件名称。)
|
||||
|
||||
```
|
||||
rm 文件1 文件2 ...
|
||||
```
|
||||
如:
|
||||
|
||||
```
|
||||
rm testfile.txt
|
||||
```
|
||||
|
||||
[![How to remove files using rm command][1]][2]
|
||||
|
||||
### Q2. 如何用 `rm` 命令删除目录?
|
||||
|
||||
如果你试图删除一个目录,你需要提供 `-r` 选项。否则 `rm` 会抛出一个错误告诉你正试图删除一个目录。
|
||||
|
||||
(LCTT 译注:`-r` 表示递归地删除目录下的所有文件和目录。)
|
||||
|
||||
```
|
||||
rm -r [目录名称]
|
||||
```
|
||||
|
||||
如:
|
||||
|
||||
```
|
||||
rm -r testdir
|
||||
```
|
||||
|
||||
[![How to remove directories using rm command][3]][4]
|
||||
|
||||
### Q3. 如何让删除操作前有确认提示?
|
||||
|
||||
如果你希望在每个删除操作完成前都有确认提示,可以使用 `-i` 选项。
|
||||
|
||||
```
|
||||
rm -i [文件/目录]
|
||||
```
|
||||
|
||||
比如,你想要删除一个目录“testdir”,但需要每个删除操作都有确认提示,你可以这么做:
|
||||
|
||||
```
|
||||
rm -r -i testdir
|
||||
```
|
||||
|
||||
[![How to make rm prompt before every removal][5]][6]
|
||||
|
||||
### Q4. 如何让 rm 忽略不存在的文件或目录?
|
||||
|
||||
如果你删除一个不存在的文件或目录时,`rm` 命令会抛出一个错误,如:
|
||||
|
||||
[![Linux rm command example][7]][8]
|
||||
|
||||
然而,如果你愿意,你可以使用 `-f` 选项(LCTT 译注:即 “force”)让此次操作强制执行,忽略错误提示。
|
||||
|
||||
```
|
||||
rm -f [文件...]
|
||||
```
|
||||
|
||||
[![How to force rm to ignore nonexistent files][9]][10]
|
||||
|
||||
### Q5. 如何让 rm 仅在某些场景下确认删除?
|
||||
|
||||
选项 `-I`,可保证在删除超过 3 个文件时或递归删除时(LCTT 译注: 如删除目录)仅提示一次确认。
|
||||
|
||||
比如,下面的截图展示了 `-I` 选项的作用——当两个文件被删除时没有提示,当超过 3 个文件时会有提示。
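下面是一个简单的示意(文件名仅为示例,提示文字可能因 coreutils 版本不同而略有差异):

```
$ touch file1 file2 file3 file4
$ rm -I file1 file2                # 只删除 2 个文件,不会提示
$ touch file1 file2
$ rm -I file1 file2 file3 file4    # 超过 3 个文件,仅提示一次
rm: remove 4 arguments? y
```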
|
||||
|
||||
[![How to make rm prompt only in some scenarios][11]][12]
|
||||
|
||||
### Q6. 当删除根目录时 rm 是如何工作的?
|
||||
|
||||
当然,删除根目录(`/`)是 Linux 用户最不想要的操作。这也就是为什么默认 `rm` 命令不支持在根目录上执行递归删除操作。(LCTT 译注:早期的 `rm` 命令并无此预防行为。)
|
||||
|
||||
[![How rm works when dealing with root directory][13]][14]
|
||||
|
||||
然而,如果你非得完成这个操作,你需要使用 `--no-preserve-root` 选项。当提供此选项,`rm` 就不会特殊处理根目录(`/`)了。
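该选项的用法大致如下。再次强调:下面的命令会摧毁整个系统,这里仅为展示语法,千万不要在任何真实系统上执行!

```
# 警告:绝对不要执行!此命令会删除整个根文件系统
rm -rf --no-preserve-root /
```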
|
||||
|
||||
假如你想知道在哪些场景下 Linux 用户会删除他们的根目录,点击[这里][15]。
|
||||
|
||||
### Q7. 如何让 rm 仅删除空目录?
|
||||
|
||||
假如你需要 `rm` 在删除目录时仅删除空目录,你可以使用 `-d` 选项。
|
||||
|
||||
```
|
||||
rm -d [目录]
|
||||
```
|
||||
|
||||
下面的截图展示 `-d` 选项的用途——仅空目录被删除了。
|
||||
|
||||
[![How to make rm only remove empty directories][16]][17]
|
||||
|
||||
### Q8. 如何让 rm 显示当前删除操作的详情?
|
||||
|
||||
如果你想让 `rm` 显示当前操作完成时的详细情况,使用 `-v` 选项可以做到。
|
||||
|
||||
```
|
||||
rm -v [文件/目录]
|
||||
```
|
||||
|
||||
如:
|
||||
|
||||
[![How to force rm to emit details of operation it is performing][18]][19]
|
||||
|
||||
### 结论
|
||||
|
||||
考虑到 `rm` 命令提供的功能,可以说其是 Linux 中使用频率最高的命令之一了(就像 [cp][20] 和 `mv` 一样)。在本文中,我们涉及到了其提供的几乎所有主要选项。`rm` 命令有一定的学习曲线,因此在日常工作中开始使用此命令之前,你需要花费些时间去练习它的选项。更多的信息,请查阅此命令的 [man 手册页][21]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-rm-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[yizhuoyan](https://github.com/yizhuoyan)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/command-tutorial/rm-basic-usage.png
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/big/rm-basic-usage.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/rm-r.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/big/rm-r.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/rm-i-option.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-i-option.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/rm-non-ext-error.png
|
||||
[8]:https://www.howtoforge.com/images/command-tutorial/big/rm-non-ext-error.png
|
||||
[9]:https://www.howtoforge.com/images/command-tutorial/rm-f-option.png
|
||||
[10]:https://www.howtoforge.com/images/command-tutorial/big/rm-f-option.png
|
||||
[11]:https://www.howtoforge.com/images/command-tutorial/rm-I-option.png
|
||||
[12]:https://www.howtoforge.com/images/command-tutorial/big/rm-I-option.png
|
||||
[13]:https://www.howtoforge.com/images/command-tutorial/rm-root-default.png
|
||||
[14]:https://www.howtoforge.com/images/command-tutorial/big/rm-root-default.png
|
||||
[15]:https://superuser.com/questions/742334/is-there-a-scenario-where-rm-rf-no-preserve-root-is-needed
|
||||
[16]:https://www.howtoforge.com/images/command-tutorial/rm-d-option.png
|
||||
[17]:https://www.howtoforge.com/images/command-tutorial/big/rm-d-option.png
|
||||
[18]:https://www.howtoforge.com/images/command-tutorial/rm-v-option.png
|
||||
[19]:https://www.howtoforge.com/images/command-tutorial/big/rm-v-option.png
|
||||
[20]:https://www.howtoforge.com/linux-cp-command/
|
||||
[21]:https://linux.die.net/man/1/rm
|
@ -1,20 +1,20 @@
|
||||
八种在 Linux 上生成随机密码的方法
|
||||
======
|
||||
> 学习使用 8 种 Linux 原生命令或第三方实用程序来生成随机密码。
|
||||
|
||||
![][1]
|
||||
|
||||
在这篇文章中,我们将引导你通过几种不同的方式在 Linux 终端中生成随机密码。其中几种利用原生 Linux 命令,另外几种则利用极易在 Linux 机器上安装的第三方工具或实用程序实现。在这里我们利用像 `openssl`、[dd][2]、`md5sum`、`tr`、`urandom` 这样的原生命令和 mkpasswd、randpw、pwgen、spw、gpg、xkcdpass、diceware、revelation、keepassx、passwordmaker 这样的第三方工具。
|
||||
|
||||
其实这些方法就是生成一些能被用作密码的随机字母字符串。随机密码可以用于新用户的密码,不管用户基数有多大,这些密码都是独一无二的。话不多说,让我们来看看 8 种不同的在 Linux 上生成随机密码的方法吧。
|
||||
|
||||
### 使用 mkpasswd 实用程序生成密码
|
||||
|
||||
`mkpasswd` 在基于 RHEL 的系统上随 `expect` 软件包一起安装。在基于 Debian 的系统上 `mkpasswd` 则在软件包 `whois` 中。直接安装 `mkpasswd` 软件包将会导致错误:
|
||||
|
||||
- RHEL 系统:软件包 mkpasswd 不可用。
|
||||
- Debian 系统:错误:无法定位软件包 mkpasswd。
|
||||
|
||||
所以按照上面所述安装他们的父软件包,就没问题了。
|
||||
|
||||
@ -28,9 +28,9 @@ root@kerneltalks# mkpasswd teststring << on Ubuntu
|
||||
XnlrKxYOJ3vik
|
||||
```
|
||||
|
||||
这个命令在不同的系统上表现得不一样,所以工作方式各异。你也可以通过参数来控制长度等选项,可以查阅 man 手册来探索。
|
||||
|
||||
### 使用 openssl 生成密码
|
||||
|
||||
几乎所有 Linux 发行版都包含 openssl。我们可以利用它的随机功能来生成可以用作密码的随机字母字符串。
|
||||
|
||||
@ -41,18 +41,18 @@ nU9LlHO5nsuUvw==
|
||||
|
||||
这里我们使用 `base64` 编码随机函数,最后一个数字参数表示长度。
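上面提到的命令大致如下(这里假设生成 10 个字节的随机数,输出仅为示例):

```bash
root@kerneltalks # openssl rand -base64 10
nU9LlHO5nsuUvw==
```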
|
||||
|
||||
### 使用 urandom 生成密码
|
||||
|
||||
设备文件 `/dev/urandom` 是另一个获得随机字符串的方法。我们使用 `tr` 功能并裁剪输出来获得随机字符串,并把它作为密码。
|
||||
|
||||
```bash
|
||||
root@kerneltalks # strings /dev/urandom |tr -dc A-Za-z0-9 | head -c20; echo
|
||||
UiXtr0NAOSIkqtjK4c0X
|
||||
```
|
||||
|
||||
### 使用 dd 命令生成密码
|
||||
|
||||
我们甚至可以使用 `/dev/urandom` 设备配合 [dd 命令][2] 来获取随机字符串。
|
||||
|
||||
```bash
|
||||
root@kerneltalks# dd if=/dev/urandom bs=1 count=15|base64 -w 0
|
||||
@ -62,16 +62,16 @@ root@kerneltalks# dd if=/dev/urandom bs=1 count=15|base64 -w 0
|
||||
QMsbe2XbrqAc2NmXp8D0
|
||||
```
|
||||
|
||||
我们需要将结果通过 `base64` 编码,使它成为人类可读的。你可以通过 `count` 值来获取想要的长度。想要获得更简洁的输出的话,可以将“标准错误输出”重定向到 `/dev/null`。简洁输出的命令是:
|
||||
|
||||
```bash
|
||||
root@kerneltalks # dd if=/dev/urandom bs=1 count=15 2>/dev/null|base64 -w 0
|
||||
F8c3a4joS+a3BdPN9C++
|
||||
```
|
||||
|
||||
### 使用 md5sum 生成密码
|
||||
|
||||
另一种获取可用作密码的随机字符串的方法是计算 MD5 校验值!校验值看起来确实像是随机字符串组合在一起,我们可以用作密码。确保你的计算源是个变量,这样的话每次运行命令时生成的校验值都不一样。比如 `date` ![date 命令][3] 总会生成不同的输出。
|
||||
|
||||
```bash
|
||||
root@kerneltalks # date |md5sum
|
||||
@ -80,9 +80,9 @@ root@kerneltalks # date |md5sum
|
||||
|
||||
在这里我们将 `date` 命令的输出通过 `md5sum` 得到了校验和!你也可以用 [cut 命令][4] 裁剪你需要的长度。
|
||||
|
||||
### 使用 pwgen 生成密码
|
||||
|
||||
`pwgen` 软件包在类似 [EPEL 软件仓库][5](LCTT 译注:企业版 Linux 附加软件包)中。`pwgen` 更专注于生成可发音的密码,但它们不在英语词典中,也不是纯英文的。标准发行版仓库中可能并不包含这个工具。安装这个软件包然后运行 `pwgen` 命令行。Boom !
|
||||
|
||||
```bash
|
||||
root@kerneltalks # pwgen
|
||||
@ -92,9 +92,10 @@ aic2OaDa iexieQu8 Aesoh4Ie Eixou9ph ShiKoh0i uThohth7 taaN3fuu Iege0aeZ
|
||||
cah3zaiW Eephei0m AhTh8guo xah1Shoo uh8Iengo aifeev4E zoo4ohHa fieDei6c
|
||||
aorieP7k ahna9AKe uveeX7Hi Ohji5pho AigheV7u Akee9fae aeWeiW4a tiex8Oht
|
||||
```
|
||||
|
||||
你的终端会呈现出一个密码列表!你还想要什么呢?好吧,如果你还想再仔细探索的话,`pwgen` 还有很多自定义选项,这些都可以在 man 手册里查阅到。
|
||||
|
||||
### 使用 gpg 工具生成密码
|
||||
|
||||
GPG 是一个遵循 OpenPGP 标准的加密及签名工具。大部分 gpg 工具都预先被安装好了(至少在我的 RHEL7 上是这样)。但如果没有的话你可以寻找 `gpg` 或 `gpg2` 软件包并[安装][6]它。
|
||||
|
||||
@ -107,10 +108,12 @@ mL8i+PKZ3IuN6a7a
|
||||
|
||||
这里我们传了生成随机字节序列选项(`--gen-random`),质量为 1(第一个参数),次数 12 (第二个参数)。选项 `--armor` 保证以 `base64` 编码输出。
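对应的完整命令大致如下(输出仅为示例):

```bash
root@kerneltalks # gpg --gen-random --armor 1 12
mL8i+PKZ3IuN6a7a
```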
|
||||
|
||||
### 使用 xkcdpass 生成密码
|
||||
|
||||
著名的极客幽默网站 [xkcd][7],发表了一篇非常有趣的文章,是关于好记但又复杂的密码的。你可以在[这里][8]阅读。所以 `xkcdpass` 工具就受这篇文章启发,做了这样的工作!这是一个 Python 软件包,可以在[这里][9]的 Python 的官网上找到它。
|
||||
|
||||

|
||||
|
||||
所有的安装使用说明都在上面那个页面提及了。这里是安装步骤和我的测试 RHEL 服务器的输出,以供参考。
|
||||
|
||||
```bash
|
||||
@ -229,7 +232,7 @@ Processing dependencies for xkcdpass==1.14.3
|
||||
Finished processing dependencies for xkcdpass==1.14.3
|
||||
```
|
||||
|
||||
现在运行 `xkcdpass` 命令,将会随机给出你几个像下面这样的字典单词:
|
||||
|
||||
```bash
|
||||
root@kerneltalks # xkcdpass
|
||||
@ -245,9 +248,10 @@ root@kerneltalks # xkcdpass |md5sum
|
||||
root@kerneltalks # xkcdpass |md5sum
|
||||
ad79546e8350744845c001d8836f2ff2 -
|
||||
```
|
||||
|
||||
或者你甚至可以把所有单词串在一起作为一个超长的密码,不仅非常好记,也不容易被电脑程序攻破。
|
||||
|
||||
Linux 上还有像 [Diceware][10]、 [KeePassX][11]、 [Revelation][12]、 [PasswordMaker][13] 这样的工具,也可以考虑用来生成强随机密码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -255,7 +259,7 @@ via: https://kerneltalks.com/tips-tricks/8-ways-to-generate-random-password-in-l
|
||||
|
||||
作者:[kerneltalks][a]
|
||||
译者:[heart4lor](https://github.com/heart4lor)
|
||||
校对:[Locez](https://github.com/locez)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -3,13 +3,13 @@
|
||||
|
||||

|
||||
|
||||
如果您从未使用过 [Git][1],甚至可能从未听说过它。莫慌张,只需要一步步地跟着这篇入门教程,很快您就会在 [GitHub][2] 上拥有一个全新的 Git 仓库。
|
||||
|
||||
在开始之前,让我们先理清一个常见的误解:Git 并不是 GitHub。Git 是一套版本控制系统(或者说是一款软件),能够协助您跟踪计算机程序和文件在任何时间的更改。它同样允许您在程序、代码和文件操作上与同事协作。GitHub 以及类似服务(包括 GitLab 和 BitBucket)都属于部署了 Git 程序的网站,能够托管您的代码。
|
||||
|
||||
### 步骤 1:申请一个 GitHub 账户
|
||||
|
||||
在 [GitHub.com][3] 网站上(免费)创建一个账户是最简单的方式。
|
||||
|
||||

|
||||
|
||||
@ -17,13 +17,13 @@
|
||||
|
||||

|
||||
|
||||
### 步骤 2:创建一个新的仓库
|
||||
|
||||
一个仓库( repository),类似于能储存物品的场所或是容器;在这里,我们创建仓库存储代码。在 `+` 符号(在插图的右上角,我已经选中它了) 的下拉菜单中选择 **New Repository**。
|
||||
|
||||

|
||||
|
||||
给您的仓库命名(比如说,Demo)然后点击 **Create Repository**。无需考虑本页面的其他选项。
|
||||
|
||||
恭喜!您已经在 GitHub.com 中建立了您的第一个仓库。
|
||||
|
||||
@ -33,73 +33,83 @@
|
||||
|
||||

|
||||
|
||||
不必惊慌,它比看上去简单。跟紧步骤。忽略其他内容,注意截图上的 “...or create a new repository on the command line,”。
|
||||
|
||||
在您的计算机中打开终端。
|
||||
|
||||

|
||||
|
||||
键入 `git` 然后回车。如果命令行显示 `bash: git: command not found`,在您的操作系统或发行版 [安装 Git][4] 命令。键入 `git` 并回车检查是否成功安装;如果安装成功,您将看见大量关于使用该命令的说明信息。
|
||||
|
||||
在终端内输入:
|
||||
|
||||
```
|
||||
mkdir Demo
|
||||
```
|
||||
|
||||
这个命令将会创建一个名为 Demo 的目录(文件夹)。
|
||||
|
||||
如下命令将会切换终端目录,跳转到 Demo 目录:
|
||||
|
||||
```
|
||||
cd Demo
|
||||
```
|
||||
|
||||
然后输入:
|
||||
|
||||
```
|
||||
echo "#Demo" >> README.md
|
||||
```
|
||||
|
||||
创建一个名为 `README.md` 的文件,并写入 `#Demo`。检查文件是否创建成功,请输入:
|
||||
|
||||
```
|
||||
cat README.md
|
||||
```
|
||||
|
||||
这将会为您显示 `README.md` 文件的内容,如果文件创建成功,您的终端会有如下显示:
|
||||
|
||||

|
||||
|
||||
使用 Git 程序告诉您的电脑,Demo 是一个被 Git 管理的目录,请输入:
|
||||
|
||||
```
|
||||
git init
|
||||
```
|
||||
|
||||
然后,告诉 Git 程序您关心的文件并且想在此刻起跟踪它的任何改变,请输入:
|
||||
|
||||
```
|
||||
git add README.md
|
||||
```
|
||||
|
||||
### 步骤 4:创建一次提交
|
||||
|
||||
目前为止,您已经创建了一个文件,并且已经通知了 Git,现在,是时候创建一次<ruby>提交<rt>commit</rt></ruby>了。提交可以看作是一个里程碑。每当完成一些工作之时,您都可以创建一次提交,保存文件当前版本,这样一来,您可以返回之前的版本,并且查看那时候的文件内容。无论何时您修改了文件,都可以对文件创建一个上一次的不一样的新版本。
|
||||
|
||||
创建一次提交,请输入:
|
||||
|
||||
```
|
||||
git commit -m "first commit"
|
||||
```
|
||||
|
||||
就是这样!刚才您创建了包含一条注释为 “first commit” 的 Git 提交。每次提交,您都必须编辑注释信息;它不仅能协助您识别提交,而且能让您理解此时您对文件做了什么修改。这样到了明天,如果您在文件中添加新的代码,您可以写一句提交信息:“添加了新的代码”,然后当您一个月后回来查看提交记录或者 Git 日志(即提交列表),您还能知道当时的您在文件夹里做了什么。
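顺带一提,您可以随时查看提交列表(下面给出一个简单的示意,提交哈希值因仓库而异):

```
git log --oneline
```

输出会类似 `a1b2c3d first commit` 这样,每一行对应一次提交。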
|
||||
|
||||
### 步骤 5: 将您的计算机与 GitHub 仓库相连接
|
||||
|
||||
现在,是时候用如下命令将您的计算机连接到 GitHub 仓库了:
|
||||
|
||||
```
|
||||
git remote add origin https://github.com/<your_username>/Demo.git
|
||||
```
|
||||
|
||||
让我们一步步的分析这行命令。我们通知 Git 去添加一个叫做 `origin` (起源)的,拥有地址为 `https://github.com/<your_username>/Demo.git`(它也是您的仓库的 GitHub 地址) 的 `remote` (远程仓库)。当您提交代码时,这允许您在 GitHub.com 和 Git 仓库交互时使用 `origin` 这个名称而不是完整的 Git 地址。为什么叫做 `origin`?当然,您可以叫点别的,只要您喜欢(惯例而已)。
|
||||
|
||||
现在,我们已经将本地 Demo 仓库副本连接到了其在 GitHub.com 远程副本上。您的终端看起来如下:
|
||||
|
||||

|
||||
|
||||
此刻我们已经连接到远程仓库,可以推送我们的代码到 GitHub.com(例如上传 `README.md` 文件)。
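推送的命令通常如下(这里假设您的分支名是 `master`,这是当时 Git 的默认分支名;如果您的仓库使用其他分支名,请相应替换):

```
git push -u origin master
```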
|
||||
|
||||
执行完毕后,您的终端会显示如下信息:
|
||||
|
||||
@ -109,7 +119,7 @@ git remote add origin https://github.com/<your_username>/Demo.git
|
||||
|
||||

|
||||
|
||||
就是这么回事!您已经创建了您的第一个 GitHub 仓库,连接到了您的电脑,并且从你的计算机推送(或者称:上传)一个文件到 GitHub.com 名叫 Demo 的远程仓库上了。下一次,我将编写关于 Git 复制(从 GitHub 上下载文件到你的计算机上)、添加新文件、修改现存文件、推送(上传)文件到 GitHub。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -117,7 +127,7 @@ via: https://opensource.com/article/18/1/step-step-guide-git
|
||||
|
||||
作者:[Kedar Vijay Kulkarni][a]
|
||||
译者:[CYLeft](https://github.com/CYLeft)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,61 @@
|
||||
Linux 内核 4.15:“一个不同寻常的发布周期”
|
||||
============================================================
|
||||
|
||||
|
||||

|
||||
|
||||
> Linus Torvalds 在周日发布了 Linux 的 4.15 版内核,比原计划发布时间晚了一周。了解这次发行版的关键更新。
|
||||
|
||||
Linus Torvalds 在周日(1 月 28 日)[发布了 Linux 内核的 4.15 版][7],再一次比原计划晚了一周。延迟发布的罪魁祸首是 “Meltdown” 和 “Spectre” bug,这两个漏洞使得开发者不得不在最后的开发周期中提交重大补丁。Torvalds 不愿意“赶工”,因此,他又给了一周时间去制作这个发行版本。
|
||||
|
||||
不出意外的话,这第一批补丁将是去修补前面提及的 [Meltdown 和 Spectre][8] 漏洞。为防范 Meltdown —— 这个影响 Intel 芯片的问题,在 x86 架构上[开发者实现了页表隔离(PTI)][9]。如果出于某种理由你想关闭它,可以使用内核引导选项 `pti=off` 去关闭这个特性。
|
||||
|
||||
Spectre v2 漏洞对 Intel 和 AMD 芯片都有影响,为防范它,[内核现在带来了 retpoline 机制][10]。Retpoline 要求 GCC 的版本支持 `-mindirect-branch=thunk-extern` 功能。由于使用了 PTI,Spectre 抑制机制可以被关闭,如果需要去关闭它,在引导时使用 `spectre_v2=off` 选项。尽管开发者努力去解决 Spectre v1,但是,到目前为止还没有一个解决方案,因此,在 4.15 的内核版本中并没有这个 bug 的修补程序。
|
||||
|
||||
在 ARM 上的 Meltdown 解决方案将在下一个开发周期中推送。但是,[对于 PowerPC 上的 bug,在这个发行版中包含了一个补救措施,那就是使用 L1-D 缓存的 RFI 冲刷特性][11]。
|
||||
|
||||
一个有趣的事情是,上面提及的所有受影响的新内核中,都带有一个 `/sys/devices/system/cpu/vulnerabilities/` 虚拟目录。这个目录显示了影响你的 CPU 的漏洞以及当前应用的补救措施。
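例如,可以用下面的命令查看这些文件的内容(以下输出仅为示意,实际结果取决于你的 CPU、内核版本以及启用的补救措施):

```
$ grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Mitigation: Full generic retpoline
```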
|
||||
|
||||
芯片带 bug(以及保守秘密的制造商)的问题重新唤起了开发可行的开源替代品的呼声。这使得已经合并到主线版本的内核提供了对 [RISC-V][12] 芯片的部分支持。RISC-V 是一个开源的指令集架构,它允许制造商去设计他们自己的基于 RISC-V 的芯片实现,并且因此也有了几个开源的芯片。虽然 RISC-V 芯片目前主要用于嵌入式设备,可以驱动智能硬盘或者像 Arduino 这样的开发板,但 RISC-V 的支持者认为这个架构也可以用于个人电脑,甚至是多节点的超级计算机上。
|
||||
|
||||
正如在上面提到的,[对 RISC-V 的支持][13],仍然没有全部完成,它虽然包含了架构代码,但是没有设备驱动。这意味着,虽然 Linux 内核可以在 RISC-V 芯片上运行,但是没有可行的方式与底层的硬件进行实质的交互。也就是说,RISC-V 不会受到其它闭源架构上的任何 bug 的影响,并且对它的支持的开发工作也在加速进行,因为,[RISC-V 基金会已经得到了一些行业巨头的支持][14]。
|
||||
|
||||
### 4.15 版新内核中的其它新特性
|
||||
|
||||
Torvalds 经常说他喜欢的事情是很无聊的。对他来说,幸运的是,除了 Spectre 和 Meltdown 引发的混乱之外,在 4.15 内核中的大部分其它东西都很普通,比如,对驱动的进一步改进、对新设备的支持等等。但是,还有几点需要重点指出,它们是:
|
||||
|
||||
* [AMD 对虚拟化安全加密的支持][3]。它允许内核通过加密来实现对虚拟机内存的保护。加密的内存仅能够被使用它的虚拟机所解密。就算是 hypervisor 也不能看到它内部的数据。这意味着在云中虚拟机正在处理的数据,在虚拟机外的任何进程都看不到。
|
||||
* 由于[包含了“显示代码”][4],AMD GPU 得到了极大的提升,这使得 Radeon RX Vega 和 Raven Ridge 显卡得到了内核主线版本的支持,并且也在 AMD 显卡中实现了 HDMI/DP 音频。
|
||||
* 树莓派的爱好者应该很高兴,因为在新内核中, [7" 触摸屏现在已经得到原生支持][5],这将产生成百上千的有趣的项目。
|
||||
|
||||
要发现更多的特性,你可以去查看在 [Kernel Newbies][15] 和 [Phoronix][16] 上的内容。
|
||||
|
||||
_想学习更多的 Linux 的知识,可以去学习来自 Linux 基金会和 edX 的免费课程 —— ["了解 Linux" ][6]。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
|
||||
|
||||
作者:[PAUL BROWN][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/bro66
|
||||
[1]:https://www.linux.com/licenses/category/creative-commons-zero
|
||||
[2]:https://www.linux.com/files/images/background-penguinpng
|
||||
[3]:https://git.kernel.org/linus/33e63acc119d15c2fac3e3775f32d1ce7a01021b
|
||||
[4]:https://git.kernel.org/torvalds/c/f6705bf959efac87bca76d40050d342f1d212587
|
||||
[5]:https://git.kernel.org/linus/2f733d6194bd58b26b705698f96b0f0bd9225369
|
||||
[6]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
||||
[7]:https://lkml.org/lkml/2018/1/28/173
|
||||
[8]:https://meltdownattack.com/
|
||||
[9]:https://git.kernel.org/linus/5aa90a84589282b87666f92b6c3c917c8080a9bf
|
||||
[10]:https://git.kernel.org/linus/76b043848fd22dbf7f8bf3a1452f8c70d557b860
|
||||
[11]:https://git.kernel.org/linus/aa8a5e0062ac940f7659394f4817c948dc8c0667
|
||||
[12]:https://riscv.org/
|
||||
[13]:https://git.kernel.org/torvalds/c/b293fca43be544483b6488d33ad4b3ed55881064
|
||||
[14]:https://riscv.org/membership/
|
||||
[15]:https://kernelnewbies.org/Linux_4.15
|
||||
[16]:https://www.phoronix.com/scan.php?page=search&q=Linux+4.15
|
103
published/20180131 Why you should use named pipes on Linux.md
Normal file
@ -0,0 +1,103 @@
|
||||
为什么应该在 Linux 上使用命名管道
|
||||
======
|
||||
|
||||
> 命名管道并不常用,但是它们为进程间通讯提供了一些有趣的特性。
|
||||
|
||||

|
||||
|
||||
估计每一位 Linux 使用者都熟悉使用 “|” 符号将数据从一个进程传输到另一个进程的操作。它使用户能简便地从一个命令输出数据到另一个命令,并筛选出想要的数据而无须写脚本进行选择、重新格式化等操作。
|
||||
|
||||
还有另一种管道,虽然也叫“管道”这个名字,却有着非常不同的性质,即您可能尚未使用甚至尚未知晓的命名管道。
|
||||
|
||||
普通管道与命名管道的一个主要区别就是命名管道是以文件形式实实在在地存在于文件系统中的,没错,它们表现出来就是文件。但是与其它文件不同的是,命名管道文件似乎从来没有文件内容。即使用户往命名管道中写入大量数据,该文件看起来还是空的。
|
||||
|
||||
### 如何在 Linux 上创建命名管道
|
||||
|
||||
在我们研究这些空空如也的命名管道之前,先追根溯源来看看命名管道是如何被创建的。您应该使用名为 `mkfifo` 的命令来创建它们。为什么提及 “FIFO”?是因为命名管道也被认为是一种 FIFO 特殊文件。术语 “FIFO” 指的是它的<ruby>先进先出<rt>first-in, first-out</rt></ruby>特性。如果你将冰淇淋盛放到碟子中,然后再品尝它,那么你执行的就是一个 LIFO(<ruby>后进先出<rt>last-in, first-out</rt></ruby>)操作。如果你通过吸管喝奶昔,那你就在执行一个 FIFO 操作。好,接下来是一个创建命名管道的例子。
|
||||
|
||||
```
|
||||
$ mkfifo mypipe
|
||||
$ ls -l mypipe
|
||||
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
|
||||
```
|
||||
|
||||
注意一下特殊的文件类型标记 “p” 以及该文件大小为 0。您可以将重定向数据写入命名管道文件,而文件大小依然为 0。
|
||||
|
||||
```
|
||||
$ echo "Can you read this?" > mypipe
|
||||
```
|
||||
|
||||
正如上面所说,敲击回车后似乎什么都没有发生(LCTT 译注:没有返回命令行提示符)。
|
||||
|
||||
另外再开一个终端,查看该命名管道的大小,依旧是 0:
|
||||
|
||||
```
|
||||
$ ls -l mypipe
|
||||
prw-r-----. 1 shs staff 0 Jan 31 13:59 mypipe
|
||||
```
|
||||
|
||||
也许这有违直觉,用户输入的文本已经进入该命名管道,而你仍然卡在输入端。你或者其他人应该等在输出端,并准备读取放入管道的数据。现在让我们读取看看。
|
||||
|
||||
```
|
||||
$ cat mypipe
|
||||
Can you read this?
|
||||
```
|
||||
|
||||
一旦被读取之后,管道中的内容就没有了。
|
||||
|
||||
另一种研究命名管道如何工作的方式是通过将放入数据的操作置入后台来执行两个操作(将数据放入管道,而在另外一段读取它)。
|
||||
|
||||
```
|
||||
$ echo "Can you read this?" > mypipe &
|
||||
[1] 79302
|
||||
$ cat mypipe
|
||||
Can you read this?
|
||||
[1]+ Done echo "Can you read this?" > mypipe
|
||||
```
|
||||
|
||||
一旦管道被读取或“耗干”,该管道就清空了,尽管我们还能看见它并再次使用。可为什么要费此周折呢?
|
||||
|
||||
### 为何要使用命名管道?
|
||||
|
||||
命名管道很少被使用的理由似乎很充分。毕竟在 Unix 系统上,总有多种不同的方式完成同样的操作。有多种方式写文件、读文件、清空文件,不过命名管道确实比它们来得更高效。
|
||||
|
||||
值得注意的是,命名管道的内容驻留在内存中而不是被写到硬盘上。数据内容只有在输入输出端都打开时才会传送。用户可以在管道的输出端打开之前向管道多次写入。通过使用命名管道,用户可以创建一个进程写入管道并且另外一个进程读取管道的流程,而不用关心协调二者时间上的同步。
|
||||
|
||||
用户可以创建一个单纯等待数据出现在管道输出端的进程,并在拿到输出数据后对其进行操作。下列命令我们采用 `tail` 来等待数据出现。
|
||||
|
||||
```
|
||||
$ tail -f mypipe
|
||||
```
|
||||
|
||||
一旦供给管道数据的进程结束了,我们就可以看到一些输出。
|
||||
|
||||
```
|
||||
$ tail -f mypipe
|
||||
Uranus replicated to WCDC7
|
||||
Saturn replicated to WCDC8
|
||||
Pluto replicated to WCDC9
|
||||
Server replication operation completed
|
||||
```
|
||||
|
||||
如果研究一下向命名管道写入的进程,用户也许会惊讶于它的资源消耗之少。在下面的 `ps` 命令输出中,唯一显著的资源消耗是虚拟内存(VSZ 那一列)。
|
||||
|
||||
```
|
||||
ps u -P 80038
|
||||
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
|
||||
shs 80038 0.0 0.0 108488 764 pts/4 S 15:25 0:00 -bash
|
||||
```
|
||||
|
||||
命名管道与 Unix/Linux 系统上更常用的管道相比足以不同到拥有另一个名号,但是“管道”确实能反映出它们如何在进程间传送数据的形象,故将其称为“命名管道”还真是恰如其分。也许您在执行操作时就能从这个聪明的 Unix/Linux 特性中获益匪浅呢。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3251853/linux/why-use-named-pipes-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
译者:[YPBlib](https://github.com/YPBlib)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb
|
127
published/20180201 Custom Embedded Linux Distributions.md
Normal file
@ -0,0 +1,127 @@
|
||||
定制嵌入式 Linux 发行版
|
||||
======
|
||||
|
||||
便宜的物联网板的普及,意味着是时候不仅要掌控应用程序,还要掌控整个软件平台了。那么,如何构建一个可以交叉编译应用程序的、针对特定用途的定制发行版呢?正如 Michael J. Hammel 在这里解释的那样,它并不像你想象的那么难。
|
||||
|
||||
### 为什么要定制?
|
||||
|
||||
以前,许多嵌入式项目都使用现成的发行版,然后出于种种原因,再将它们剥离到只剩下基本的必需的东西。首先,移除不需要的包以减少占用的存储空间。在启动时,嵌入式系统一般不需要大量的存储空间以及可用存储空间。在嵌入式系统运行时,可能从非易失性内存中拷贝大量的操作系统文件到内存中。第二,移除用不到的包可以降低可能的攻击面。如果你不需要它们就没有必要把这些可能有漏洞的包挂在上面。最后,移除用不到包可以降低发行版管理的开销。如果在包之间有依赖关系,意味着任何一个包请求从上游更新,那么它们都必须保持同步。那样可能就会出现验证噩梦。
|
||||
|
||||
然而,从一个现有的发行版中去移除包并不像说的那样容易。移除一个包可能会打破与其它包保持的各种依赖关系,以及可能在上游的发行版管理中改变依赖。另外,由于一些包原生集成在引导或者运行时进程中,它们并不能轻易地简单地移除。所有这些都是项目之外的平台的管理,并且有可能会导致意外的开发延迟。
|
||||
|
||||
一个流行的选择是使用上游发行版供应商提供的构建工具去构建一个定制的发行版。无论是 Gentoo 还是 Debian 都提供这种自下而上的构建方式。这些构建工具中最为流行的可能是 Debian 的 debootstrap 实用程序。它取出预构建的核心组件并允许用户去精选出它们感兴趣的包来构建用户自己的平台。但是,debootstrap 最初仅在 x86 平台上可用,虽然,现在有了 ARM(也有可能会有其它的平台)选项。debootstrap 和 Gentoo 的 catalyst 仍然需要从本地项目中将依赖管理移除。
|
||||
|
||||
一些人认为让别人去管理平台软件(像 Android 一样)要比自己亲自管理容易的多。但是,那些发行版都是多用途的,当你在一个轻量级的、资源有限的物联网设备上使用它时,你可能会再三考虑从你手中被拿走的任何资源。
|
||||
|
||||
### 系统引导的基石
|
||||
|
||||
一个定制的 Linux 发行版要求许多软件组件。其中第一个就是<ruby>工具链<rt>toolchain</rt></ruby>。工具链是用于编译软件的一套工具集。包括(但不限于)一个编译器、链接器、二进制操作工具以及标准的 C 库。工具链是为一个特定的目标硬件设备专门构建的。如果一个构建在 x86 系统上的工具链想要用于树莓派,那么这个工具链就被称为交叉编译工具链。当在内存和存储都十分有限的小型嵌入式设备上工作时,最好是使用一个交叉编译工具链。需要注意的是,即便是使用像 JavaScript 这样的需要运行在特定平台的脚本语言为特定用途编写的应用程序,也需要使用交叉编译工具链编译。
|
||||
|
||||

|
||||
|
||||
*图 1. 编译依赖和引导顺序*
|
||||
|
||||
交叉编译工具链用于为目标硬件构建软件组件。需要的第一个组件是<ruby>引导加载程序<rt>bootloader</rt></ruby>。当计算机主板加电之后,处理器(可能有差异,取决于设计)尝试去跳转到一个特定的内存位置去开始运行软件。那个内存位置就是保存引导加载程序的地方。硬件可能有内置的引导加载程序,它可能直接从它的存储位置或者可能在它运行前首先拷贝到内存中。也可能会有多个引导加载程序。例如,第一阶段的引导加载程序可能位于硬件的 NAND 或者 NOR 闪存中。它唯一的功能是设置硬件以便于执行第二阶段的引导加载程序——比如,存储在 SD 卡中的可以被加载并运行的引导加载程序。
|
||||
|
||||
引导加载程序能够从硬件中取得足够的信息,将 Linux 加载到内存中并跳转到正确的位置,将控制权有效地移交到 Linux。Linux 是一个操作系统。这意味着,在这种设计中,它除了监控硬件和向上层软件(也就是应用程序)提供服务外,它实际上什么都不做。[Linux 内核][1] 中通常是各种各样的固件块。那些预编译的软件对象,通常包含硬件平台使用的设备的专用 IP(知识资产)。当构建一个定制发行版时,在开始编译内核之前,它可能会要求获得一些 Linux 内核源代码树没有提供的必需的固件块。
|
||||
|
||||
应用程序保存在根文件系统中,这个根文件系统是通过编译构建的,它集合了各种软件库、工具、脚本以及配置文件。总的来说,它们都提供各种服务,比如,网络配置和 USB 设备挂载,这些都是将要运行的项目应用程序所需要的。
|
||||
|
||||
总的来说,一个完整的系统构建要求下列的组件:
|
||||
|
||||
1. 一个交叉编译工具链
|
||||
2. 一个或多个引导加载程序
|
||||
3. Linux 内核和相关的固件块
|
||||
4. 一个包含库、工具以及实用程序的根文件系统
|
||||
5. 定制的应用程序
|
||||
|
||||
### 使用适当的工具开始构建
|
||||
|
||||
交叉编译工具链的组件可以手工构建,但这是一个很复杂的过程。幸运的是,现有的工具可以很容易地完成这一过程。构建交叉编译工具链的最好工具可能是 [Crosstool-NG][2],这个工具使用了与 Linux 内核相同的 kconfig 菜单系统来构建工具链的每个细节和方面。使用这个工具的关键是,为目标平台找到正确的配置项。配置项通常包含下列内容:
|
||||
|
||||
1. 目标架构,比如,是 ARM 还是 x86。
|
||||
2. 字节顺序:小端字节顺序(一般情况下,Intel 采用这种顺序)还是大端字节顺序(一般情况下,ARM 或者其它的平台采用这种顺序)。
|
||||
3. 编译器已知的 CPU 类型,比如,GCC 可以使用 `-mcpu` 或 `--with-cpu`。
|
||||
4. 支持的浮点类型,如果有的话,比如,GCC 可以使用 `-mfpu` 或 `--with-fpu`。
|
||||
5. <ruby>二进制实用工具<rt>binutils</rt></ruby>、C 库以及 C 编译器的特定版本信息。
|
||||
|
||||

|
||||
|
||||
*图 2. Crosstool-NG 配置菜单*
|
||||
|
||||
前四个一般情况下可以从处理器制造商的文档中获得。对于较新的处理器,它们可能不容易找到,但是,像树莓派或者 BeagleBoards(以及它们的后代和分支),你可以在像 [嵌入式 Linux Wiki][3] 这样的地方找到相关信息。
|
||||
|
||||
二进制实用工具、C 库、以及 C 编译器的版本,将与任何第三方提供的其它工具链分开。首先,它们中的每一个都有多个提供者。Linaro 为最新的处理器类型提供了最先进的版本,同时致力于将该支持合并到像 GNU C 库这样的上游项目中。尽管你可以使用各种提供者的工具,你可能依然想去使用现成的 GNU 工具链或者相同的 Linaro 版本。
|
||||
|
||||
在 Crosstool-NG 中的另外的重要选择是 Linux 内核的版本。这个选择将得到用于各种工具链组件的<ruby>头文件<rt>headers</rt></ruby>,但是它没有必要一定与你在目标硬件上将要引导的 Linux 内核相同。选择一个不比目标硬件的内核更新的 Linux 内核是很重要的。如果可能的话,尽量选择一个比目标硬件使用的内核更老的长周期支持(LTS)的内核。
|
||||
|
||||
对于大多数不熟悉构建定制发行版的开发者来说,工具链的构建是最为复杂的过程。幸运的是,大多数硬件平台的二进制工具链都可以想办法得到。如果构建一个定制的工具链有问题,可以在线搜索像 [嵌入式 Linux Wiki][4] 这样的地方去查找预构建工具链。
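以 Crosstool-NG 为例,构建一个工具链的典型流程大致如下(其中的样例名 `arm-unknown-linux-gnueabi` 仅为示例,实际可用的样例请以 `ct-ng list-samples` 的输出为准):

```
ct-ng list-samples                 # 列出随 Crosstool-NG 附带的目标样例配置
ct-ng arm-unknown-linux-gnueabi    # 以某个 ARM 样例配置为起点
ct-ng menuconfig                   # 按目标硬件调整上述各个配置项
ct-ng build                        # 构建交叉编译工具链
```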
|
||||
|
||||
### 引导选项
|
||||
|
||||
在构建完工具链之后,接下来的工作是引导加载程序。引导加载程序用于设置硬件,以便于越来越复杂的软件能够使用这些硬件。第一阶段的引导加载程序通常由目标平台制造商提供,它通常被烧录到类似于 EEPROM 或者 NOR 闪存这类的在硬件上的存储中。第一阶段的引导加载程序将使设备从这里(比如,一个 SD 存储卡)开始引导。树莓派的引导加载程序就是这样的,这样也就没有必要再去创建一个定制引导加载程序。
|
||||
|
||||
尽管如此,许多项目还是增加了第二阶段的引导加载程序,以便于去执行一个多样化的任务。在无需使用 Linux 内核或者像 plymouth 这样的用户空间工具的情况下提供一个启动动画,就是其中一个这样的任务。一个更常见的第二阶段引导加载程序的任务是去提供基于网络的引导或者使连接到 PCI 上的磁盘可用。在那种情况下,一个第三阶段的引导加载程序,比如 GRUB,可能才是让系统运行起来所必需的。
|
||||
|
||||
最重要的是,引导加载程序加载 Linux 内核并使它开始运行。如果第一阶段引导加载程序没有提供一个在启动时传递内核参数的机制,那么,在第二阶段的引导加载程序中就必须要提供。
|
||||
|
||||
有许多的开源引导加载程序可以使用。[U-Boot 项目][5] 通常用于像树莓派这样的 ARM 平台。CoreBoot 一般是用于像 Chromebook 这样的 x86 平台。引导加载程序是特定于目标硬件专用的。引导加载程序的选择总体上取决于项目的需求以及目标硬件(可以去网络上在线搜索开源引导加载程序的列表)。
|
||||
|
||||
### 现在到了 Linux 登场的时候
|
||||
|
||||
引导加载程序将加载 Linux 内核到内存中,然后去运行它。Linux 就像一个扩展的引导加载程序:它进行进行硬件设置以及准备加载高级软件。内核的核心将设置和提供在应用程序和硬件之间共享使用的内存;提供任务管理器以允许多个应用程序同时运行;初始化没有被引导加载程序配置的或者是已经配置了但是没有完成的硬件组件;以及开启人机交互界面。内核也许不会配置为在自身完成这些工作,但是,它可以包含一个嵌入的、轻量级的文件系统,这类文件系统大家熟知的有 initramfs 或者 initrd,它们可以独立于内核而创建,用于去辅助设置硬件。
|
||||
|
||||
内核操作的另外的事情是去下载二进制块(通常称为固件)到硬件设备。固件是用特定格式预编译的对象文件,用于在引导加载程序或者内核不能访问的地方去初始化特定硬件。许多这种固件对象可以从 Linux 内核源仓库中获取,但是,还有很多其它的固件只能从特定的硬件供应商处获得。例如,经常由它们自己提供固件的设备有数字电视调谐器或者 WiFi 网卡等。
|
||||
|
||||
固件可以从 initramfs 中加载,也或者是在内核从根文件系统中启动 init 进程之后加载。但是,当你去创建一个定制的 Linux 发行版时,创建内核的过程常常就是获取各种固件的过程。
|
||||
|
||||
### 轻量级核心平台
|
||||
|
||||
Linux 内核做的最后一件事情是尝试去运行一个被称为 init 进程的专用程序。这个专用程序的名字可能是 init 或者 linuxrc 或者是由加载程序传递给内核的名字。init 进程保存在一个能够被内核访问的文件系统中。在 initramfs 这种情况下,这个文件系统保存在内存中(它可能是被内核自己放置到那里,也可能是被引导加载程序放置在那里)。但是,对于运行更复杂的应用程序,initramfs 通常并不够完整。因此需要另外一个文件系统,这就是众所周知的根文件系统。
|
||||
|
||||

|
||||
|
||||
*图 3. 构建 root 配置菜单*
|
||||
|
||||
initramfs 文件系统可以使用 Linux 内核自身构建,但是更常用的作法是,使用一个被称为 [BusyBox][6] 的项目去创建。BusyBox 组合许多 GNU 实用程序(比如,grep 或者 awk)到一个单个的二进制文件中,以便于减小文件系统自身的大小。BusyBox 通常用于去启动根文件系统的创建过程。
|
||||
|
||||
但是,BusyBox 是特意轻量化设计的。它并不打算提供目标平台所需要的所有工具,甚至提供的工具也是经过功能简化的。BusyBox 有一个“姊妹”项目叫做 [Buildroot][7],它可以用于去得到一个完整的根文件系统,提供了各种库、实用程序,以及脚本语言。像 Crosstool-NG 和 Linux 内核一样,BusyBox 和 Buildroot 也都允许使用 kconfig 菜单系统去定制配置。更重要的是,Buildroot 系统自动处理依赖关系,因此,选定的实用程序将会保证该程序所需要的软件也会被构建并安装到 root 文件系统。
|
||||
|
||||
Buildroot 可以用多种格式去生成一个根文件系统包。但是,需要重点注意的是,这个文件系统是被归档的。单个的实用程序和库并不是以 Debian 或者 RPM 格式打包进去的。使用 Buildroot 将生成一个根文件系统镜像,但是它的内容不是单独的包。即使如此,Buildroot 还是提供了对 opkg 和 rpm 包管理器的支持的。这意味着,虽然根文件系统自身并不支持包管理,但是,安装在根文件系统上的定制应用程序能够进行包管理。
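使用 Buildroot 生成根文件系统的典型流程大致如下(其中 `raspberrypi3_defconfig` 仅为示例,实际可用的参考配置请以 `make list-defconfigs` 的输出为准):

```
make list-defconfigs               # 列出 Buildroot 自带的参考配置
make raspberrypi3_defconfig        # 以某个参考配置为起点
make menuconfig                    # 按需增删软件包和功能
make                               # 构建,生成的镜像位于 output/images/ 目录下
```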
|
||||
|
||||
### 交叉编译和脚本化
|
||||
|
||||
Buildroot 的其中一个特性是能够生成一个临时树。这个目录包含库和实用程序,它可以被用于去交叉编译其它应用程序。使用临时树和交叉编译工具链,使得在主机系统上而不是目标平台上对 Buildroot 之外的其它应用程序编译成为可能。使用 rpm 或者 opkg 包管理软件之后,这些应用程序可以在运行时使用包管理软件安装在目标平台的根文件系统上。
|
||||
|
||||
大多数定制系统的构建都是围绕着用脚本语言构建应用程序的想法去构建的。如果需要在目标平台上运行脚本,在 Buildroot 上有多种可用的选择,包括 Python、PHP、Lua 以及基于 Node.js 的 JavaScript。对于需要使用 OpenSSL 加密的应用程序也提供支持。
|
||||
|
||||
### 接下来做什么
|
||||
|
||||
Linux 内核和引导加载程序的编译过程与大多数应用程序是一样的。它们的构建系统被设计为去构建一个专用的软件位。Crosstool-NG 和 Buildroot 是<ruby>元构建<rt>metabuild</rt></ruby>。元构建是将一系列有自己构建系统的软件集合封装为一个构建系统。可靠的元构建包括 [Yocto][8] 和 [OpenEmbedded][9]。Buildroot 的好处是可以将更高级别的元构建进行轻松的封装,以便于将定制 Linux 发行版的构建过程自动化。这样做之后,将会打开 Buildroot 指向到项目专用的缓存仓库的选项。使用缓存仓库可以加速开发过程,并且可以在无需担心上游仓库变化的情况下提供构建快照。
|
||||
|
||||
一个实现高级构建系统的示例是 [PiBox][10]。PiBox 就是封装了在本文中讨论的各种工具的一个元构建。它的目的是围绕所有工具去增加一个通用的 GNU Make 目标架构,以生成一个核心平台,这个平台可以构建或分发其它软件。PiBox 媒体中心和 kiosk 项目是安装在核心平台之上的应用层软件的实现,目的是用于去产生一个构建平台。[Iron Man 项目][11] 是为了家庭自动化的目的而扩展了这种应用程序,它集成了语音管理和物联网设备的管理。
|
||||
|
||||
但是,PiBox 如果没有这些核心的软件工具,它什么也做不了。并且,如果不去深入了解一个完整的定制发行版的构建过程,那么你将无法正确运行 PiBox。而且,如果没有 PiBox 开发团队对这个项目的长期奉献,也就没有 PiBox 项目,它完成了定制发行版构建中的大量任务。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/custom-embedded-linux-distributions
|
||||
|
||||
作者:[Michael J.Hammel][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/user/1000879
|
||||
[1]:https://www.kernel.org
|
||||
[2]:http://crosstool-ng.github.io
|
||||
[3]:https://elinux.org/Main_Page
|
||||
[4]:https://elinux.org/Main_Page
|
||||
[5]:https://www.denx.de/wiki/U-Boot
|
||||
[6]:https://busybox.net
|
||||
[7]:https://buildroot.org
|
||||
[8]:https://www.yoctoproject.org
|
||||
[9]:https://www.openembedded.org/wiki/Main_Page
|
||||
[10]:https://www.piboxproject.com
|
||||
[11]:http://redmine.graphics-muse.org/projects/ironman/wiki/Getting_Started
|
91
published/20180202 Tuning MySQL 3 Simple Tweaks.md
Normal file
@ -0,0 +1,91 @@
|
||||
优化 MySQL: 3 个简单的小调整
|
||||
============================================================
|
||||
|
||||
如果你不改变 MySQL 的缺省配置,你的服务器的性能就像下图这辆卡在一档的法拉利一样 “虎落平阳被犬欺” …
|
||||
|
||||

|
||||
|
||||
我并不期望成为一个专家级的 DBA,但是,在我优化 MySQL 时,我推崇 80/20 原则,明确说就是通过简单的调整一些配置,你可以压榨出高达 80% 的性能提升。尤其是在服务器资源越来越便宜的当下。
|
||||
|
||||
### 警告
|
||||
|
||||
1. 没有两个数据库或者应用程序是完全相同的。这里假设我们要调整的数据库是为一个“典型”的 Web 网站服务的,优先考虑的是快速查询、良好的用户体验以及处理大量的流量。
|
||||
2. 在你对服务器进行优化之前,请做好数据库备份!
|
||||
|
||||
### 1、 使用 InnoDB 存储引擎
|
||||
|
||||
如果你还在使用 MyISAM 存储引擎,那么是时候转换到 InnoDB 了。有很多的理由都表明 InnoDB 比 MyISAM 更有优势,如果你关注性能,那么,我们来看一下它们是如何利用物理内存的:
|
||||
|
||||
* MyISAM:仅在内存中保存索引。
|
||||
* InnoDB:在内存中保存索引**和**数据。
|
||||
|
||||
结论:保存在内存的内容访问速度要比磁盘上的更快。
|
||||
|
||||
下面是如何在你的表上去转换存储引擎的命令:
|
||||
|
||||
```
|
||||
ALTER TABLE table_name ENGINE=InnoDB;
|
||||
```
|
||||
|
||||
*注意:你已经创建了所有合适的索引,对吗?为了更好的性能,创建索引永远是第一优先考虑的事情。*
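要找出哪些表还在使用 MyISAM,可以查询 `information_schema`(以下命令仅为示意,`your_database` 需替换为你自己的库名):

```
mysql -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES
          WHERE TABLE_SCHEMA = 'your_database' AND ENGINE = 'MyISAM';"
```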
|
||||
|
||||
### 2、 让 InnoDB 使用所有的内存
|
||||
|
||||
你可以在 `my.cnf` 文件中编辑你的 MySQL 配置。使用 `innodb_buffer_pool_size` 参数去配置在你的服务器上允许 InnoDB 使用物理内存数量。
|
||||
|
||||
对此(假设你的服务器仅仅运行 MySQL),公认的“经验法则”是设置为你的服务器物理内存的 80%。在保证操作系统不使用交换分区而正常运行所需要的足够内存之后,尽可能多地为 MySQL 分配物理内存。
|
||||
|
||||
因此,如果你的服务器物理内存是 32 GB,可以将那个参数设置为多达 25 GB。
|
||||
|
||||
```
|
||||
innodb_buffer_pool_size = 25600M
|
||||
```
|
||||
|
||||
*注意:(1)如果你的服务器内存较小并且小于 1 GB,为了适用本文的方法,你应该去升级你的服务器。(2)如果你的服务器内存特别大,比如,它有 200 GB,那么,根据一般常识,你也没有必要为操作系统保留多达 40 GB 的内存。*
|
||||
|
||||
### 3、 让 InnoDB 多任务运行
|
||||
|
||||
如果服务器上的参数 `innodb_buffer_pool_size` 的配置是大于 1 GB,将根据参数 `innodb_buffer_pool_instances` 的设置, 将 InnoDB 的缓冲池划分为多个。
|
||||
|
||||
拥有多于一个的缓冲池的好处有:
|
||||
|
||||
> 在多线程同时访问缓冲池时可能会遇到瓶颈。你可以通过启用多缓冲池来最小化这种争用情况:
|
||||
|
||||
对于缓冲池数量的官方建议是:
|
||||
|
||||
> 为了实现最佳的效果,要综合考虑 `innodb_buffer_pool_instances` 和 `innodb_buffer_pool_size` 的设置,以确保每个实例至少有不小于 1 GB 的缓冲池。
|
||||
|
||||
因此,在我们的示例中,将参数 `innodb_buffer_pool_size` 设置为 25 GB 的拥有 32 GB 物理内存的服务器上。一个合适的设置为 25600M / 24 = 1.06 GB
|
||||
|
||||
```
|
||||
innodb_buffer_pool_instances = 24
|
||||
```
|
||||
|
||||
### 注意!
|
||||
|
||||
在修改了 `my.cnf` 文件后需要重启 MySQL 才能生效:
|
||||
|
||||
```
|
||||
sudo service mysql restart
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
还有更多更科学的方法来优化这些参数,但是这几点可以作为一个通用准则来应用,将使你的 MySQL 服务器性能更好。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
我喜欢商业技术以及跑车 | 集团 CTO @ Parcel Monkey, Cloud Fulfilment & Kong。
|
||||
|
||||
------
|
||||
|
||||
via: https://medium.com/@richb_/tuning-mysql-3-simple-tweaks-6356768f9b90
|
||||
|
||||
作者:[Rich Barrett](https://medium.com/@richb_)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,66 @@
|
||||
开源软件二十年 —— 过去,现在,未来
|
||||
=================================================================
|
||||
|
||||
> 谨以此文纪念 “<ruby>开源软件<rt>open source software</rt></ruby>” 这个名词的二十周年纪念日,开源软件是怎么占有软件的主导地位的 ?以后会如何发展?
|
||||
|
||||

|
||||
|
||||
二十年以前,在 1998 年 2 月,“<ruby>开源<rt>open source</rt></ruby>” 这个词汇第一次出现在“软件”这一名词之前。不久之后,<ruby>开源的定义<rt>Open Source Definition</rt></ruby>(OSD)这一文档被创建,并成为了孕育<ruby>开放源代码促进会<rt>Open Source Initiative</rt></ruby>(OSI)的种子。正如 OSD 的作者 [Bruce Perens 所说][9]:
|
||||
|
||||
> “开源”是这场宣传自由软件的既有概念到商业软件,并将许可证化为一系列规则的运动的正式名称。
|
||||
|
||||
二十年后,我们能看到这一运动是非常成功的,甚至超出了当时参与这一活动的任何人的想象。 如今,开源软件无处不在。它是互联网和网络的基础。它为我们所有使用的电脑和移动设备,以及它们所连接的网络提供动力。没有它,云计算和新兴的物联网将不可能发展,甚至不可能出现。它使新的业务方式可以被测试和验证,还可以让像谷歌和 Facebook 这样的大公司在已有的基础之上继续攀登。
|
||||
|
||||
如任何人类的造物一样,它也有黑暗的一面。它也让反乌托邦的监视和必然导致的专制控制的出现成为了可能。它为犯罪分子提供了欺骗受害者的新的途径,还让匿名且大规模的欺凌得以存在。它让有破环性的狂热分子可以暗中组织而不会感到有何不便。这些都是开源的能力之下的黑暗投影。所有的人类工具都是如此,既可以养育人类,亦可以有害于人类。我们需要帮助下一代,让他们能争取无可取代的创新。就如 [费曼所说][10]:
|
||||
|
||||
> 每个人都掌握着一把开启天堂之门的钥匙,但这把钥匙亦能打开地狱之门。
|
||||
|
||||
开源运动已经渐渐成熟。我们讨论和理解它的方式也渐渐的成熟。如果说第一个十年是拥护与非议对立的十年,那么第二个十年就是接纳和适应并存的十年。
|
||||
|
||||
1. 在第一个十年里面,关键问题就是商业模型 —— “我怎样才能自由的贡献代码,且从中受益?” —— 之后,还有更多的人提出了有关管理的难题—— “我怎么才能参与进来,且不受控制 ?”
|
||||
2. 第一个十年的开源项目主要是替代现有的产品;而在第二个十年中,它们更多地是作为更大的解决方案的组成部分。
|
||||
3. 第一个十年的项目往往由非正式的个人组织进行;而在第二个十年中,它们经常由创建于各个项目上的机构经营。
|
||||
4. 第一个十年的开源开发者经常是投入于单一的项目,并经常在业余时间工作。 在第二个十年里,他们越来越多地受雇从事于一个专门的技术 —— 他们成了专业人员。
|
||||
5. 尽管开源一直被认为是提升软件自由度的一种方式,但在第一个十年中,这个运动与那些更喜欢使用“<ruby>自由软件<rt>free software</rt></ruby>”的人产生了冲突。在第二个十年里,随着开源运动的加速发展,这个冲突基本上被忽略了。
|
||||
|
||||
第三个十年会带来什么?
|
||||
|
||||
1. _更复杂的商业模式_ —— 主要的商业模式涉及到将很多开源组件整合而产生的复杂性的解决方案,特别是部署和扩展方面。 开源治理的需求将反映这一点。
|
||||
2. _开源拼图_ —— 开源项目将主要是一系列组件,彼此衔接构成组件堆栈。由此产生的解决方案将是开源组件的拼图。
|
||||
3. _项目族_ —— 越来越多的项目将由诸如 Linux Foundation 和 OpenStack 等联盟/行业协会以及 Apache 和 Software Freedom Conservancy 等机构主办。
|
||||
4. _专业通才_ —— 开源开发人员将越来越多地被雇来将诸多技术集成到复杂的解决方案里,这将有助于一系列的项目的开发。
|
||||
5. _软件自由度降低_ —— 随着新问题的出现,软件自由(将四项自由应用于用户和开发人员之间的灵活性)将越来越多地应用于识别解决方案是否适用于协作社区和独立部署人员。
|
||||
|
||||
2018 年,我将在全球各地的主题演讲中阐述这些内容。欢迎观看 [OSI 20 周年纪念全球巡演][11]!
|
||||
|
||||
_本文最初发表于 [Meshed Insights Ltd.][2] , 已获转载授权,本文,以及我在 OSI 的工作,由 [Patreon patrons][3] 支持_
|
||||
|
||||
### 关于作者
|
||||
|
||||
Simon Phipps - 计算机工业和开源软件专家 Simon Phipps 创办了[公共软件公司][4],这是一个欧洲开源项目的托管公司,以志愿者身份成为 OSI 的总裁,还是 The Document Foundation 的一名总监。 他的作品是由 [Patreon patrons][5] 赞助 —— 如果你想看更多的话,来做赞助人吧! 在超过 30 年的职业生涯中,他一直在参与世界领先的战略层面的开发 ... [关于 Simon Phipps][6]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/open-source-20-years-and-counting
|
||||
|
||||
作者:[Simon Phipps][a]
|
||||
译者:[name1e5s](https://github.com/name1e5s)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/simonphipps
|
||||
[1]:https://opensource.com/article/18/2/open-source-20-years-and-counting?rate=TZxa8jxR6VBcYukor0FDsTH38HxUrr7Mt8QRcn0sC2I
|
||||
[2]:https://meshedinsights.com/2017/12/21/20-years-and-counting/
|
||||
[3]:https://patreon.com/webmink
|
||||
[4]:https://publicsoftware.eu/
|
||||
[5]:https://patreon.com/webmink
|
||||
[6]:https://opensource.com/users/simonphipps
|
||||
[7]:https://opensource.com/users/simonphipps
|
||||
[8]:https://opensource.com/user/12532/feed
|
||||
[9]:https://perens.com/2017/09/26/on-usage-of-the-phrase-open-source/
|
||||
[10]:https://www.brainpickings.org/2013/07/19/richard-feynman-science-morality-poem/
|
||||
[11]:https://opensource.org/node/905
|
||||
[12]:https://opensource.com/users/simonphipps
|
||||
[13]:https://opensource.com/users/simonphipps
|
||||
[14]:https://opensource.com/users/simonphipps
|
265
published/20180206 Simple TensorFlow Examples.md
Normal file
@ -0,0 +1,265 @@
|
||||
TensorFlow 的简单例子
|
||||
======
|
||||
|
||||

|
||||
|
||||
在本文中,我们将看一些 TensorFlow 的例子,并从中感受到在定义<ruby>张量<rt>tensor</rt></ruby>和使用张量做数学计算方面有多么容易,我还会举些别的机器学习相关的例子。
|
||||
|
||||
### TensorFlow 是什么?
|
||||
|
||||
TensorFlow 是 Google 为了解决复杂的数学计算耗时过久的问题而开发的一个库。
|
||||
|
||||
事实上,TensorFlow 能干许多事。比如:
|
||||
|
||||
* 求解复杂数学表达式
|
||||
* 机器学习技术。你往其中输入一组数据样本用以训练,接着给出另一组数据样本基于训练的数据而预测结果。这就是人工智能了!
|
||||
* 支持 GPU 。你可以使用 GPU(图像处理单元)替代 CPU 以更快的运算。TensorFlow 有两个版本: CPU 版本和 GPU 版本。
|
||||
|
||||
开始写例子前,需要了解一些基本知识。
|
||||
|
||||
### 什么是张量?
|
||||
|
||||
<ruby>张量<rt>tensor</rt></ruby>是 TensorFlow 使用的主要的数据块,它类似于变量,TensorFlow 使用它来处理数据。张量拥有维度和类型的属性。
|
||||
|
||||
维度指张量的行和列数,读到后面你就知道了,我们可以定义一维张量、二维张量和三维张量。
|
||||
|
||||
类型指张量元素的数据类型。
|
||||
|
||||
### 定义一维张量
|
||||
|
||||
可以这样来定义一个张量:创建一个 NumPy 数组(LCTT 译注:NumPy 系统是 Python 的一种开源数字扩展,包含一个强大的 N 维数组对象 Array,用来存储和处理大型矩阵)或者一个 [Python 列表][1],然后使用 `tf.convert_to_tensor` 函数将其转化成张量。
|
||||
|
||||
可以像下面这样,使用 NumPy 创建一个数组:
|
||||
|
||||
```
|
||||
import numpy as np

arr = np.array([1, 5.5, 3, 15, 20])
|
||||
```
|
||||
|
||||
运行结果显示了这个数组的维度和形状。
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
arr = np.array([1, 5.5, 3, 15, 20])
|
||||
print(arr)
|
||||
print(arr.ndim)
|
||||
print(arr.shape)
|
||||
print(arr.dtype)
|
||||
```
|
||||
|
||||
它和 Python 列表很像,但是在这里,元素之间没有逗号。
|
||||
|
||||
现在使用 `tf.convert_to_tensor` 函数把这个数组转化为张量。
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
import tensorflow as tf
|
||||
arr = np.array([1, 5.5, 3, 15, 20])
|
||||
tensor = tf.convert_to_tensor(arr,tf.float64)
|
||||
print(tensor)
|
||||
```
|
||||
|
||||
这次的运行结果显示了张量具体的含义,但是不会展示出张量元素。
|
||||
|
||||
要想看到张量元素,需要像下面这样,运行一个会话:
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
import tensorflow as tf
|
||||
arr = np.array([1, 5.5, 3, 15, 20])
|
||||
tensor = tf.convert_to_tensor(arr,tf.float64)
|
||||
sess = tf.Session()
|
||||
print(sess.run(tensor))
|
||||
print(sess.run(tensor[1]))
|
||||
```
|
||||
|
||||
### 定义二维张量
|
||||
|
||||
定义二维张量,其方法和定义一维张量是一样的,但要这样来定义数组:
|
||||
|
||||
```
|
||||
arr = np.array([(1, 5.5, 3, 15, 20),(10, 20, 30, 40, 50), (60, 70, 80, 90, 100)])
|
||||
```
|
||||
|
||||
接着转化为张量:
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
import tensorflow as tf
|
||||
arr = np.array([(1, 5.5, 3, 15, 20),(10, 20, 30, 40, 50), (60, 70, 80, 90, 100)])
|
||||
tensor = tf.convert_to_tensor(arr)
|
||||
sess = tf.Session()
|
||||
print(sess.run(tensor))
|
||||
```
|
||||
|
||||
现在你应该知道怎么定义张量了,那么,怎么在张量之间跑数学运算呢?
|
||||
|
||||
### 在张量上进行数学运算
|
||||
|
||||
假设我们有以下两个数组:
|
||||
|
||||
```
|
||||
arr1 = np.array([(1,2,3),(4,5,6)])
|
||||
arr2 = np.array([(7,8,9),(10,11,12)])
|
||||
```
|
||||
|
||||
利用 TenserFlow ,你能做许多数学运算。现在我们需要对这两个数组求和。
|
||||
|
||||
使用加法函数来求和:
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
import tensorflow as tf
|
||||
arr1 = np.array([(1,2,3),(4,5,6)])
|
||||
arr2 = np.array([(7,8,9),(10,11,12)])
|
||||
arr3 = tf.add(arr1,arr2)
|
||||
sess = tf.Session()
|
||||
tensor = sess.run(arr3)
|
||||
print(tensor)
|
||||
```
|
||||
|
||||
也可以把数组相乘:
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
import tensorflow as tf
|
||||
arr1 = np.array([(1,2,3),(4,5,6)])
|
||||
arr2 = np.array([(7,8,9),(10,11,12)])
|
||||
arr3 = tf.multiply(arr1,arr2)
|
||||
sess = tf.Session()
|
||||
tensor = sess.run(arr3)
|
||||
print(tensor)
|
||||
```
|
||||
|
||||
现在你知道了吧。
|
||||
|
||||
### 三维张量
|
||||
|
||||
我们已经知道了怎么使用一维张量和二维张量,现在,来看一下三维张量吧,不过这次我们不用数字了,而是用一张 RGB 图片。在这张图片上,每一块像素都由 x、y、z 组合表示。
|
||||
|
||||
这些组合形成了图片的宽度、高度以及颜色深度。
|
||||
|
||||
首先使用 matplotlib 库导入一张图片。如果你的系统中没有 matplotlib ,可以 [使用 pip][2]来安装它。
|
||||
|
||||
将图片放在 Python 文件的同一目录下,接着使用 matplotlib 导入图片:
|
||||
|
||||
```
|
||||
import matplotlib.image as img
|
||||
myfile = "likegeeks.png"
|
||||
myimage = img.imread(myfile)
|
||||
print(myimage.ndim)
|
||||
print(myimage.shape)
|
||||
```
|
||||
|
||||
从运行结果中,你应该能看到,这张三维图片的宽为 150 、高为 150 、颜色深度为 3 。
|
||||
|
||||
你还可以查看这张图片:
|
||||
|
||||
```
|
||||
import matplotlib.image as img
|
||||
import matplotlib.pyplot as plot
|
||||
myfile = "likegeeks.png"
|
||||
myimage = img.imread(myfile)
|
||||
plot.imshow(myimage)
|
||||
plot.show()
|
||||
```
|
||||
|
||||
真酷!
|
||||
|
||||
那怎么使用 TensorFlow 处理图片呢?超级容易。
|
||||
|
||||
### 使用 TensorFlow 生成或裁剪图片
|
||||
|
||||
首先,向一个占位符赋值:
|
||||
|
||||
```
|
||||
slice = tf.placeholder("int32",[None,None,3])
|
||||
```
|
||||
|
||||
使用裁剪操作来裁剪图像:
|
||||
|
||||
```
|
||||
cropped = tf.slice(slice,[10,0,0],[16,-1,-1])
|
||||
```
|
||||
|
||||
最后,运行这个会话:
|
||||
|
||||
```
|
||||
result = sess.run(cropped, feed_dict={slice: myimage})
|
||||
```
|
||||
|
||||
然后,你就能看到使用 matplotlib 处理过的图像了。
|
||||
|
||||
这是整段代码:
|
||||
|
||||
```
|
||||
import tensorflow as tf
|
||||
import matplotlib.image as img
|
||||
import matplotlib.pyplot as plot
|
||||
myfile = "likegeeks.png"
|
||||
myimage = img.imread(myfile)
|
||||
slice = tf.placeholder("int32",[None,None,3])
|
||||
cropped = tf.slice(slice,[10,0,0],[16,-1,-1])
|
||||
sess = tf.Session()
|
||||
result = sess.run(cropped, feed_dict={slice: myimage})
|
||||
plot.imshow(result)
|
||||
plot.show()
|
||||
```
|
||||
|
||||
是不是很神奇?
|
||||
|
||||
### 使用 TensorFlow 改变图像
|
||||
|
||||
在本例中,我们会使用 TensorFlow 做一下简单的转换。
|
||||
|
||||
首先,指定待处理的图像,并初始化 TensorFlow 变量值:
|
||||
|
||||
```
|
||||
myfile = "likegeeks.png"
|
||||
myimage = img.imread(myfile)
|
||||
image = tf.Variable(myimage,name='image')
|
||||
vars = tf.global_variables_initializer()
|
||||
```
|
||||
|
||||
然后调用 transpose 函数转换,这个函数用来翻转输入网格的 0 轴和 1 轴。
|
||||
|
||||
```
|
||||
sess = tf.Session()
|
||||
flipped = tf.transpose(image, perm=[1,0,2])
|
||||
sess.run(vars)
|
||||
result=sess.run(flipped)
|
||||
```
|
||||
|
||||
接着你就能看到使用 matplotlib 处理过的图像了。
|
||||
|
||||
```
|
||||
import tensorflow as tf
|
||||
import matplotlib.image as img
|
||||
import matplotlib.pyplot as plot
|
||||
myfile = "likegeeks.png"
|
||||
myimage = img.imread(myfile)
|
||||
image = tf.Variable(myimage,name='image')
|
||||
vars = tf.global_variables_initializer()
|
||||
sess = tf.Session()
|
||||
flipped = tf.transpose(image, perm=[1,0,2])
|
||||
sess.run(vars)
|
||||
result=sess.run(flipped)
|
||||
plot.imshow(result)
|
||||
plot.show()
|
||||
```
|
||||
|
||||
以上例子都向你表明了使用 TensorFlow 有多么容易。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.codementor.io/likegeeks/define-and-use-tensors-using-simple-tensorflow-examples-ggdgwoy4u
|
||||
|
||||
作者:[LikeGeeks][a]
|
||||
译者:[ghsgz](https://github.com/ghsgz)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.codementor.io/likegeeks
|
||||
[1]:https://likegeeks.com/python-list-functions/
|
||||
[2]:https://likegeeks.com/import-create-install-reload-alias-python-modules/#Install-Python-Modules-Using-pip
|
@ -1,115 +0,0 @@
|
||||
Manjaro Gaming: Gaming on Linux Meets Manjaro’s Awesomeness
|
||||
======
|
||||
[![Meet Manjaro Gaming, a Linux distro designed for gamers with the power of Manjaro][1]][1]
|
||||
|
||||
[Gaming on Linux][2]? Yes, that's very much possible and we have a dedicated new Linux distribution aiming for gamers.
|
||||
|
||||
Manjaro Gaming is a Linux distro designed for gamers with the power of Manjaro. Those who have used Manjaro Linux before, know exactly why it is a such a good news for gamers.
|
||||
|
||||
[Manjaro][3] is a Linux distro based on one of the most popular distro - [Arch Linux][4]. Arch Linux is widely known for its bleeding-edge nature offering a lightweight, powerful, extensively customizable and up-to-date experience. And while all those are absolutely great, the main drawback is that Arch Linux embraces the DIY (do it yourself) approach where users need to possess a certain level of technical expertise to get along with it.
|
||||
|
||||
Manjaro strips that requirement and makes Arch accessible to newcomers, and at the same time provides all the advanced and powerful features of Arch for the experienced users as well. In short, Manjaro is a user-friendly Linux distro that works straight out of the box.
|
||||
|
||||
The reasons why Manjaro makes a great and extremely suitable distro for gaming are:
|
||||
|
||||
* Manjaro automatically detects computer's hardware (e.g. Graphics cards)
|
||||
* Automatically installs the necessary drivers and software (e.g. Graphics drivers)
|
||||
* Various codecs for media files playback comes pre-installed with it
|
||||
* Has dedicated repositories that deliver fully tested and stable packages
|
||||
|
||||
|
||||
|
||||
Manjaro Gaming is packed with all of Manjaro's awesomeness with the addition of various tweaks and software packages dedicated to make gaming on Linux smooth and enjoyable.
|
||||
|
||||
![Inside Manjaro Gaming][5]
|
||||
|
||||
#### Tweaks
|
||||
|
||||
Some of the tweaks made on Manjaro Gaming are:
|
||||
|
||||
* Manjaro Gaming uses highly customizable XFCE desktop environment with an overall dark theme.
|
||||
* Sleep mode is disabled for preventing computers from sleeping while playing games with GamePad or watching long cutscenes.
|
||||
|
||||
|
||||
|
||||
#### Softwares
|
||||
|
||||
Maintaining Manjaro's tradition of working straight out of the box, Manjaro Gaming comes bundled with various Open Source software to provide often needed functionalities for gamers. Some of the software included are:
|
||||
|
||||
* [**KdenLIVE**][6]: Videos editing software for editing gaming videos
|
||||
* [**Mumble**][7]: Voice chatting software for gamers
|
||||
* [**OBS Studio**][8]: Software for video recording and live streaming games videos on [Twitch][9]
|
||||
* **[OpenShot][10]** : Powerful video editor for Linux
|
||||
* [**PlayOnLinux**][11]: For running Windows games on Linux with [Wine][12] backend
|
||||
* [**Shutter**][13]: Feature-rich screenshot tool
|
||||
|
||||
|
||||
|
||||
#### Emulators
|
||||
|
||||
Manjaro Gaming comes with a long list of gaming emulators:
|
||||
|
||||
* **[DeSmuME][14]** : Nintendo DS emulator
|
||||
* **[Dolphin Emulator][15]** : GameCube and Wii emulator
|
||||
* [**DOSBox**][16]: DOS Games emulator
|
||||
* **[FCEUX][17]** : Nintendo Entertainment System (NES), Famicom, and Famicom Disk System (FDS) emulator
|
||||
* **Gens/GS** : Sega Mega Drive emulator
|
||||
* **[PCSXR][18]** : PlayStation Emulator
|
||||
* [**PCSX2**][19]: Playstation 2 emulator
|
||||
* [**PPSSPP**][20]: PSP emulator
|
||||
* **[Stella][21]** : Atari 2600 VCS emulator
|
||||
* [**VBA-M**][22]: Gameboy and GameboyAdvance emulator
|
||||
* [**Yabause**][23]: Sega Saturn Emulator
|
||||
* **[ZSNES][24]** : Super Nintendo emulator
|
||||
|
||||
|
||||
|
||||
#### Others
|
||||
|
||||
There are some terminal add-ons - Color, ILoveCandy and Screenfetch. [Conky Manager][25] with Retro Conky theme is also included.
|
||||
|
||||
**Point to be noted: Not all the features mentioned are included in the current release of Manjaro Gaming (which is 16.03). Some of them are scheduled to be included in the next release - Manjaro Gaming 16.06.**
|
||||
|
||||
### Downloads
|
||||
|
||||
Manjaro Gaming 16.06 is going to be the first proper release of Manjaro Gaming. But if you are interested enough to try it now, Manjaro Gaming 16.03 is available for downloading on the Sourceforge [project page][26]. Go there and grab the ISO.
|
||||
|
||||
How do you feel about this new Gaming Linux distro? Are you thinking of giving it a try? Let us know!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/manjaro-gaming-linux/
|
||||
|
||||
作者:[Munif Tanjim][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/munif/
|
||||
[1]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming.jpg
|
||||
[2]:https://itsfoss.com/linux-gaming-guide/
|
||||
[3]:https://manjaro.github.io/
|
||||
[4]:https://www.archlinux.org/
|
||||
[5]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming-Inside-1024x576.png
|
||||
[6]:https://kdenlive.org/
|
||||
[7]:https://www.mumble.info
|
||||
[8]:https://obsproject.com/
|
||||
[9]:https://www.twitch.tv/
|
||||
[10]:http://www.openshot.org/
|
||||
[11]:https://www.playonlinux.com
|
||||
[12]:https://www.winehq.org/
|
||||
[13]:http://shutter-project.org/
|
||||
[14]:http://desmume.org/
|
||||
[15]:https://dolphin-emu.org
|
||||
[16]:https://www.dosbox.com/
|
||||
[17]:http://www.fceux.com/
|
||||
[18]:https://pcsxr.codeplex.com
|
||||
[19]:http://pcsx2.net/
|
||||
[20]:http://www.ppsspp.org/
|
||||
[21]:http://stella.sourceforge.net/
|
||||
[22]:http://vba-m.com/
|
||||
[23]:https://yabause.org/
|
||||
[24]:http://www.zsnes.com/
|
||||
[25]:https://itsfoss.com/conky-gui-ubuntu-1304/
|
||||
[26]:https://sourceforge.net/projects/mgame/
|
56
sources/talk/20171128 Your API is missing Swagger.md
Normal file
@ -0,0 +1,56 @@
Your API is missing Swagger
======



We have all struggled through thrown-together, convoluted API documentation. It is frustrating, and in the worst case, can lead to bad requests. The process of understanding an API is something most developers go through on a regular basis, so is it any wonder that the majority of APIs have horrific documentation?

[Swagger][1] is the solution to this problem. Swagger came out in 2011 and is an open source software framework with many tools that help developers design, build, document, and consume RESTful APIs. Designing an API with Swagger, or documenting it with Swagger afterward, helps everyone consume your API seamlessly. One of the amazing features many people do not know about is that you can actually **generate** a client from it! That's right: if a service you're consuming has Swagger documentation, you can generate a client to consume it!

All major languages support Swagger and can connect it to your API. Depending on the language you're writing your API in, you can have the Swagger documentation generated from the actual code. Here are some of the standout Swagger libraries I've seen recently.

### Golang

Golang has a couple of great tools for integrating Swagger into your API. The first is [go-swagger][2], a tool that lets you generate the scaffolding for an API from a Swagger file. This is a fundamentally different way of thinking about APIs. Instead of building the endpoints and thinking about new ones on the fly, go-swagger gets you to think through your API before you write a single line of code. This can help you visualize what you want the API to do first. Another tool Golang has is called [Goa][3]. A quote from their website sums up what Goa is:

> goa provides a novel approach for developing microservices that saves time when working on independent services and helps with keeping the overall system consistent. goa uses code generation to handle both the boilerplate and ancillary artifacts such as documentation, client modules, and client tools.

They take designing the API before implementing it to a new level. Goa has a DSL to help you programmatically describe your entire API, from endpoints to payloads to responses. From this DSL, Goa generates a Swagger file for anyone who consumes your API, and it enforces that your endpoints output the correct data, which keeps your API and documentation in sync. This is counter-intuitive when you start, but after actually implementing an API with Goa, you will not know how you ever did it before.

### Python

[Flask][4] has a great extension for building an API with Swagger called [Flask-RESTPlus][5].

> If you are familiar with Flask, Flask-RESTPlus should be easy to pick up. It provides a coherent collection of decorators and tools to describe your API and expose its documentation properly using Swagger.

It uses Python decorators to generate Swagger documentation and can be used to enforce endpoint output similar to Goa. It can be very powerful and makes generating Swagger docs from an API dead simple.
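
To make that concrete, here is a minimal sketch of a documented endpoint, assuming Flask and Flask-RESTPlus are installed. The `Todo` model, route, and field names are hypothetical, invented purely for illustration; the decorators are the Flask-RESTPlus ones the quote above refers to.

```
# A minimal sketch of a self-documenting endpoint with Flask-RESTPlus.
# The model and route are hypothetical; the decorators drive the docs.
from flask import Flask
from flask_restplus import Api, Resource, fields

app = Flask(__name__)
api = Api(app, version="1.0", title="Todo API",
          description="A tiny API documented with Swagger")

# Declaring the payload once gives you validation and docs together.
todo = api.model("Todo", {
    "id": fields.Integer(readonly=True, description="Unique identifier"),
    "task": fields.String(required=True, description="Task details"),
})

@api.route("/todos/<int:todo_id>")
class TodoItem(Resource):
    @api.marshal_with(todo)  # response is shaped (and documented) by the model
    def get(self, todo_id):
        """Fetch a single todo item"""
        return {"id": todo_id, "task": "write the docs"}

if __name__ == "__main__":
    app.run()
```

By default, Flask-RESTPlus also serves an interactive Swagger UI for the API, so the documentation is generated from the same code that handles requests rather than maintained by hand.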
### NodeJS

Finally, NodeJS has a powerful tool for working with Swagger called [swagger-js-codegen][6]. It can generate both servers and clients from a Swagger file.

> This package generates a nodejs, reactjs or angularjs class from a swagger specification file. The code is generated using mustache templates and is quality checked by jshint and beautified by js-beautify.

It is not quite as easy to use as Goa and Flask-RESTPlus, but if Node is your thing, this will do the job. It shines when it comes to generating frontend code to interface with your API, which is perfect if you're developing a web app to go along with the API.

### Conclusion

Swagger is a simple yet powerful representation of your RESTful API. When used properly, it can help flesh out your API design and make it easier to consume. Harnessing its full power can save you time by forming and visualizing your API before you write a line of code, and then generating the boilerplate surrounding the core logic. And with tools like [Goa][3], [Flask-RESTPlus][5], and [swagger-js-codegen][6] making the whole experience of architecting and implementing an API painless, there is no excuse not to have Swagger.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/your-api-is-missing-swagger/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://swagger.io
[2]:https://github.com/go-swagger/go-swagger
[3]:https://goa.design/
[4]:http://flask.pocoo.org/
[5]:https://github.com/noirbizarre/flask-restplus
[6]:https://github.com/wcandillon/swagger-js-codegen
@ -0,0 +1,54 @@
5 Podcasts Every Dev Should Listen to
======



Being a developer is a tough job; the landscape is constantly changing, and new frameworks and best practices come out every month. Having a great go-to list of podcasts keeping you up to date on the industry can make a huge difference. I've done some of the hard work and created a list of the top 5 podcasts I personally listen to.

### This Developer's Life

Unlike many developer-focused podcasts, there is no talk of code or explanations of software architecture in [This Developer's Life][1]. There are just relatable stories from other developers. This Developer's Life dives into the issues developers face in their daily lives, from a developer's point of view. [Rob Conery][2] and [Scott Hanselman][3] host the show, and it focuses on all aspects of a developer's life. For example, what it feels like to get fired. To hit a home run. To be competitive. It is a very well made podcast and isn't just for developers; it can also be enjoyed by those that love and live with them.

### Developer Tea

Don’t have a lot of time? [Developer Tea][4] is "A podcast for developers designed to fit inside your tea break." The podcast exists to help driven developers connect with their purpose and excel at their work so that they can make an impact. Hosted by [Jonathan Cutrell][5], the director of technology at Whiteboard, Developer Tea breaks down the news and gives useful insights into all aspects of a developer's life in and out of work. Cutrell answers listener questions mixed in with news, interviews, and career advice during his show, which releases multiple episodes every week.

### Software Engineering Daily

[Software Engineering Daily][6] is a daily podcast which focuses on heavily technical topics like software development and system architecture. It covers a range of topics, from load balancing at scale and serverless event-driven architecture to augmented reality. Hosted by [Jeff Meyerson][7], this podcast is great for developers who have a passion for learning about complicated software topics to expand their knowledge base.

### Talking Code

The [Talking Code][8] podcast is from 2015 and contains 24 episodes of "short expert interviews that help you decode what developers are saying." The hosts, [Josh Smith][9] and [Venkat Dinavahi][10], talk about diverse web development topics, from how to become an effective junior developer and how to go from junior to senior developer, to building modern web applications and making the most out of your analytics. This podcast is perfect for those getting into web development and those looking to level up their web development skills.

### The Laracasts Snippet

[The Laracasts Snippet][11] is a bite-size podcast where each episode offers a single thought on some aspect of web development. The host, [Jeffrey Way][12], is a prominent figure in the Laravel community and runs the site [Laracasts][12]. His insights are broad and are useful for developers of all backgrounds.

### Conclusion

Podcasts are on the rise, and more and more developers are listening to them. With such a rapidly expanding list of new podcasts coming out, it can be tough to pick the top 5, but if you listen to these podcasts, you will have a competitive edge as a developer.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/podcasts-every-developer-should-listen-too/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://thisdeveloperslife.com/
[2]:https://rob.conery.io/
[3]:https://www.hanselman.com/
[4]:https://developertea.com/
[5]:http://jonathancutrell.com/
[6]:https://softwareengineeringdaily.com/
[7]:http://jeffmeyerson.com/
[8]:http://talkingcode.com/
[9]:https://twitter.com/joshsmith
[10]:https://twitter.com/venkatdinavahi
[11]:https://laracasts.simplecast.fm/
[12]:https://laracasts.com
@ -0,0 +1,48 @@
Blueprint for Simple Scalable Microservices
======



When you're building a microservice, what do you value? A fully managed and scalable system? It's hard to know where to start with AWS; there are so many options for hosting code: EC2, ECS, Elastic Beanstalk, Lambda. Everyone has patterns for deploying microservices. Using the pattern below will provide a great structure for a scalable microservice architecture.

### Elastic Beanstalk

The first and most important piece is [Elastic Beanstalk][1]. It is a great, simple way to deploy auto-scaling microservices. All you need to do is upload your code to Elastic Beanstalk via their command line tool or management console. Once it's in Elastic Beanstalk, the deployment, capacity provisioning, load balancing, and auto-scaling are handled by AWS.

### S3

Another important service is [S3][2]; it is object storage built to store and retrieve data. S3 has lots of uses, from storing images to backups. Particular use cases include storing sensitive files, such as private keys and environment variable files, which will be accessed and used by multiple instances or services. S3 also works well for less sensitive, publicly accessible files like configuration files, Dockerfiles, and images.

### Kinesis

[Kinesis][3] is a tool that allows microservices to communicate with each other and with other services like Lambda, which we will discuss further down. Kinesis does this through real-time, persistent data streaming, which enables microservices to emit events. Data can be retained for up to 7 days for replay and batch processing.
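
To make the event-emitting side concrete, here is a minimal sketch of a producer using boto3, the AWS SDK for Python. The stream name `user-events` and the payload fields are hypothetical placeholders; `put_record` and its parameters are the standard Kinesis API.

```
# A sketch of a microservice emitting an event to Kinesis,
# assuming a stream named "user-events" already exists.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def emit_event(event_type, payload):
    """Publish one event; records sharing a partition key land on
    the same shard, so their relative order is preserved."""
    kinesis.put_record(
        StreamName="user-events",        # hypothetical stream name
        Data=json.dumps({"type": event_type,
                         "payload": payload}).encode("utf-8"),
        PartitionKey=payload["user_id"],  # shard routing key
    )

emit_event("user.signup", {"user_id": "42", "plan": "free"})
```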
### RDS

[Amazon RDS][4] is a great, fully managed relational database service hosted by AWS. Using RDS instead of your own database server is beneficial because AWS manages everything. It makes it easy to set up, operate, and scale a relational database.
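
As a sketch of how little setup that involves, the snippet below provisions a small Postgres instance with boto3. Every name, size, and credential in it is an illustrative placeholder rather than a recommendation.

```
# A sketch of provisioning a small managed Postgres instance via boto3.
# All names, sizes, and credentials below are illustrative placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",  # hypothetical instance name
    Engine="postgres",
    DBInstanceClass="db.t2.micro",     # smallest class, fine for a demo
    AllocatedStorage=20,               # GiB
    MasterUsername="service_user",
    MasterUserPassword="change-me",    # pull from a secrets store in real use
)
```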
### Lambda

Finally, [AWS Lambda][5] lets you run code without provisioning or managing servers. Lambda has many uses; you can even build whole APIs with it. Some great uses for it in a microservice architecture are cron jobs and image manipulation. Cron jobs can be scheduled with [CloudWatch][6].
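
To close the loop with the Kinesis section above, here is a sketch of a Python Lambda handler subscribed to that stream. The processing logic is hypothetical; the base64-encoded record layout is the standard shape Kinesis hands to Lambda.

```
# A sketch of a Lambda function subscribed to a Kinesis stream.
# Kinesis delivers batches of base64-encoded records to the handler.
import base64
import json

def handler(event, context):
    """Entry point Lambda invokes for each batch of Kinesis records."""
    for record in event["Records"]:
        # The payload our producer wrote with put_record(), decoded.
        raw = base64.b64decode(record["kinesis"]["data"])
        evt = json.loads(raw)
        # Hypothetical processing: just log the event.
        print("got %s for user %s" % (evt["type"], evt["payload"]["user_id"]))
    return {"processed": len(event["Records"])}
```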
### Conclusion

With these AWS products, you can create fully scalable, stateless microservices that communicate with each other: Elastic Beanstalk to run the microservices, S3 to store files, Kinesis to emit events, and Lambdas to subscribe to them and run other tasks. Finally, RDS makes managing and scaling relational databases easy.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:https://aws.amazon.com/elasticbeanstalk/?nc2=h_m1
[2]:https://aws.amazon.com/s3/?nc2=h_m1
[3]:https://aws.amazon.com/kinesis/?nc2=h_m1
[4]:https://aws.amazon.com/rds/?nc2=h_m1
[5]:https://aws.amazon.com/lambda/?nc2=h_m1
[6]:https://aws.amazon.com/cloudwatch/?nc2=h_m1
@ -0,0 +1,60 @@
5 Things to Look for When You Contract Out the Backend of Your App
======



For many app developers, it can be hard to know what to do when it comes to the backend of your app. There are a few options: use Firebase, throw together a quick Node API, or contract it out. I am going to write a blog post soon weighing the pros and cons of each of these options, but for now, let's assume you want the API done professionally.

You are going to want to look for specific things before you give the contract to some freelancer or agency.

### 1. Documentation

Documentation is one of the most important pieces here. The API could be amazing, but if it is impossible to understand which endpoints are available, what parameters they take, and what they respond with, you won't have much luck integrating the API into your app. Surprisingly, this is one of the pieces most contractors get wrong.

So what are you looking for? First, make sure they understand the importance of documentation; this alone makes a huge difference. Second, they should preferably be using an open standard like [Swagger][1] for documentation. If they do both of these things, you should have documentation covered.

### 2. Communication

You know the saying "communication is key"; well, that applies to API development too. This is harder to gauge, but sometimes a developer will get the contract and then disappear. This doesn't mean they aren't working on it, but it means there isn't a good feedback loop to sort out problems before they get too large.

A good way to get around this is to have a weekly (or however often you want) meeting to go over progress and make sure the API is shaping up the way you want, even if the meeting is just going over the endpoints and confirming they are returning the data you need.

### 3. Error Handling

Error handling is crucial. This basically means that if there is an error on the backend, whether it's an invalid request or an unexpected internal server error, it will be handled properly and a useful response given to the client. It's important that errors are handled gracefully, and this often gets overlooked in the API development process.

This is a tricky thing to look out for, but by letting them know you expect useful error messages, and maybe putting it into the contract, you should get the error messages you need. This may seem like a small thing, but being able to present the user of your app with the actual thing they've done wrong, like "Passwords must be between 6-64 characters", improves the UX immensely.
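
To show what you might write into the contract, here is a minimal sketch, in Flask, of the kind of structured error response described above. The route, the validation rule, and the JSON shape are hypothetical, chosen only to illustrate a human-readable message travelling back to the client.

```
# A sketch of graceful error handling: validation failures return a
# structured, human-readable message instead of a bare 500.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/signup", methods=["POST"])
def signup():
    body = request.get_json(silent=True) or {}
    password = body.get("password", "")
    if not 6 <= len(password) <= 64:
        # A message the app can show to the user verbatim.
        return jsonify(error="Passwords must be between 6-64 characters"), 400
    return jsonify(status="created"), 201

@app.errorhandler(500)
def internal_error(exc):
    # Never leak a stack trace; return something the client can act on.
    return jsonify(error="Unexpected server error, please try again"), 500
```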
### 4. Database

This section may be a bit controversial, but I think that 90% of apps really just need a SQL database. I know NoSQL is sexy, but you get so many extra benefits from using SQL that I feel it's what you should use for the backend of your app. Of course, there are cases where NoSQL is the better option, but broadly speaking you should probably just use a SQL database.

SQL adds so much flexibility by letting you add, modify, and remove columns. The option to aggregate data with a simple query is also immensely useful. And finally, the ability to do transactions and be sure all your data is valid will help you sleep better at night.

The reason I say all the above is that I would recommend looking for someone who is willing to build your API with a SQL database.

### 5. Infrastructure

The last major thing to look for when contracting out your backend is infrastructure. This is essential because you want your app to scale. If 10,000 users join your app in one day for some reason, you want your backend to handle that. Using services like [AWS Elastic Beanstalk][2] or [Heroku][3], you can create APIs which will scale up automatically with load. That means if your app takes off overnight, your API will scale with the load and not buckle under it.

Making sure your contractor is building with scalability in mind is key. I wrote a [post on scalable APIs][4] if you're interested in learning more about a good AWS stack.

### Conclusion

It is important to get a quality backend when you contract it out. You're paying for a professional to design and build the backend of your app, so if they're lacking in any of the above points, it will reduce the chance of success not just for the backend, but for your app. If you make a checklist with these points and go over them with contractors, you should be able to weed out the under-qualified applicants and focus your attention on the contractors who know what they're doing.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/things-to-look-for-when-you-contract-out-the-backend-your-app/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:https://swagger.io/
[2]:https://aws.amazon.com/elasticbeanstalk/
[3]:https://www.heroku.com/
[4]:https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/
88
sources/talk/20171225 Where to Get Your App Backend Built.md
Normal file
@ -0,0 +1,88 @@
Where to Get Your App Backend Built
======



Building a great app takes lots of work, from designing the views to adding the right transitions and images. One thing which is often overlooked is the backend, connecting your app to the outside world. A backend which is not up to the same quality as your app can wreck even the most perfect user interface. That is why choosing the right option for your backend budget and needs is essential.

There are three main choices when you're getting it built. First, you have agencies: companies with salespeople, project managers, and developers. Second, you have market rate freelancers: developers who charge market rate for their work and are often in North America or western Europe. Finally, there are budget freelancers: they are inexpensive and usually in parts of Asia and South America.

I am going to break down the pros and cons of each of these options.

### Agency

Agencies are often a safe bet. If you're looking for a more hands-off approach, they are usually the way to go: they have project managers who will manage your project and communicate your requirements to developers. This takes some of the work off your plate and frees you up to work on your app. Agencies also often have a team of developers at their disposal, so if the developer working on your project takes a vacation, they can swap another developer in without much hassle.

With all these upsides there is a downside: price. Having a sales team, a project management team, and a developer team isn't cheap. Agencies often cost quite a bit of money compared to freelancers.

So in summary:

#### Pros

  * Hands off
  * No single point of failure

#### Cons

  * Very expensive

### Market Rate Freelancer

Another option is market rate freelancers: highly skilled developers who often have worked in agencies but decided to go their own way and find clients themselves. They generally produce high-quality work at a lower cost than agencies.

The downside to freelancers is that, since they're only one person, they might not be available right away to start your work. For high-demand freelancers especially, you may have to wait a few weeks or months before they start development. They are also hard to replace: if they get sick or go on vacation, it can often be hard to find someone to continue the work, unless you get a good recommendation from the freelancer.

#### Pros

  * Cost effective
  * Similar quality to an agency
  * Great for short term

#### Cons

  * May not be available
  * Hard to replace

### Budget Freelancer

The last option I'm going over is budget freelancers, who are often found on job boards such as Fiverr and Upwork. They work very cheaply, but that often comes at the cost of quality and communication. Often you will not get what you're looking for, or it will be very brittle code which buckles under strain.

If you're on a very tight budget, it may be worth rolling the dice on a highly rated budget freelancer, although you must be okay with the risk of potentially throwing the code away.

#### Pros

  * Very cheap

#### Cons

  * Often low quality
  * May not be what you asked for

### Conclusion

Getting the right backend for your app is important. It is often a good idea to stick with agencies or market rate freelancers due to the predictability and higher quality code, but if you're on a very tight budget, rolling the dice with budget freelancers could pay off. At the end of the day, it doesn't matter where the code is from, as long as it works and does what it's supposed to do.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/where-to-get-your-app-backend-built/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
@ -1,84 +0,0 @@
Why isn't open source hot among computer science students?
======



Image by: opensource.com

The technical savvy and inventive energy of young programmers is alive and well.

This was clear from the diligent work that I witnessed while participating in this year's [PennApps][1], the nation's largest college hackathon. Over the course of 48 hours, my high school- and college-age peers created projects ranging from a [blink-based communication device for shut-in patients][2] to a [burrito maker with IoT connectivity][3]. The spirit of open source was tangible throughout the event, as diverse groups bonded over a mutual desire to build, the free flow of ideas and tech know-how, fearless experimentation and rapid prototyping, and an overwhelming eagerness to participate.

Why then, I wondered, wasn't open source a hot topic among my tech geek peers?

To learn more about what college students think when they hear "open source," I surveyed several college students who are members of the same professional computer science organization I belong to. All members of this community must apply during high school or college and are selected based on their computer science-specific achievements and leadership--whether that means leading a school robotics team, founding a nonprofit to bring coding into insufficiently funded classrooms, or some other worthy endeavor. Given these individuals' accomplishments in computer science, I thought that their perspectives would help in understanding what young programmers find appealing (or unappealing) about open source projects.

The online survey I prepared and disseminated included the following questions:

  * Do you like to code personal projects? Have you ever contributed to an open source project?
  * Do you feel like it's more beneficial to you to start your own programming projects, or to contribute to existing open source efforts?
  * How would you compare the prestige associated with coding for an organization that produces open source software versus proprietary software?

Though the overwhelming majority said that they at least occasionally enjoyed coding personal projects in their spare time, most had never contributed to an open source project. When I further explored this trend, a few common preconceptions about open source projects and organizations came to light. To persuade my peers that open source projects are worth their time, and to provide educators and open source organizations insight on their students, I'll address the three top preconceptions.

### Preconception #1: Creating personal projects from scratch is better experience than contributing to an existing open source project.

Of the college-age programmers I surveyed, 24 out of 26 asserted that starting their own personal projects felt potentially more beneficial than building on open source ones.

As a bright-eyed freshman in computer science, I believed this too. I had often heard from older peers that personal projects would make me more appealing to intern recruiters. No one ever mentioned the possibility of contributing to open source projects--so in my mind, it wasn't relevant.

I now realize that open source projects offer powerful preparation for the real world. Contributing to open source projects cultivates [an awareness of how tools and languages piece together][4] in a way that even individual projects cannot. Moreover, open source is an exercise in coordination and collaboration, building students' [professional skills in communication, teamwork, and problem-solving][5].

### Preconception #2: My coding skills just won't cut it.

A few respondents said they were intimidated by open source projects, unsure of where to contribute, or fearful of stunting project progress. Unfortunately, feelings of inferiority, which too often especially affect female programmers, do not stop at the open source community. In fact, "Imposter Syndrome" may even be magnified, as [open source advocates typically reject bureaucracy][6]--and as difficult as bureaucracy makes internal mobility, it helps newcomers know their place in an organization.

I remember how intimidated I felt by contribution guidelines while looking through open source projects on GitHub for the first time. However, guidelines are not intended to encourage exclusivity, but to provide a [guiding hand][7]. To that end, I think of guidelines as a way of establishing expectations without relying on a hierarchical structure.

Several open source projects actively carve a place for new project contributors. [TEAMMATES][8], an educational feedback management tool, is one of the many open source projects that mark issues "up for grabs" for first-timers. In the comments, programmers of all skill levels iron out implementation details, demonstrating that open source is a place for eager new programmers and seasoned software veterans alike. For young programmers who are still hesitant, [a few open source projects][9] have been thoughtful enough to adopt an [Imposter Syndrome disclaimer][10].

### Preconception #3: Proprietary software firms do better work than open source software organizations.

Only five of the 26 respondents I surveyed thought that open and proprietary software organizations were considered equal in prestige. This is likely due to the misperception that "open" means "profitless," and thus low-quality (see [Doesn't 'open source' just mean something is free of charge?][11]).

However, open source software and profitable software are not mutually exclusive. In fact, small and large businesses alike often pay for free open source software to receive technical support services. As [Red Hat CEO Jim Whitehurst explains][12], "We have engineering teams that track every single change--a bug fix, security enhancement, or whatever--made to Linux, and ensure our customers' mission-critical systems remain up-to-date and stable."

Moreover, the nature of openness facilitates rather than hinders quality by enabling more people to examine source code. [Igor Faletski, CEO of Mobify][13], writes that Mobify's team of "25 software developers and quality assurance professionals" is "no match for all the software developers in the world who might make use of [Mobify's open source] platform. Each of them is a potential tester of, or contributor to, the project."

Another problem may be that young programmers are not aware of the open source software they interact with every day. I used many tools--including MySQL, Eclipse, Atom, Audacity, and WordPress--for months or even years without realizing they were open source. College students, who often rush to download syllabus-specified software to complete class assignments, may be unaware of which software is open source. This makes open source seem more foreign than it is.

So students, don't knock open source before you try it. Check out this [list of beginner-friendly projects][14] and [these six starting points][15] to begin your open source journey.

Educators, remind your students of the open source community's history of successful innovation, and lead them toward open source projects outside the classroom. You will help develop sharper, better-prepared, and more confident students.

### About the author

Susie Choi - Susie is an undergraduate student studying computer science at Duke University. She is interested in the implications of technological innovation and open source principles for issues relating to education and socioeconomic inequality.

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/12/students-and-open-source-3-common-preconceptions

作者:[Susie Choi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/susiechoi
[1]:http://pennapps.com/
[2]:https://devpost.com/software/blink-9o2iln
[3]:https://devpost.com/software/daburrito
[4]:https://hackernoon.com/benefits-of-contributing-to-open-source-2c97b6f529e9
[5]:https://opensource.com/education/16/8/5-reasons-student-involvement-open-source
[6]:https://opensource.com/open-organization/17/7/open-thinking-curb-bureaucracy
[7]:https://opensource.com/life/16/3/contributor-guidelines-template-and-tips
[8]:https://github.com/TEAMMATES/teammates/issues?q=is%3Aissue+is%3Aopen+label%3Ad.FirstTimers
[9]:https://github.com/adriennefriend/imposter-syndrome-disclaimer/blob/master/examples.md
[10]:https://github.com/adriennefriend/imposter-syndrome-disclaimer
[11]:https://opensource.com/resources/what-open-source
[12]:https://hbr.org/2013/01/yes-you-can-make-money-with-op
[13]:https://hbr.org/2012/10/open-sourcing-may-be-worth
[14]:https://github.com/MunGell/awesome-for-beginners
[15]:https://opensource.com/life/16/1/6-beginner-open-source
@ -1,3 +1,4 @@
XLCYun 翻译中
How to get into DevOps
======

@ -1,73 +0,0 @@
How to price cryptocurrencies
======



Predicting cryptocurrency prices is a fool's game, yet this fool is about to try. The drivers of a single cryptocurrency's value are currently too varied and vague to make assessments based on any one point. News is trending up on Bitcoin? Maybe there's a hack or an API failure that is driving it down at the same time. Ethereum looking sluggish? Who knows: Maybe someone will build a new smarter DAO tomorrow that will draw in the big spenders.

So how do you invest? Or, more correctly, on which currency should you bet?

The key to understanding what to buy or sell and when to hold is to use the tools associated with assessing the value of open-source projects. This has been said again and again, but to understand the current crypto boom you have to go back to the quiet rise of Linux.

Linux appeared on most radars during the dot-com bubble. At that time, if you wanted to set up a web server, you had to physically ship a Windows server or Sun SPARCstation to a server farm where it would do the hard work of delivering Pets.com HTML. At the same time, Linux, like a freight train running on a parallel path to Microsoft and Sun, would consistently allow developers to build one-off projects very quickly and easily using an OS and toolset that were improving daily. In comparison, then, the massive hardware and software expenditures associated with the status quo solution providers were deeply inefficient, and very quickly all of the tech giants that had made their money on software either moved to making their money on services or, like Sun, folded.

From the acorn of Linux an open-source forest bloomed. But there was one clear problem: You couldn't make money from open source. You could consult and you could sell products that used open-source components, but early builders built primarily for the betterment of humanity and not the betterment of their bank accounts.

Cryptocurrencies have followed the Linux model almost exactly, but cryptocurrencies have cash value. Therefore, when you're working on a crypto project you're not doing it for the common good or for the joy of writing free software. You're writing it with the expectation of a big payout. This, therefore, clouds the value judgements of many programmers. The same folks that brought you Python, PHP, Django and Node.js are back… and now they're programming money.

### Check the codebase

This year will be the year of great reckoning in the token sale and cryptocurrency space. While many companies have been able to get away with poor or unusable codebases, I doubt developers will let future companies get away with so much smoke and mirrors. It's safe to say we can [expect posts like this one detailing Storj's anemic codebase to become the norm][1] and, more importantly, that these commentaries will sink many so-called ICOs. Though massive, the money trough that is flowing from ICO to ICO is finite, and at some point there will be greater scrutiny paid to incomplete work.

What does this mean? It means to understand cryptocurrency you have to treat it like a startup. Does it have a good team? Does it have a good product? Does the product work? Would someone want to use it? It's far too early to assess the value of cryptocurrency as a whole, but if we assume that tokens or coins will become the way computers pay each other in the future, this lets us hand-wave away a lot of doubt. After all, not many people knew in 2000 that Apache was going to beat nearly every other web server in a crowded market or that Ubuntu instances would be so common that you'd spin them up and destroy them in an instant.

The key to understanding cryptocurrency pricing is to ignore the froth, hype and FUD and instead focus on true utility. Do you think that some day your phone will pay another phone for, say, an in-game perk? Do you expect the credit card system to fold in the face of an Internet of Value? Do you expect that one day you'll move through life splashing out small bits of value in order to make yourself more comfortable? Then by all means, buy and hold or speculate on things that you think will make your life better. If you don't expect the Internet of Value to improve your life the way the TCP/IP internet did (or you do not understand enough to hold an opinion), then you're probably not cut out for this. NASDAQ is always open, at least during banker's hours.

Still with us? Good, here are my predictions.

### The rundown

Here is my assessment of what you should look at when considering an "investment" in cryptocurrencies. There are a number of caveats we must address before we begin:

  * Crypto is not a monetary investment in a real currency, but an investment in a pie-in-the-sky technofuture. That's right: When you buy crypto you're basically assuming that we'll all be on the deck of the Starship Enterprise exchanging them like Galactic Credits one day. This is the only inevitable future for crypto bulls. While you can force crypto into various economic models and hope for the best, the entire platform is techno-utopianist and assumes all sorts of exciting and unlikely things will come to pass in the next few years. If you have spare cash lying around and you like Star Wars, then you're golden. If you bought bitcoin on a credit card because your cousin told you to, then you're probably going to have a bad time.
  * Don't trust anyone. There is no guarantee and, in addition to offering the disclaimer that this is not investment advice and that this is in no way an endorsement of any particular cryptocurrency or even the concept in general, we must understand that everything I write here could be wrong. In fact, everything ever written about crypto could be wrong, and anyone who is trying to sell you a token with exciting upside is almost certainly wrong. In short, everyone is wrong and everyone is out to get you, so be very, very careful.
  * You might as well hold. If you bought when BTC was $18,000 you'd best just hold on. Right now you're in Pascal's Wager territory. Yes, maybe you're angry at crypto for screwing you, but maybe you were just stupid and you got in too high and now you might as well keep believing because nothing is certain, or you can admit that you were a bit overeager and now you're being punished for it but that there is some sort of bitcoin god out there watching over you. Ultimately you need to take a deep breath, agree that all of this is pretty freaking weird, and hold on.

Now on with the assessments.

**Bitcoin** - Expect a rise over the next year that will surpass the current low. Also expect [bumps as the SEC and other federal agencies][2] around the world begin regulating the buying and selling of cryptocurrencies in very real ways. Now that banks are in on the joke they're going to want to reduce risk. Therefore, bitcoin will become digital gold: a staid, boring, and volatility-proof safe haven for speculators. Although all but unusable as a real currency, it's good enough for what we need it to do, and we can also expect quantum computing hardware to change the face of the oldest and most familiar cryptocurrency.

**Ethereum** - Ethereum could sustain another few thousand dollars on its price as long as Vitalik Buterin, the creator, doesn't throw too much cold water on it. Like a remorseful Victor Frankenstein, Buterin tends to make amazing things and then denigrate them online, a sort of self-flagellation that is actually quite useful in a space full of froth and outright lies. Ethereum is the closest we've come to a useful cryptocurrency, but it is still the Raspberry Pi of distributed computing -- it's a useful and clever hack that makes it easy to experiment, but no one has quite replaced the old systems with new distributed data stores or applications. In short, it's a really exciting technology, but nobody knows what to do with it.

![][3]

Where will the price go? It will hover around $1,000 and possibly go as high as $1,500 this year, but this is a principled tech project and not a store of value.

**Altcoins** - One of the signs of a bubble is when average people make statements like "I couldn't afford a Bitcoin so I bought a Litecoin." This is exactly what I've heard multiple times from multiple people and it's akin to saying "I couldn't buy hamburger so I bought a pound of sawdust instead. I think the kids will eat it, right?" Play at your own risk. Altcoins are a very useful low-risk play for many, and if you create an algorithm -- say, to sell when the asset hits a certain level -- then you could make a nice profit. Further, most altcoins will not disappear overnight. I would honestly recommend playing with Ethereum instead of altcoins, but if you're dead set on it, then by all means, enjoy.

**Tokens** - This is where cryptocurrency gets interesting. Tokens require research, education and a deep understanding of technology to truly assess. Many of the tokens I've seen are true crapshoots and are used primarily as pump and dump vehicles. I won't name names, but the rule of thumb is that if you're buying a token on an open market then you've probably already missed out. The value of the token sale as of January 2018 is to allow crypto whales to turn a few-cent-per-token investment into a 100X return. While many founders talk about the magic of their product and the power of their team, token sales are quite simply vehicles to turn 4 cents into 20 cents into a dollar. Multiply that by millions of tokens and you see the draw.

The answer is simple: find a few projects you like and lurk in their message boards. Assess if the team is competent and figure out how to get in very, very early. Also expect your money to disappear into a rat hole in a few months or years. There are no sure things, and tokens are far too bleeding-edge a technology to assess sanely.

You are reading this post because you are looking to maintain confirmation bias in a confusing space. That's fine. I've spoken to enough crypto-heads to know that nobody knows anything right now and that collusion and dirty dealings are the rule of the day. Therefore, it's up to folks like us to slowly but surely begin to understand just what's going on and, perhaps, profit from it. At the very least we'll all get a new Linux of Value when we're all done.

--------------------------------------------------------------------------------

via: https://techcrunch.com/2018/01/22/how-to-price-cryptocurrencies/

作者:[John Biggs][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://techcrunch.com/author/john-biggs/
[1]:https://shitcoin.com/storj-not-a-dropbox-killer-1a9f27983d70
[2]:http://www.businessinsider.com/bitcoin-price-cryptocurrency-warning-from-sec-cftc-2018-1
[3]:https://tctechcrunch2011.files.wordpress.com/2018/01/vitalik-twitter-1312.png?w=525&h=615
[4]:https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[5]:https://unsplash.com/search/photos/cash?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
@ -1,3 +1,4 @@
translating by leowang
Moving to Linux from dated Windows machines
======

@ -0,0 +1,94 @@
|
||||
6 pivotal moments in open source history
|
||||
============================================================
|
||||
|
||||
### Here's how open source developed from a printer jam solution at MIT to a major development model in the tech industry today.
|
||||
|
||||

|
||||
Image credits : [Alan Levine][4]. [CC0 1.0][5]
|
||||
|
||||
Open source has taken a prominent role in the IT industry today. It is everywhere from the smallest embedded systems to the biggest supercomputer, from the phone in your pocket to the software running the websites and infrastructure of the companies we engage with every day. Let's explore how we got here and discuss key moments from the past 40 years that have paved a path to the current day.
|
||||
|
||||
### 1\. RMS and the printer
|
||||
|
||||
In the late 1970s, [Richard M. Stallman (RMS)][6] was a staff programmer at MIT. His department, like those at many universities at the time, shared a PDP-10 computer and a single printer. One problem they encountered was that paper would regularly jam in the printer, causing a string of print jobs to pile up in a queue until someone fixed the jam. To get around this problem, the MIT staff came up with a nice social hack: They wrote code for the printer driver so that when it jammed, a message would be sent to everyone who was currently waiting for a print job: "The printer is jammed, please fix it." This way, it was never stuck for long.
|
||||
|
||||
In 1980, the lab accepted a donation of a brand-new laser printer. When Stallman asked for the source code for the printer driver, however, so he could reimplement the social hack to have the system notify users on a paper jam, he was told that this was proprietary information. He heard of a researcher in a different university who had the source code for a research project, and when the opportunity arose, he asked this colleague to share it—and was shocked when they refused. They had signed an NDA, which Stallman took as a betrayal of the hacker culture.
|
||||
|
||||
The late '70s and early '80s represented an era where software, which had traditionally been given away with the hardware in source code form, was seen to be valuable. Increasingly, MIT researchers were starting software companies, and selling licenses to the software was key to their business models. NDAs and proprietary software licenses became the norms, and the best programmers were hired from universities like MIT to work on private development projects where they could no longer share or collaborate.
|
||||
|
||||
As a reaction to this, Stallman resolved that he would create a complete operating system that would not deprive users of the freedom to understand how it worked, and would allow them to make changes if they wished. It was the birth of the free software movement.
|
||||
|
||||
### 2\. Creation of GNU and the advent of free software
|
||||
|
||||
By late 1983, Stallman was ready to announce his project and recruit supporters and helpers. In September 1983, [he announced the creation of the GNU project][7] (GNU stands for GNU's Not Unix—a recursive acronym). The goal of the project was to clone the Unix operating system to create a system that would give complete freedom to users.
|
||||
|
||||
In January 1984, he started working full-time on the project, first creating a compiler system (GCC) and various operating system utilities. Early in 1985, he published "[The GNU Manifesto][8]," which was a call to arms for programmers to join the effort, and launched the Free Software Foundation in order to accept donations to support the work. This document is the founding charter of the free software movement.
|
||||
|
||||
### 3\. The writing of the GPL
|
||||
|
||||
Until 1989, software written and released by the [Free Software Foundation][9] and RMS did not have a single license. Emacs was released under the Emacs license, GCC was released under the GCC license, and so on; however, after a company called Unipress forced Stallman to stop distributing copies of an Emacs implementation they had acquired from James Gosling (of Java fame), he felt that a license to secure user freedoms was important.
|
||||
|
||||
The first version of the GNU General Public License was released in 1989, and it encapsulated the values of copyleft (a play on words—what is the opposite of copyright?): You may use, copy, distribute, and modify the software covered by the license, but if you make changes, you must share the modified source code alongside the modified binaries. This simple requirement to share modified software, in combination with the advent of the internet in the 1990s, is what enabled the decentralized, collaborative development model of the free software movement to flourish.
|
||||
|
||||
### 4\. "The Cathedral and the Bazaar"
|
||||
|
||||
By the mid-1990s, Linux was starting to take off, and free software had become more mainstream—or perhaps "less fringe" would be more accurate. The Linux kernel was being developed in a way that was completely different to anything people had been seen before, and was very successful doing it. Out of the chaos of the kernel community came order, and a fast-moving project.
|
||||
|
||||
In 1997, Eric S. Raymond published the seminal essay, "[The Cathedral and the Bazaar][10]," comparing and contrasting the development methodologies and social structures of GCC and the Linux kernel, and recounting his own experiences with a "bazaar" development model on the Fetchmail project. Many of the principles Raymond describes in this essay would later become central to agile development and the DevOps movement: "release early, release often," refactoring of code, and treating users as co-developers are all fundamental to modern software development.

This essay has been credited with bringing free software to a broader audience, and with convincing executives at software companies of the time that releasing their software under a free software license was the right thing to do. Raymond went on to be instrumental in the coining of the term "open source" and the creation of the Open Source Initiative.

"The Cathedral and the Bazaar" was credited as a key document in the 1998 release of the source code for the Netscape web browser Mozilla. This was the first major release of an existing, widely used piece of desktop software as free software, and it brought the movement further into the public eye.

### 5. Open source

As far back as 1985, RMS himself had identified the ambiguous nature of the word "free," used to describe software freedom, as problematic. In the GNU Manifesto, he singled out "give away" and "for free" as terms that confused zero price with user freedom. "Free as in freedom," "free speech, not free beer," and similar mantras were common when free software hit a mainstream audience in the late 1990s, but a number of prominent community figures argued that a term was needed to make the concept more accessible to the general public.

After Netscape released the source code for Mozilla in 1998 (see #4), a group of people, including Eric Raymond, Bruce Perens, Michael Tiemann, Jon "Maddog" Hall, and many of the leading lights of the free software world, gathered in Palo Alto to discuss an alternative term. The term "open source" was [coined by Christine Peterson][11] to describe free software, and the Open Source Initiative was later founded by Bruce Perens and Eric Raymond. The fundamental difference from proprietary software, they argued, was the availability of the source code, and so this was what should be put forward first in the branding.

Later that year, at a summit organized by Tim O'Reilly, an extended group of some of the most influential people in the free software world at the time gathered to debate various new brands for free software. In the end, "open source" edged out "sourceware," and open source began to be adopted by many projects in the community.

There was some disagreement, however. Richard Stallman and the Free Software Foundation continued to champion the term "free software," because to them, the fundamental difference from proprietary software was user freedom, and the availability of source code was just a means to that end. Stallman argued that removing the focus on freedom would lead to a future in which source code would be available but users would not be able to exercise the freedom to modify the software. With the advent of web-deployed software-as-a-service and open source firmware embedded in devices, the battle continues to be waged today.

### 6. Corporate investment in open source: VA Linux, Red Hat, IBM

In the late 1990s, a series of high-profile events led to a huge increase in the professionalization of free and open source software. Among these, the highest-profile were the IPOs of VA Linux and Red Hat in 1999. Both companies saw massive gains in share price on their opening days as publicly traded companies, proving that open source had gone commercial and mainstream.

Also in 1999, IBM announced that it was supporting Linux by investing $1 billion in its development, making it less risky for traditional enterprise users. The following year, Sun Microsystems released the source code to its cross-platform office suite, StarOffice, and created the [OpenOffice.org][12] project.

The combined effect of massive Silicon Valley funding of open source projects, Wall Street's attention to young companies built around open source software, and the market credibility that tech giants like IBM and Sun Microsystems brought drove the massive adoption of open source, and the embrace of the open development model that helped it thrive has led to the dominance of Linux and open source in the tech industry today.

_Which pivotal moments would you add to the list? Let us know in the comments._

### About the author

Dave Neary - Dave Neary is a member of the Open Source and Standards team at Red Hat, helping to make the open source projects that are important to Red Hat successful. Dave has been around the free and open source software world, wearing many different hats, since sending his first patch to the GIMP in 1999. [More about me][2]
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/pivotal-moments-history-open-source

Author: [Dave Neary][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/dneary
[1]:https://opensource.com/article/18/2/pivotal-moments-history-open-source?rate=gsG-JrjfROWACP7i9KUoqmH14JDff8-31C2IlNPPyu8
[2]:https://opensource.com/users/dneary
[3]:https://opensource.com/user/16681/feed
[4]:https://www.flickr.com/photos/cogdog/6476689463/in/photolist-aSjJ8H-qHAvo4-54QttY-ofm5ZJ-9NnUjX-tFxS7Y-bPPjtH-hPYow-bCndCk-6NpFvF-5yQ1xv-7EWMXZ-48RAjB-5EzYo3-qAFAdk-9gGty4-a2BBgY-bJsTcF-pWXATc-6EBTmq-SkBnSJ-57QJco-ddn815-cqt5qG-ddmYSc-pkYxRz-awf3n2-Rvnoxa-iEMfeG-bVfq5-jXy74D-meCC1v-qx22rx-fMScsJ-ci1435-ie8P5-oUSXhp-xJSm9-bHgApk-mX7ggz-bpsxd7-8ukud7-aEDmBj-qWkytq-ofwhdM-b7zSeD-ddn5G7-ddn5gb-qCxnB2-S74vsk
[5]:https://creativecommons.org/publicdomain/zero/1.0/
[6]:https://en.wikipedia.org/wiki/Richard_Stallman
[7]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
[8]:https://www.gnu.org/gnu/manifesto.en.html
[9]:https://www.fsf.org/
[10]:https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
[11]:https://opensource.com/article/18/2/coining-term-open-source-software
[12]:http://www.openoffice.org/
[13]:https://opensource.com/users/dneary
[14]:https://opensource.com/users/dneary
[15]:https://opensource.com/users/dneary
[16]:https://opensource.com/article/18/2/pivotal-moments-history-open-source#comments
[17]:https://opensource.com/tags/licensing
@ -1,133 +0,0 @@
Custom Embedded Linux Distributions
======

### Why Go Custom?

In the past, many embedded projects used off-the-shelf distributions and stripped them down to bare essentials for a number of reasons. First, removing unused packages reduced storage requirements. Embedded systems typically have little storage at boot time, and the storage available, in non-volatile memory, can require copying large amounts of the OS into memory in order to run. Second, removing unused packages reduced possible attack vectors. There is no sense hanging on to potentially vulnerable packages if you don't need them. Finally, removing unused packages reduced distribution management overhead. Dependencies between packages mean keeping them in sync whenever any one package requires an update from the upstream distribution. That can be a validation nightmare.

Yet, starting with an existing distribution and removing packages isn't as easy as it sounds. Removing one package might break dependencies held by a variety of other packages, and dependencies can change in the upstream distribution management. Additionally, some packages simply cannot be removed without great pain due to their integrated nature within the boot or runtime process. All of this takes control of the platform outside the project and can lead to unexpected delays in development.

A popular alternative is to build a custom distribution using build tools available from an upstream distribution provider. Both Gentoo and Debian provide options for this type of bottom-up build. The most popular of these is probably the Debian debootstrap utility. It retrieves prebuilt core components and allows users to cherry-pick the packages of interest in building their platforms. But debootstrap originally was only for x86 platforms. Although there are ARM (and possibly other) options now, debootstrap and Gentoo's catalyst still take dependency management away from the local project.

Some people will argue that letting someone else manage the platform software (like Android) is much easier than doing it yourself. But those distributions are general-purpose, and when you're sitting on a lightweight, resource-limited IoT device, you may think twice about any advantage that is taken out of your hands.

### System Bring-Up Primer

A custom Linux distribution requires a number of software components. The first is the toolchain. A toolchain is a collection of tools for compiling software, including (but not limited to) a compiler, linker, binary manipulation tools and standard C library. Toolchains are built specifically for a target hardware device. A toolchain built on an x86 system that is intended for use with a Raspberry Pi is called a cross-toolchain. When working with small embedded devices with limited memory and storage, it's always best to use a cross-toolchain. Note that even applications written for a specific purpose in a scripted language like JavaScript will need to run on a software platform that must itself be compiled with a cross-toolchain.

Figure 1. Compile Dependencies and Boot Order

The cross-toolchain is used to build software components for the target hardware. The first component needed is a bootloader. When power is applied to a board, the processor (depending on design) attempts to jump to a specific memory location to start running software. That memory location is where a bootloader is stored. Hardware can have a built-in bootloader that can be run directly from its storage location, or it may be copied into memory first before it is run. There also can be multiple bootloaders. A first-stage bootloader would reside on the hardware in NAND or NOR flash, for example. Its sole purpose would be to set up the hardware so a second-stage bootloader, such as one stored on an SD card, can be loaded and run.

Bootloaders have enough knowledge to get the hardware to the point where it can load Linux into memory and jump to it, effectively handing control over to Linux. Linux is an operating system. This means that, by design, it doesn't actually do anything other than monitor the hardware and provide services to higher-layer software, a.k.a. applications. The [Linux kernel][1] often is accompanied by a variety of firmware blobs. These are software objects that have been precompiled, often containing proprietary IP (intellectual property) for devices used with the hardware platform. When building a custom distribution, it may be necessary to acquire any firmware blobs not provided by the Linux kernel source tree before beginning compilation of the kernel.

Applications are stored in the root filesystem. The root filesystem is constructed by compiling and collecting a variety of software libraries, tools, scripts and configuration files. Collectively, these all provide the services, such as network configuration and USB device mounting, required by the applications the project will run.
In summary, a complete system build requires the following components:

1. A cross-toolchain.

2. One or more bootloaders.

3. The Linux kernel and associated firmware blobs.

4. A root filesystem populated with libraries, tools and utilities.

5. Custom applications.

### Start with the Right Tools

The components of the cross-toolchain can be built manually, but it's a complex process. Fortunately, tools exist that make this process easier. The best of them is probably [Crosstool-NG][2]. This project utilizes the same kconfig menu system used by the Linux kernel to configure the bits and pieces of the toolchain. The key to using this tool is finding the correct configuration items for the target platform. This typically includes the following items:

1. The target architecture, such as ARM or x86.

2. Endianness: little (typical of Intel and most modern ARM systems) or big (found on some MIPS, PowerPC and older ARM configurations).

3. CPU type as it's known to the compiler, such as GCC's use of either -mcpu or --with-cpu.

4. The floating point type supported, if any, by the CPU, such as GCC's use of either -mfpu or --with-fpu.

5. Specific version information for the binutils package, the C library and the C compiler.

Figure 2. Crosstool-NG Configuration Menu
The first four are typically available from the processor maker's documentation. It can be hard to find these for relatively new processors, but for the Raspberry Pi or BeagleBoards (and their offspring and off-shoots), you can find the information online at places like the [Embedded Linux Wiki][3].

The versions of the binutils, C library and C compiler are what will separate the toolchain from any others that might be provided by third parties. There are multiple providers of each of these components. Linaro provides bleeding-edge versions for newer processor types, while working to merge support into upstream projects like the GNU C Library. Although you can use a variety of providers, you may want to stick to the stock GNU toolchain or the Linaro versions of the same.

Another important selection in Crosstool-NG is the version of the Linux kernel. This selection provides headers for use with various toolchain components, but it does not have to be the same as the Linux kernel you will boot on the target hardware. It's important to choose a kernel that is not newer than the target hardware's kernel. When possible, pick a long-term support kernel that is older than the kernel that will be used on the target hardware.
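To make the workflow concrete, here is a minimal sketch of a typical Crosstool-NG session, assuming a Raspberry Pi 3 target. The sample name `armv8-rpi3-linux-gnueabihf` is an assumption that may differ between Crosstool-NG releases; run the list-samples step to see what your installation actually provides.

```
# Build and install Crosstool-NG locally, then use it to build a toolchain
git clone https://github.com/crosstool-ng/crosstool-ng.git
cd crosstool-ng
./bootstrap && ./configure --enable-local && make

# List the bundled sample configurations and load one close to your target
./ct-ng list-samples
./ct-ng armv8-rpi3-linux-gnueabihf   # sample name is an assumption

# Adjust architecture, endianness, CPU, FPU and component versions
./ct-ng menuconfig

# Build the cross-toolchain (installed under ~/x-tools by default)
./ct-ng build
```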
For most developers new to custom distribution builds, the toolchain build is the most complex process. Fortunately, binary toolchains are available for many target hardware platforms. If building a custom toolchain becomes problematic, search online at places like the [Embedded Linux Wiki][4] for links to prebuilt toolchains.

### Booting Options

The next component to focus on after the toolchain is the bootloader. A bootloader sets up hardware so it can be used by ever more complex software. A first-stage bootloader is often provided by the target platform maker, burned into on-hardware storage like an EEPROM or NOR flash. The first-stage bootloader will make it possible to boot from, for example, an SD card. The Raspberry Pi has such a bootloader, which makes creating a custom bootloader unnecessary.

Despite that, many projects add a secondary bootloader to perform a variety of tasks. One such task could be to provide a splash animation without using the Linux kernel or userspace tools like plymouth. A more common secondary bootloader task is to make network-based boot or PCI-connected disks available. In those cases, a tertiary bootloader, such as GRUB, may be necessary to get the system running.

Most important, bootloaders load the Linux kernel and start it running. If the first-stage bootloader doesn't provide a mechanism for passing kernel arguments at boot time, a second-stage bootloader may be necessary.

A number of open-source bootloaders are available. The [U-Boot project][5] often is used for ARM platforms like the Raspberry Pi. CoreBoot typically is used for x86 platforms like the Chromebook. Bootloaders can be very specific to target hardware. The choice of bootloader will depend on overall project requirements and target hardware (lists of open-source bootloaders can be found online).
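As a concrete example, the following sketch cross-compiles U-Boot for a Raspberry Pi 3 with a toolchain like the one built above. The repository URL, toolchain prefix and defconfig name are all assumptions; check the configs/ directory of your U-Boot tree for the right target.

```
# Cross-compile the U-Boot bootloader for a Raspberry Pi 3 (illustrative)
export PATH="$HOME/x-tools/armv8-rpi3-linux-gnueabihf/bin:$PATH"
git clone https://source.denx.de/u-boot/u-boot.git
cd u-boot
make CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- rpi_3_32b_defconfig
make CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- -j"$(nproc)"
# The resulting u-boot.bin is then copied to the SD card's boot partition
```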
### Now Bring the Penguin

The bootloader will load the Linux kernel into memory and start it running. Linux is like an extended bootloader: it continues hardware setup and prepares to load higher-level software. The core of the kernel will set up and prepare memory for sharing between applications and hardware, prepare task management to allow multiple applications to run at the same time, initialize hardware components that were not configured by the bootloader or were configured incompletely, and begin interfaces for human interaction. The kernel may not be configured to do this on its own, however. It may include an embedded lightweight filesystem, known as the initramfs or initrd, that can be created separately from the kernel to assist in hardware setup.

Another thing the kernel handles is downloading binary blobs, known generically as firmware, to hardware devices. Firmware consists of precompiled object files in formats specific to a particular device, used to initialize hardware in places that the bootloader and kernel cannot access. Many such firmware objects are available from the Linux kernel source repositories, but many others are available only from specific hardware vendors. Examples of devices that often provide their own firmware include digital TV tuners and WiFi network cards.

Firmware may be loaded from the initramfs or may be loaded after the kernel starts the init process from the root filesystem. In practice, however, building the kernel is typically the point at which firmware is gathered when creating a custom Linux distribution.
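The kernel build follows the same cross-compilation pattern as the bootloader. A minimal sketch follows, assuming the same toolchain; note that `bcm2709_defconfig` comes from the Raspberry Pi kernel fork rather than mainline, so the repository and defconfig name are assumptions that depend on your target.

```
# Cross-compile the Linux kernel for an ARM board (illustrative)
git clone --depth 1 https://github.com/raspberrypi/linux.git
cd linux
make ARCH=arm CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- bcm2709_defconfig
make ARCH=arm CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- -j"$(nproc)" \
     zImage modules dtbs
# Install the kernel modules into a staging copy of the root filesystem
make ARCH=arm CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- \
     INSTALL_MOD_PATH=../rootfs modules_install
```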
### Lightweight Core Platforms

The last thing the Linux kernel does is attempt to run a specific program called the init process. This can be named init or linuxrc, or the name of the program can be passed to the kernel by the bootloader. The init process is stored in a filesystem that the kernel can access. In the case of the initramfs, the filesystem is stored in memory (either by the kernel itself or by the bootloader placing it there). But the initramfs is not typically complete enough to run more complex applications. So another filesystem, known as the root filesystem, is required.

Figure 3. Buildroot Configuration Menu

The initramfs filesystem can be built using the Linux kernel itself, but more commonly, it is created using a project called [BusyBox][6]. BusyBox combines a collection of GNU utilities, such as grep or awk, into a single binary in order to reduce the size of the filesystem itself. BusyBox often is used to jump-start the root filesystem's creation.
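A BusyBox build with the cross-toolchain is short enough to sketch in full; the `CONFIG_PREFIX` path is just an example staging directory, and the toolchain prefix is the same assumption as in the earlier examples.

```
# Build BusyBox as the core of a minimal root filesystem (illustrative)
git clone https://git.busybox.net/busybox
cd busybox
make ARCH=arm CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- defconfig
make ARCH=arm CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- -j"$(nproc)"
# Install the single busybox binary plus its applet symlinks (ls, grep, ...)
make ARCH=arm CROSS_COMPILE=armv8-rpi3-linux-gnueabihf- \
     CONFIG_PREFIX=../rootfs install
```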
But BusyBox is purposely lightweight. It isn't intended to provide every tool that a target platform will need, and even those it does provide can be feature-reduced. BusyBox has a sister project known as [Buildroot][7], which can be used to build a complete root filesystem, providing a variety of libraries, utilities and scripting languages. Like Crosstool-NG and the Linux kernel, both BusyBox and Buildroot allow custom configuration using the kconfig menu system. More important, the Buildroot system handles dependencies automatically, so selecting a given utility guarantees that any software it requires also will be built and installed in the root filesystem.

Buildroot can generate a root filesystem archive in a variety of formats. However, it is important to note that the filesystem is only archived. Individual utilities and libraries are not packaged in either Debian or RPM formats. Using Buildroot will generate a root filesystem image, but its contents are not managed packages. Despite this, Buildroot does provide support for both the opkg and rpm package managers. This means custom applications that will be installed on the root filesystem can be package-managed, even if the root filesystem itself is not.
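A minimal Buildroot session might look like the following sketch; the board defconfig name is an assumption, so list the available defconfigs first.

```
# Build a complete root filesystem (and optionally kernel and bootloader)
git clone https://git.buildroot.net/buildroot
cd buildroot
make list-defconfigs            # show the board configurations available
make raspberrypi3_defconfig     # board config name is an assumption
make menuconfig                 # pick packages, libraries and languages
make                            # download, build and assemble everything
ls output/images/               # root filesystem archives land here
```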
### Cross-Compiling and Scripting

One of Buildroot's features is the ability to generate a staging tree. This directory contains libraries and utilities that can be used to cross-compile other applications. With a staging tree and the cross-toolchain, it becomes possible to compile additional applications outside Buildroot on the host system instead of on the target platform. Using rpm or opkg, those applications then can be installed to the root filesystem on the target at runtime using package management software.
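For example, a single-file application could be cross-compiled against the staging tree roughly as follows; the paths and toolchain prefix are assumptions consistent with the earlier sketches, and any library you link against must have been selected in Buildroot.

```
# Cross-compile an application against Buildroot's staging tree (illustrative)
STAGING="$HOME/buildroot/output/staging"
CC=armv8-rpi3-linux-gnueabihf-gcc

# --sysroot points the compiler at the staging tree's headers and libraries
"$CC" --sysroot="$STAGING" -o myapp myapp.c
```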
Most custom systems are built around the idea of building applications with scripting languages. If scripting is required on the target platform, a variety of choices are available from Buildroot, including Python, PHP, Lua and JavaScript via Node.js. Support also exists for applications requiring encryption using OpenSSL.

### What's Next

The Linux kernel and bootloaders are compiled like most applications. Their build systems are designed to build a specific bit of software. Crosstool-NG and Buildroot are metabuilds. A metabuild is a wrapper build system around a collection of software, each piece with its own build system. Alternatives to these include [Yocto][8] and [OpenEmbedded][9]. The benefit of Buildroot is the ease with which it can be wrapped by an even higher-level metabuild to automate customized Linux distribution builds. Doing this opens the option of pointing Buildroot to project-specific cache repositories. Using cache repositories can speed development and offers snapshot builds without worrying about changes to upstream repositories.

An example implementation of a higher-level build system is [PiBox][10]. PiBox is a metabuild wrapped around all of the tools discussed in this article. Its purpose is to add a common GNU Make target construction around all the tools in order to produce a core platform on which additional software can be built and distributed. The PiBox Media Center and kiosk projects are implementations of application-layer software installed on top of the core platform to produce purpose-built platforms. The [Iron Man project][11] is intended to extend these applications for home automation, integrated with voice control and IoT management.

But PiBox is nothing without these core software tools and could never run without an in-depth understanding of a complete custom distribution build process. And PiBox could not exist without the long-term dedication of the teams of developers for these projects, who have made custom-distribution building a task for the masses.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/custom-embedded-linux-distributions

Author: [Michael J. Hammel][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://www.linuxjournal.com/user/1000879
[1]:https://www.kernel.org
[2]:http://crosstool-ng.github.io
[3]:https://elinux.org/Main_Page
[4]:https://elinux.org/Main_Page
[5]:https://www.denx.de/wiki/U-Boot
[6]:https://busybox.net
[7]:https://buildroot.org
[8]:https://www.yoctoproject.org
[9]:https://www.openembedded.org/wiki/Main_Page
[10]:https://www.piboxproject.com
[11]:http://redmine.graphics-muse.org/projects/ironman/wiki/Getting_Started
103
sources/talk/20180201 How I coined the term open source.md
Normal file
@ -0,0 +1,103 @@
How I coined the term 'open source'
============================================================

### Christine Peterson finally publishes her account of that fateful day, 20 years ago.

Image by: opensource.com

In a few days, on February 3, the 20th anniversary of the introduction of the term "[open source software][6]" will be upon us. As open source software grows in popularity and powers some of the most robust and important innovations of our time, we reflect on its rise to prominence.

I am the originator of the term "open source software" and came up with it while executive director at Foresight Institute. Not a software developer like the rest, I thank Linux programmer Todd Anderson for supporting the term and proposing it to the group.

This is my account of how I came up with it, how it was proposed, and the subsequent reactions. Of course, there are a number of accounts of the coining of the term, for example by Eric Raymond and Richard Stallman, yet this is mine, written on January 2, 2006.

It has never been published, until today.

* * *

The introduction of the term "open source software" was a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that—to newcomers—its seeming focus on price was distracting. A term was needed that focused on the key issue of source code and that did not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.

This term had long been used in an "intelligence" (i.e., spying) context, but to my knowledge, use of the term with respect to software prior to 1998 has not been confirmed. The account below describes how the term [open source software][7] caught on and became the name of both an industry and a movement.

### Meetings on computer security

In late 1997, weekly meetings were being held at Foresight Institute to discuss computer security. Foresight is a nonprofit think tank focused on nanotechnology and artificial intelligence, and software security is regarded as central to the reliability and security of both. We had identified free software as a promising approach to improving software security and reliability and were looking for ways to promote it. Interest in free software was starting to grow outside the programming community, and it was increasingly clear that an opportunity was coming to change the world. However, just how to do this was unclear, and we were groping for strategies.

At these meetings, we discussed the need for a new term due to the confusion factor. The argument was as follows: those new to the term "free software" assume it is referring to the price. Oldtimers must then launch into an explanation, usually given as follows: "We mean free as in freedom, not free as in beer." At this point, a discussion on software has turned into one about the price of an alcoholic beverage. The problem was not that explaining the meaning is impossible—the problem was that the name for an important idea should not be so confusing to newcomers. A clearer term was needed. No political issues were raised regarding the free software term; the issue was its lack of clarity to those new to the concept.

### Releasing Netscape

On February 2, 1998, Eric Raymond arrived on a visit to work with Netscape on the plan to release the browser code under a free-software-style license. We held a meeting that night at Foresight's office in Los Altos to strategize and refine our message. In addition to Eric and me, active participants included Brian Behlendorf, Michael Tiemann, Todd Anderson, Mark S. Miller, and Ka-Ping Yee. But at that meeting, the field was still described as free software or, by Brian, "source code available" software.

While in town, Eric used Foresight as a base of operations. At one point during his visit, he was called to the phone to talk with a couple of Netscape legal and/or marketing staff. When he was finished, I asked to be put on the phone with them—one man and one woman, perhaps Mitchell Baker—so I could bring up the need for a new term. They agreed in principle immediately, but no specific term was agreed upon.

Between meetings that week, I was still focused on the need for a better name and came up with the term "open source software." While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term "open" had been overused and abused and believed we could do better. He was right in theory; however, I didn't have a better idea, so I thought I would try to go ahead and introduce it. In hindsight, I should have simply proposed it to Eric Raymond, but I didn't know him well at the time, so I took an indirect strategy instead.

Todd had agreed strongly about the need for a new term and offered to assist in getting the term introduced. This was helpful because, as a non-programmer, my influence within the free software community was weak. My work in nanotechnology education at Foresight was a plus, but not enough for me to be taken very seriously on free software questions. As a Linux programmer, Todd would be listened to more closely.

### The key meeting

Later that week, on February 5, 1998, a group was assembled at VA Research to brainstorm on strategy. Attending—in addition to Eric Raymond, Todd, and me—were Larry Augustin, Sam Ockman, and, attending by phone, Jon "maddog" Hall.

The primary topic was promotion strategy, especially which companies to approach. I said little, but was looking for an opportunity to introduce the proposed term. I felt that it wouldn't work for me to just blurt out, "All you technical people should start using my new term." Most of those attending didn't know me, and for all I knew, they might not even agree that a new term was greatly needed, or even somewhat desirable.

Fortunately, Todd was on the ball. Instead of making an assertion that the community should use this specific new term, he did something less directive—a smart thing to do with this community of strong-willed individuals. He simply used the term in a sentence on another topic—just dropped it into the conversation to see what happened. I went on alert, hoping for a response, but there was none at first. The discussion continued on the original topic. It seemed only he and I had noticed the usage.

Not so—memetic evolution was in action. A few minutes later, one of the others used the term, evidently without noticing, still discussing a topic other than terminology. Todd and I looked at each other out of the corners of our eyes to check: yes, we had both noticed what happened. I was excited—it might work! But I kept quiet: I still had low status in this group. Probably some were wondering why Eric had invited me at all.

Toward the end of the meeting, the [question of terminology][8] was brought up explicitly, probably by Todd or Eric. Maddog mentioned "freely distributable" as an earlier term, and "cooperatively developed" as a newer term. Eric listed "free software," "open source," and "sourceware" as the main options. Todd advocated the "open source" model, and Eric endorsed this. I didn't say much, letting Todd and Eric pull the (loose, informal) consensus together around the open source name. It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; it was a relatively minor issue. Only about 10% of my notes from this meeting are on the terminology question.

But I was elated. These were some key leaders in the community, and they liked the new name, or at least didn't object. This was a very good sign. There was probably not much more I could do to help; Eric Raymond was far better positioned to spread the new meme, and he did. Bruce Perens signed on to the effort immediately, helping set up [Opensource.org][9] and playing a key role in spreading the new term.

For the name to succeed, it was necessary, or at least highly desirable, that Tim O'Reilly agree and actively use it in his many projects on behalf of the community. Also helpful would be use of the term in the upcoming official release of the Netscape Navigator code. By late February, both O'Reilly & Associates and Netscape had started to use the term.

### Getting the name out

After this, there was a period during which the term was promoted by Eric Raymond to the media, by Tim O'Reilly to business, and by both to the programming community. It seemed to spread very quickly.

On April 7, 1998, Tim O'Reilly held a meeting of key leaders in the field. Announced in advance as the first "[Freeware Summit][10]," by April 14 it was referred to as the first "[Open Source Summit][11]."

These months were extremely exciting for open source. Every week, it seemed, a new company announced plans to participate. Reading Slashdot became a necessity, even for those like me who were only peripherally involved. I strongly believe that the new term was helpful in enabling this rapid spread into business, which then enabled wider use by the public.

A quick Google search indicates that "open source" appears more often than "free software," but there still is substantial use of the free software term, which remains useful and should be included when communicating with audiences who prefer it.

### A happy twinge

When an [early account][12] of the terminology change written by Eric Raymond was posted on the Open Source Initiative website, I was listed as being at the VA brainstorming meeting, but not as the originator of the term. This was my own fault; I had neglected to tell Eric the details. My impulse was to let it pass and stay in the background, but Todd felt otherwise. He suggested to me that one day I would be glad to be known as the person who coined the name "open source software." He explained the situation to Eric, who promptly updated his site.

Coming up with a phrase is a small contribution, but I admit to being grateful to those who remember to credit me with it. Every time I hear it, which is very often now, it gives me a little happy twinge.

The big credit for persuading the community goes to Eric Raymond and Tim O'Reilly, who made it happen. Thanks to them for crediting me, and to Todd Anderson for his role throughout. The above is not a complete account of open source history; apologies to the many key players whose names do not appear. Those seeking a more complete account should refer to the links in this article and elsewhere on the net.

### About the author

Christine Peterson - Christine Peterson writes, lectures, and briefs the media on coming powerful technologies, especially nanotechnology, artificial intelligence, and longevity. She is Cofounder and Past President of Foresight Institute, the leading nanotech public interest group. Foresight educates the public, technical community, and policymakers on coming powerful technologies and how to guide their long-term impact. She serves on the Advisory Board of the [Machine Intelligence...][2] [more about Christine Peterson][3] [More about me][4]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/coining-term-open-source-software

Author: [Christine Peterson][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://opensource.com/users/christine-peterson
[1]:https://opensource.com/article/18/2/coining-term-open-source-software?rate=HFz31Mwyy6f09l9uhm5T_OFJEmUuAwpI61FY-fSo3Gc
[2]:http://intelligence.org/
[3]:https://opensource.com/users/christine-peterson
[4]:https://opensource.com/users/christine-peterson
[5]:https://opensource.com/user/206091/feed
[6]:https://opensource.com/resources/what-open-source
[7]:https://opensource.org/osd
[8]:https://wiki2.org/en/Alternative_terms_for_free_software
[9]:https://opensource.org/
[10]:http://www.oreilly.com/pub/pr/636
[11]:http://www.oreilly.com/pub/pr/796
[12]:https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Alternative_terms_for_free_software.html
[13]:https://opensource.com/users/christine-peterson
[14]:https://opensource.com/users/christine-peterson
[15]:https://opensource.com/users/christine-peterson
[16]:https://opensource.com/article/18/2/coining-term-open-source-software#comments
95
sources/talk/20180201 IT automation- How to make the case.md
Normal file
@ -0,0 +1,95 @@
IT automation: How to make the case
======

At the start of any significant project or change initiative, IT leaders face a proverbial fork in the road.

Path #1 might seem to offer the shortest route from A to B: Simply force-feed the project to everyone by executive mandate, essentially saying, “You’re going to do this – or else.”

Path #2 might appear less direct, because on this journey you take the time to explain the strategy and the reasons behind it. In fact, you’re going to be making pit stops along this route, rather than marathoning from start to finish: “Here’s what we’re doing – and why we’re doing it.”

Guess which path bears better results?

If you said #2, you’ve traveled both paths before – and experienced the results first-hand. Getting people on board with major changes beforehand is almost always the smarter choice.

IT leaders know as well as anyone that with significant change often comes [significant fear][1], skepticism, and other challenges. That may be especially true with IT automation. The term alone sounds scary to some people, and it is often tied to misconceptions. Helping people understand the what, why, and how of your company’s automation strategy is a necessary step toward achieving the goals associated with that strategy.

[ **Read our related article,** [**IT automation best practices: 7 keys to long-term success**][2]. ]

With that in mind, we asked a variety of IT leaders for their advice on making the case for automation in your organization:

## 1. Show people what’s in it for them

Let’s face it: Self-interest and self-preservation are natural instincts. Tapping into that human tendency is a good way to get people on board: Show people how your automation strategy will benefit them and their jobs. Will automating a particular process in the software pipeline mean fewer middle-of-the-night calls for team members? Will it enable some people to dump low-skill, manual tasks in favor of more strategic, higher-order work – the sort that helps them take the next step in their career?

“Convey what’s in it for them, and how it will benefit clients and the whole company,” advises Vipul Nagrath, global CIO at [ADP][3]. “Compare the current state to a brighter future state, where the company enjoys greater stability, agility, efficiency, and security.”

The same approach holds true when making the case outside of IT; just lighten up on the jargon when explaining the benefits to non-technical stakeholders, Nagrath says.

Setting up a before-and-after picture is a good storytelling device for helping people see the upside.

“You want to paint a picture of the current state that people can relate to,” Nagrath says. “Present what’s working, but also highlight what’s causing teams to be less than agile.” Then explain how automating certain processes will improve that current state.

## 2. Connect automation to specific business goals

Part of making a strong case entails making sure people understand that you’re not just trend-chasing. If you’re automating simply for the sake of automating, people will sniff that out and become more resistant – perhaps especially within IT.

“The case for automation needs to be driven by a business demand signal, such as revenue or operating expense,” says David Emerson, VP and deputy CISO at [Cyxtera][4]. “No automation endeavor is self-justifying, and no technical feat, generally, should be a means unto itself, unless it’s a core competency of the company.”

Like Nagrath, Emerson recommends promoting the incentives associated with achieving the business goals of automation, and working toward these goals (and corresponding incentives) in an iterative, step-by-step fashion.

## 3. Break the automation plan into manageable pieces

Even if your automation strategy is literally “automate everything,” that’s a tough sell (and probably unrealistic) for most organizations. You’ll make a stronger case with a plan that approaches automation one manageable piece at a time, and that enables greater flexibility to adapt along the way.

“When making a case for automation, I recommend clearly illustrating the incentive to move to an automated process, and allowing iteration toward that goal to introduce and prove the benefits at lower risk,” Emerson says.

Sergey Zuev, founder at [GA Connector][5], shares an in-the-trenches account of why automating incrementally is crucial – and how it will help you build a stronger, longer-lasting argument for your strategy. Zuev should know: His company’s tool automates the import of data from CRM applications into Google Analytics. But it was actually the company’s internal experience automating its own customer onboarding process that led to a lightbulb moment.

“At first, we tried to build the whole onboarding funnel at once, and as a result, the project dragged [on] for months,” Zuev says. “After realizing that it [was] going nowhere, we decided to select small chunks that would have the biggest immediate effect, and start with that. As a result, we managed to implement one of the email sequences in just a week, and are already reaping the benefits of the decreased manual effort.”

## 4. Sell the big-picture benefits too

A step-by-step approach does not preclude painting a bigger picture. Just as it’s a good idea to make the case at the individual or team level, it’s also a good idea to help people understand the company-wide benefits.

Eric Kaplan, CTO at [AHEAD][6], agrees that using small wins to show automation’s value is a smart strategy for winning people over. But the value those so-called “small” wins reveal can actually help you sharpen the big picture for people. Kaplan points to the value of individual and organizational time as an area everyone can connect with easily.

“The best place to do this is where you can show savings in terms of time,” Kaplan says. “If we can accelerate the time it takes for the business to get what it needs, it will silence the skeptics.”

Time and scalability are powerful benefits that business and IT colleagues, both charged with growing the business, can grasp.

“The result of automation is scalability – less effort per person to maintain and grow your IT environment,” as [Red Hat][7] VP of Global Services John Allessio recently [noted][8]. “If adding manpower is the only way to grow your business, then scalability is a pipe dream. Automation reduces your manpower requirements and provides the flexibility required for continued IT evolution.” (See his full article, [What DevOps teams really need from a CIO][8].)

## 5. Promote the heck out of your results

At the outset of your automation strategy, you’ll likely be making the case based on goals and the anticipated benefits of achieving those goals. But as your automation strategy evolves, there’s no case quite as convincing as one grounded in real-world results.

“Seeing is believing,” says Nagrath, ADP’s CIO. “Nothing quiets skeptics like a track record of delivery.”

That means, of course, not only achieving your goals, but also doing so on time – another good reason for the iterative, step-by-step approach.

While quantitative results such as percentage improvements or cost savings can speak loudly, Nagrath advises his fellow IT leaders not to stop there when telling your automation story.

“Making a case for automation is also a qualitative discussion, where we can promote the issues prevented, overall business continuity, reductions in failures/errors, and associates taking on [greater] responsibility as they tackle more value-added tasks.”

--------------------------------------------------------------------------------

via: https://enterprisersproject.com/article/2018/1/how-make-case-it-automation

Author: [Kevin Casey][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/10/how-beat-fear-and-loathing-it-change
[2]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success?sc_cid=70160000000h0aXAAQ
[3]:https://www.adp.com/
[4]:https://www.cyxtera.com/
[5]:http://gaconnector.com/
[6]:https://www.thinkahead.com/
[7]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[8]:https://enterprisersproject.com/article/2017/12/what-devops-teams-really-need-cio
[9]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ
@ -0,0 +1,108 @@
|
||||
Open source is 20: How it changed programming and business forever
|
||||
======
|
||||
![][1]
|
||||
|
||||
Every company in the world now uses open-source software. Microsoft, once its greatest enemy, is [now an enthusiastic open supporter][2]. Even [Windows is now built using open-source techniques][3]. And if you ever searched on Google, bought a book from Amazon, watched a movie on Netflix, or looked at your friend's vacation pictures on Facebook, you're an open-source user. Not bad for a technology approach that turns 20 on February 3.
|
||||
|
||||
Now, free software has been around since the first computers, but the philosophy of both free software and open source are both much newer. In the 1970s and 80s, companies rose up which sought to profit by making proprietary software. In the nascent PC world, no one even knew about free software. But, on the Internet, which was dominated by Unix and ITS systems, it was a different story.
|
||||
|
||||
In the late 70s, [Richard M. Stallman][6], also known as RMS, then an MIT programmer, created a free printer utility based on its source code. But then a new laser printer arrived on the campus and he found he could no longer get the source code and so he couldn't recreate the utility. The angry [RMS created the concept of "Free Software."][7]
|
||||
|
||||
RMS's goal was to create a free operating system, [Hurd][8]. To make this happen in September 1983, [he announced the creation of the GNU project][9] (GNU stands for GNU's Not Unix -- a recursive acronym). By January 1984, he was working full-time on the project. To help build it he created the grandfather of all free software/open-source compiler system [GCC][10] and other operating system utilities. Early in 1985, he published "[The GNU Manifesto][11]," which was the founding charter of the free software movement and launched the [Free Software Foundation (FSF)][12].
|
||||
|
||||
This went well for a few years, but inevitably, [RMS collided with proprietary companies][13]. The company Unipress took the code to a variation of his [EMACS][14] programming editor and turned it into a proprietary program. RMS never wanted that to happen again so he created the [GNU General Public License (GPL)][15] in 1989. This was the first copyleft license. It gave users the right to use, copy, distribute, and modify a program's source code. But if you make source code changes and distribute it to others, you must share the modified code. While there had been earlier free licenses, such as [1980's four-clause BSD license][16], the GPL was the one that sparked the free-software, open-source revolution.
|
||||
|
||||
In 1997, [Eric S. Raymond][17] published his vital essay, "[The Cathedral and the Bazaar][18]." In it, he showed the advantages of the free-software development methodologies using GCC, the Linux kernel, and his experiences with his own [Fetchmail][19] project as examples. This essay did more than show the advantages of free software. The programming principles he described led the way for both [Agile][20] development and [DevOps][21]. Twenty-first century programming owes a large debt to Raymond.
|
||||
|
||||
Like all revolutions, free software quickly divided its supporters. On one side, as John Mark Walker, open-source expert and Strategic Advisor at Glyptodon, recently wrote, "[Free software is a social movement][22], with nary a hint of business interests -- it exists in the realm of religion and philosophy. Free software is a way of life with a strong moral code."
|
||||
|
||||
On the other were numerous people who wanted to bring "free software" to business. They would become the founders of "open source." They argued that such phrases as "Free as in freedom" and "Free speech, not beer," left most people confused about what that really meant for software.
|
||||
|
||||
The [release of the Netscape web browser source code][23] sparked a meeting of free software leaders and experts at [a strategy session held on February 3rd][24], 1998 in Palo Alto, CA. There, Eric S. Raymond, Michael Tiemann, Todd Anderson, Jon "maddog" Hall, Larry Augustin, Sam Ockman, and Christine Peterson hammered out the first steps to open source.
|
||||
|
||||
Peterson created the "open-source term." She remembered:
|
||||
|
||||
> [The introduction of the term "open source software" was a deliberate effort][25] to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that -- to newcomers -- its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.
|
||||
|
||||
To help clarify what open source was, and wasn't, Raymond and Bruce Perens founded the [Open Source Initiative (OSI)][26]. Its purpose was, and still is, to define what are real open-source software licenses and what aren't.
|
||||
|
||||
Stallman was enraged by open source. He wrote:
|
||||
|
||||
> The two terms describe almost the same method/category of software, but they stand for [views based on fundamentally different values][27]. Open source is a development methodology; free software is a social movement. For the free software movement, free software is an ethical imperative, essential respect for the users' freedom. By contrast, the philosophy of open source considers issues in terms of how to make software 'better' -- in a practical sense only. It says that non-free software is an inferior solution to the practical problem at hand. Most discussion of "open source" pays no attention to right and wrong, only to popularity and success.
|
||||
|
||||
He saw open source as kowtowing to business and taking the focus away from the personal freedom of being able to have free access to the code. Twenty years later, he's still angry about it.
|
||||
|
||||
In a recent e-mail to me, Stallman said, it is a "common error is connecting me or my work or free software in general with the term 'Open Source.' That is the slogan adopted in 1998 by people who reject the philosophy of the Free Software Movement." In another message, he continued, "I rejected 'open source' because it was meant to bury the "free software" ideas of freedom. Open source inspired the release ofu seful free programs, but what's missing is the idea that users deserve control of their computing. We libre-software activists say, 'Software you can't change and share is unjust, so let's escape to our free replacement.' Open source says only, 'If you let users change your code, they might fix bugs.' What it does says is not wrong, but weak; it avoids saying the deeper point."
|
||||
|
||||
Philosophical conflicts aside, open source has indeed become the model for practical software development. Larry Augustin, CEO of [SugarCRM][28], the open-source customer relationship management (CRM) Software-as-a-Service (SaaS), was one of the first to practice open-source in a commercial software business. Augustin showed that a successful business could be built on open-source software.
|
||||
|
||||
Other companies quickly embraced this model. Besides Linux companies such as [Canonical][29], [Red Hat][30] and [SUSE][31], technology businesses such as [IBM][32] and [Oracle][33] also adopted it. This, in turn, led to open source's commercial success. More recently companies you would never think of for a moment as open-source businesses like [Wal-Mart][34] and [Verizon][35], now rely on open-source programs and have their own open-source projects.
|
||||
|
||||
As Jim Zemlin, director of [The Linux Foundation][36], observed in 2014:
|
||||
|
||||
> A [new business model][37] has emerged in which companies are joining together across industries to share development resources and build common open-source code bases on which they can differentiate their own products and services.
|
||||
|
||||
Today, Hall looked back and said "I look at 'closed source' as a blip in time." Raymond is unsurprised at open-source's success. In an e-mail interview, Raymond said, "Oh, yeah, it *has* been 20 years -- and that's not a big deal because we won most of the fights we needed to quite a while ago, like in the first decade after 1998."
|
||||
|
||||
"Ever since," he continued, "we've been mainly dealing with the problems of success rather than those of failure. And a whole new class of issues, like IoT devices without upgrade paths -- doesn't help so much for the software to be open if you can't patch it."
|
||||
|
||||
In other words, he concludes, "The reward of victory is often another set of battles."
|
||||
|
||||
These are battles that open source is poised to win. Jim Whitehurst, Red Hat's CEO and president told me:
|
||||
|
||||
> The future of open source is bright. We are on the cusp of a new wave of innovation that will come about because information is being separated from physical objects thanks to the Internet of Things. Over the next decade, we will see entire industries based on open-source concepts, like the sharing of information and joint innovation, become mainstream. We'll see this impact every sector, from non-profits, like healthcare, education and government, to global corporations who realize sharing information leads to better outcomes. Open and participative innovation will become a key part of increasing productivity around the world.
|
||||
|
||||
Others see open source extending beyond software development methods. Nick Hopman, Red Hat's senior director of emerging technology practices, said:
|
||||
|
||||
> Open-source is much more than just a process to develop and expose technology. Open-source is a catalyst to drive change in every facet of society -- government, policy, medical diagnostics, process re-engineering, you name it -- and can leverage open principles that have been perfected through the experiences of open-source software development to create communities that drive change and innovation. Looking forward, open-source will continue to drive technology innovation, but I am even more excited to see how it changes the world in ways we have yet to even consider.
|
||||
|
||||
Indeed. Open source has turned twenty, but its influence, and not just on software and business, will continue on for decades to come.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/open-source-turns-20/
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]:https://zdnet1.cbsistatic.com/hub/i/r/2018/01/08/d9527281-2972-4cb7-bd87-6464d8ad50ae/thumbnail/570x322/9d4ef9007b3a3ce34de0cc39d2b15b0c/5a4faac660b22f2aba08fc3f-1280x7201jan082018150043poster.jpg
|
||||
[2]:http://www.zdnet.com/article/microsoft-the-open-source-company/
|
||||
[3]:http://www.zdnet.com/article/microsoft-uses-open-source-software-to-create-windows/
|
||||
[4]:https://zdnet1.cbsistatic.com/hub/i/r/2016/11/18/a55b3c0c-7a8e-4143-893f-44900cb2767a/resize/220x165/6cd4e37b1904743ff1f579cb10d9e857/linux-open-source-money-penguin.jpg
|
||||
[5]:http://www.zdnet.com/article/how-do-linux-and-open-source-companies-make-money-from-free-software/
|
||||
[6]:https://stallman.org/
|
||||
[7]:https://opensource.com/article/18/2/pivotal-moments-history-open-source
|
||||
[8]:https://www.gnu.org/software/hurd/hurd.html
|
||||
[9]:https://groups.google.com/forum/#!original/net.unix-wizards/8twfRPM79u0/1xlglzrWrU0J
|
||||
[10]:https://gcc.gnu.org/
|
||||
[11]:https://www.gnu.org/gnu/manifesto.en.html
|
||||
[12]:https://www.fsf.org/
|
||||
[13]:https://www.free-soft.org/gpl_history/
|
||||
[14]:https://www.gnu.org/s/emacs/
|
||||
[15]:https://www.gnu.org/licenses/gpl-3.0.en.html
|
||||
[16]:http://www.linfo.org/bsdlicense.html
|
||||
[17]:http://www.catb.org/esr/
|
||||
[18]:http://www.catb.org/esr/writings/cathedral-bazaar/
|
||||
[19]:http://www.fetchmail.info/
|
||||
[20]:https://www.agilealliance.org/agile101/
|
||||
[21]:https://aws.amazon.com/devops/what-is-devops/
|
||||
[22]:https://opensource.com/business/16/11/open-source-not-free-software?sc_cid=70160000001273HAAQ
|
||||
[23]:http://www.zdnet.com/article/the-beginning-of-the-peoples-web-20-years-of-netscape/
|
||||
[24]:https://opensource.org/history
|
||||
[25]:https://opensource.com/article/18/2/coining-term-open-source-software
|
||||
[26]:https://opensource.org
|
||||
[27]:https://www.gnu.org/philosophy/open-source-misses-the-point.html
|
||||
[28]:https://www.sugarcrm.com/
|
||||
[29]:https://www.canonical.com/
|
||||
[30]:https://www.redhat.com/en
|
||||
[31]:https://www.suse.com/
|
||||
[32]:https://developer.ibm.com/code/open/
|
||||
[33]:http://www.oracle.com/us/technologies/open-source/overview/index.html
|
||||
[34]:http://www.zdnet.com/article/walmart-relies-on-openstack/
|
||||
[35]:https://www.networkworld.com/article/3195490/lan-wan/verizon-taps-into-open-source-white-box-fervor-with-new-cpe-offering.html
|
||||
[36]:http://www.linuxfoundation.org/
[37]:http://www.zdnet.com/article/it-takes-an-open-source-village-to-make-commercial-software/
@ -0,0 +1,62 @@
Security Is Not an Absolute
======
If there’s one thing I wish people from outside the security industry knew when dealing with information security, it’s that **Security is not an absolute**. Most of the time, it’s not even quantifiable. Even in the case of particular threat models, it’s often impossible to make statements about the security of a system with certainty.
At work, I deal with a lot of very smart people who are not “security people”, but are well-meaning and trying to do the right thing. Online, I sometimes find myself in conversations on [/r/netsec][1], [/r/netsecstudents][2], [/r/asknetsec][3], or [security.stackexchange][4] where someone wants to know something about information security. Either way, it’s quite common that someone asks the fateful question: “Is this secure?”. There are actually only two answers to this question, and neither one is “Yes.”
The first answer is, fairly obviously, “No.” There are some ideas that are not secure under any reasonable definition of security. Imagine an employer that makes the PIN for your payroll system the day and month on which you started your new job. Clearly, all it takes is someone posting “started my new job today!” to social media, and their PIN has been outed. Consider transporting an encrypted hard drive with the password on a sticky note attached to the outside of the drive. Both of these systems have employed some form of “security control” (even if I use the term loosely), and both are clearly insecure to even the most rudimentary of attackers. Consequently, answering “Is this secure?” with a firm “No” seems appropriate.
The second answer is more nuanced: “It depends.” What it depends on, and whether those conditions exist in the system in use, are what many security professionals get paid to evaluate. For example, consider the employer in the previous paragraph. Instead of using a fixed scheme for PINs, they now generate a random 4-digit PIN and mail it to each new employee. Is this secure? That all depends on the threat model being applied to the scenario. If we allow an attacker unlimited attempts to log in as that user, then no 4-digit PIN (random or deterministic) is reasonably secure. On average, an attacker will need about 5,000 requests to find the valid PIN. That can be done by a very basic script in tens of minutes. If, on the other hand, we lock the account after 10 failed attempts, then we’ve reduced the attacker to a 0.1% chance of success for a given account. Is this secure? For a single account, this is probably reasonably secure (although most users might be uncomfortable at even a 1 in 1000 chance of an attacker succeeding against their personal account), but what if the attacker has a list of 1000 usernames? The attacker now has about a **63%** chance of successfully accessing at least 1 account. I think most businesses would find those odds very much against their favor.
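
The arithmetic above is easy to check. Here is a minimal Python sketch that recomputes the numbers in this paragraph, using only the attack parameters already given (10,000 possible PINs, 10 guesses before lockout, 1,000 target usernames):

```
pin_space = 10_000       # possible 4-digit PINs: 0000-9999
guesses_allowed = 10     # attempts before the account locks
accounts = 1_000         # usernames the attacker holds

# With unlimited attempts, the expected number of tries is half the space.
print((pin_space + 1) / 2)     # 5000.5 -> about 5,000 requests on average

# With lockout, the chance of cracking any single account is 10/10000.
p_single = guesses_allowed / pin_space
print(f"{p_single:.1%}")       # 0.1%

# The chance of cracking at least one of the 1,000 accounts.
p_any = 1 - (1 - p_single) ** accounts
print(f"{p_any:.1%}")          # ~63.2%
```

Lockout clearly protects an individual account, but it does far less against an attacker spraying guesses across a long list of accounts.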
So why can’t we ever come up with an answer of “Yes, this is a secure system”? Well, there are several factors at play here. The first is that very little in life in general is an absolute:
* Your doctor cannot tell you with certainty that you will be alive tomorrow.
* A seismologist can’t say that there absolutely won’t be a 9.0 earthquake that levels a big chunk of the West Coast.
* Your car manufacturer cannot guarantee that the four wheels on your car will not fall off on your way to work tomorrow.
However, all of these possibilities are very remote events. Most people are comfortable with these probabilities, largely because they do not think much about them, but even if they did, they would believe that it would not happen to them. (And almost always, they would be correct in that assumption.)
Unfortunately, in information security, we have three things working against us:
* The risks are much less understood by those seeking to understand them.
* The reality is that there are enough security threats that are **much** more common than the events above.
* The threats against which security must guard are **adaptive**.
Because most people have a hard time reasoning about the likelihood of attacks and threats against them, they seek absolute reassurance. They don’t want to be told “it depends”, they just want to hear “yes, you’re fine.” Many of these individuals are the hypochondriacs of the information security world – they think every possible attack will get them, and they want absolute reassurance they’re safe from those attacks. Alternatively, they don’t understand that there are degrees of security and threat models, and just want to be reassured that they are perfectly secure. Either way, the effect is the same – they don’t understand, but are afraid, and so want the reassurance of complete security.
We’re in an era where security breaches are unfortunately common, and developers and users alike are hearing about these vulnerabilities and breaches all the time. This causes them to pay far more attention to security than they otherwise would. By itself, this isn’t bad – all of us in the industry have been trying to get everyone’s attention about security issues for decades. Getting it now is better late than never. But because we’re so far behind the curve and breaches are so common, everyone is rushing to find out their risk and get reassurance now. Rather than consider the nuances of the situation, they just want a simple answer to “Am I secure?”
The last of these issues, however, is also the most unique to information security. For decades, we’ve looked for the formula to make a system perfectly secure. However, each countermeasure or security system is quickly defeated by attackers. We’re in a cat-and-mouse game, rather than an engineering discipline.
This isn’t to say that security is not an engineering practice – it certainly is in many ways (and my official title claims that I am an engineer), but just that it differs from other engineering areas. The forces faced by a building do not change in the face of design changes by the structural engineer. Gravity remains a constant, wind forces are predictable for a given design, the seismic nature of an area is approximately known. Making the building have stronger doors does not suddenly increase the wind forces on the windows. In security, however, when we “strengthen the doors”, the attackers do turn to the “windows” of our system. Our threats are **adaptive** – for each control we implement, they adapt to attempt to circumvent that control. For this reason, a system that was believed secure against the known threats one year is completely broken the next.
Another form of security absolutism comes from those who realize there are degrees of security but want to take them to an almost ridiculous level of paranoia. Nearly always, these people seem to be interested in forms of cryptography – perhaps because cryptography offers numbers that can be tweaked, giving an impression of differing levels of security.
* Generating RSA encryption keys of over 4k bits in length, even though all cryptographers agree this is pointless.
* Asking why AES-512 doesn’t exist, even though SHA-512 does. (The length of a hash and the length of a key are not equivalent in effective strength against attacks.)
* Setting up bizarre browser settings and then complaining about websites being broken. (Disabling all JavaScript, all cookies, all ciphers that are less than 256 bits and not perfect forward secrecy, etc.)
So the next time you want to know “Is this secure?”, consider the threat model: what are you trying to defend against? Recognize that there are no security absolutes and guarantees, and that good security engineering practice often involves compromise. Sometimes the compromise is one of usability or utility, sometimes the compromise involves working in a less-than-perfect world.
--------------------------------------------------------------------------------
via: https://systemoverlord.com/2018/02/05/security-is-not-an-absolute.html
作者:[David][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://systemoverlord.com/about
[1]:https://reddit.com/r/netsec
[2]:https://reddit.com/r/netsecstudents
[3]:https://reddit.com/r/asknetsec
[4]:https://security.stackexchange.com
@ -0,0 +1,75 @@
Building Slack for the Linux community and adopting snaps
======
![][1]
Used by millions around the world, [Slack][2] is an enterprise software platform that allows teams and businesses of all sizes to communicate effectively. Slack works seamlessly with other software tools within a single integrated environment, providing an accessible archive of an organisation’s communications, information and projects. Although Slack has grown at a rapid rate in the 4 years since its inception, its desktop engineering team, which works across Windows, macOS and Linux, currently consists of just 4 people. We spoke to Felix Rieseberg, Staff Software Engineer, who works on this team, following the release of Slack’s first [snap last month][3] to discover more about the company’s attitude to the Linux community and why they decided to build a snap.
[Install Slack snap][4]
### Can you tell us about the Slack snap which has been published?
We launched our first snap last month as a new way to distribute to our Linux community. In the enterprise space, we find that people tend to adopt new technology at a slower pace than consumers, so we will continue to offer a .deb package.
### What level of interest do you see for Slack from the Linux community?
I’m excited that interest for Slack is growing across all platforms, so it is hard for us to say whether the interest coming out of the Linux community is different from the one we’re generally seeing. However, it is important for us to meet users wherever they do their work. We have a dedicated QA engineer focusing entirely on Linux and we really do try hard to deliver the best possible experience.
We generally find it is a little harder to build for Linux than, say, Windows, as there is a less predictable base to work from – and this is an area where the Linux community truly shines. We have a fairly large number of users who are quite helpful when it comes to reporting bugs and hunting down root causes.
### How did you find out about snaps?
Martin Wimpress at Canonical reached out to me and explained the concept of snaps. Honestly, I was initially hesitant – even though I use Ubuntu – because it seemed like another standard to build and maintain. However, once I understood the benefits, I was convinced it was a worthwhile investment.
### What was the appeal of snaps that made you decide to invest in them?
Without doubt, the biggest reason we decided to build the snap is the updating feature. We at Slack make heavy use of web technologies, which in turn allows us to offer a wide variety of features – like the integration of YouTube videos or Spotify playlists. Much like a browser, that means that we frequently need to update the application.
On macOS and Windows, we already had a dedicated auto-updater that doesn’t require the user to even think about updates. We have found that any sort of interruption, even for an update, is an annoyance that we’d like to avoid. Therefore, the automatic updates via snaps seemed far more seamless and easy.
### How does building snaps compare to other forms of packaging you produce? How easy was it to integrate with your existing infrastructure and process?
As far as Linux is concerned, we have not tried other “new” packaging formats, but we’ll never say never. Snaps were an easy choice given that the majority of our Linux customers do use Ubuntu. The fact that snaps also run on other distributions was a decent bonus. I think it is really neat how Canonical is making snaps cross-distro rather than focusing on just Ubuntu.
Building it was surprisingly easy: We have one unified build process that creates installers and packages – and our snap creation simply takes the .deb package and churns out a snap. For other technologies, we sometimes had to build in-house tools to support our buildchain, but the `snapcraft` tool turned out to be just the right thing. The team at Canonical were incredibly helpful in pushing it through, as we did experience a few problems along the way.
### How do you see the store changing the way users find and install your software?
What is really unique about Slack is that people don’t just stumble upon it – they know about it from elsewhere and actively try to find it. Therefore, our levels of awareness are already high but having the snap available in the store, I hope, will make installation a lot easier for our users.
We always try to do the best for our users. The more convinced we become that it is better than other installation options, the more we will recommend the snap to our users.
### What savings do you expect, or have you already seen, by using snaps instead of having to package for other distros?
We expect the snap to offer more convenience for our users and ensure they enjoy using Slack more. From our side, the snap will save time on customer support as users won’t be stuck on previous versions which will naturally resolve a lot of issues. Having the snap is an additional bonus for us and something to build on, rather than displacing anything we already have.
### What release channels (edge/beta/candidate/stable) in the store are you using or plan to use, if any?
We used the edge channel exclusively in the development to share with the team at Canonical. Slack for Linux as a whole is still in beta, but long-term, having the options for channels is interesting and being able to release versions to interested customers a little earlier will certainly be beneficial.
### How do you think packaging your software as a snap helps your users? Did you get any feedback from them?
Installation and updating generally being easier will be the big benefit to our users. Long-term, the question is “Will users who installed the snap experience fewer problems than other customers?” I have a decent amount of hope that the built-in dependencies in snaps make it likely.
### What advice or knowledge would you share with developers who are new to snaps?
I would recommend starting with the Debian package to build your snap – that was shockingly easy. It also keeps the initial scope small, so you avoid being overwhelmed. It is a fairly small time investment and probably worth it. Also, if you can, try to find someone at Canonical to work with – they have amazing engineers.
### Where do you see the biggest opportunity for development?
We are taking it step by step currently – first get people on the snap, and build from there. People using it will already be more secure as they will benefit from the latest updates.
--------------------------------------------------------------------------------
via: https://insights.ubuntu.com/2018/02/06/building-slack-for-the-linux-community-and-adopting-snaps/
作者:[Sarah][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://insights.ubuntu.com/author/sarahfd/
[1]:https://insights.ubuntu.com/wp-content/uploads/a115/Slack_linux_screenshot@2x-2.png
[2]:https://slack.com/
[3]:https://insights.ubuntu.com/2018/01/18/canonical-brings-slack-to-the-snap-ecosystem/
[4]:https://snapcraft.io/slack/
@ -0,0 +1,49 @@
How to start an open source program in your company
======

Many internet-scale companies, including Google, Facebook, and Twitter, have established formal open source programs (sometimes referred to as open source program offices, or OSPOs for short), a designated place where open source consumption and production are supported inside a company. With such an office in place, any business can execute its open source strategies in clear terms, giving the company the tools needed to make open source a success. An open source program office's responsibilities may include establishing policies for code use, distribution, selection, and auditing; engaging with open source communities; training developers; and ensuring legal compliance.
Internet-scale companies aren't the only ones establishing open source programs; studies show that [65% of companies][1] across industries are using and contributing to open source. In the last couple of years we’ve seen [VMware][2], [Amazon][3], [Microsoft][4], and even the [UK government][5] hire open source leaders and/or create open source programs. Having an open source strategy has become critical for businesses and even governments, and all organizations should be following in their footsteps.
### How to start an open source program
Although each open source office will be customized to a specific organization’s needs, there are standard steps that every company goes through. These include:
* **Finding a leader:** Identifying the right person to lead the open source program is the first step. The [TODO Group][6] maintains a list of [sample job descriptions][7] that may be helpful in finding candidates.
* **Deciding on the program structure:** There are a variety of ways to fit an open source program office into an organization's existing structure, depending on its focus. Companies with large intellectual property portfolios may be most comfortable placing the office within the legal department. Engineering-driven organizations may choose to place the office in an engineering department, especially if the focus of the office is to improve developer productivity. Others may want the office to be within the marketing department to support sales of open source products. For inspiration, the TODO Group offers [open source program case studies][8] that can be useful.
* **Setting policies and processes:** There needs to be a standardized method for implementing the organization’s open source strategy. The policies, which should require as little oversight as possible, lay out the requirements and rules for working with open source across the organization. They should be clearly defined, easily accessible, and even automated with tooling. Ideally, employees should be able to question policies and provide recommendations for improving or revising them. Numerous organizations active in open source, such as Google, [publish their policies publicly][9], which can be a good place to start. The TODO Group offers examples of other [open source policies][10] organizations can use as resources.
### A worthy step
Opening an open source program office is a big step for most organizations, especially if they are (or are transitioning into) a software company. The benefits to the organization are tremendous and will more than make up for the investment in the long run—not only in employee satisfaction but also in developer efficiency. There are many resources to help on the journey. The TODO Group guides [How to Create an Open Source Program][11], [Measuring Your Open Source Program's Success][12], and [Tools for Managing Open Source Programs][13] are great starting points.
Open source will truly be sustainable as more companies formalize programs to contribute back to these projects. I hope these resources are useful to you, and I wish you luck on your open source program journey.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/how-start-open-source-program-your-company
作者:[Chris Aniszczyk][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/caniszczyk
[1]:https://www.blackducksoftware.com/2016-future-of-open-source
[2]:http://www.cio.com/article/3095843/open-source-tools/vmware-today-has-a-strong-investment-in-open-source-dirk-hohndel.html
[3]:http://fortune.com/2016/12/01/amazon-open-source-guru/
[4]:https://opensource.microsoft.com/
[5]:https://www.linkedin.com/jobs/view/169669924
[6]:http://todogroup.org
[7]:https://github.com/todogroup/job-descriptions
[8]:https://github.com/todogroup/guides/tree/master/casestudies
[9]:https://opensource.google.com/docs/why/
[10]:https://github.com/todogroup/policies
[11]:https://github.com/todogroup/guides/blob/master/creating-an-open-source-program.md
[12]:https://github.com/todogroup/guides/blob/master/measuring-your-open-source-program.md
[13]:https://github.com/todogroup/guides/blob/master/tools-for-managing-open-source-programs.md
@ -0,0 +1,99 @@
UQDS: A software-development process that puts quality first
======

The Ultimate Quality Development System (UQDS) is a software development process that provides clear guidelines for how to use branches, tickets, and code reviews. It was invented more than a decade ago by Divmod and adopted by [Twisted][1], an event-driven framework for Python that underlies popular commercial platforms like HipChat as well as open source projects like Scrapy (a web scraper).
Divmod, sadly, is no longer around—it has gone the way of many startups. Luckily, since many of its products were open source, its legacy lives on.
When Twisted was a young project, there was no clear process for when code was "good enough" to go in. As a result, while some parts were highly polished and reliable, others were alpha quality software—with no way to tell which was which. UQDS was designed as a process to help an existing project with definite quality challenges ramp up its quality while continuing to add features and become more useful.
UQDS has helped the Twisted project evolve from having frequent regressions and needing multiple release candidates to get a working version, to achieving its current reputation of stability and reliability.
### UQDS's building blocks
UQDS was invented by Divmod back in 2006. At that time, Continuous Integration (CI) was in its infancy and modern version control systems, which allow easy branch merging, were barely proofs of concept. Although Divmod did not have today's modern tooling, it put together CI, some ad-hoc tooling to make [Subversion branches][2] work, and a lot of thought into a working process. Thus the UQDS methodology was born.
UQDS is based upon fundamental building blocks, each with their own carefully considered best practices:
1. Tickets
2. Branches
3. Tests
4. Reviews
5. No exceptions
Let's go into each of those in a little more detail.
#### Tickets
In a project using the UQDS methodology, no change is allowed to happen if it's not accompanied by a ticket. This creates a written record of what change is needed and—more importantly—why.
* Tickets should define clear, measurable goals.
* Work on a ticket does not begin until the ticket contains goals that are clearly defined.
#### Branches
Branches in UQDS are tightly coupled with tickets. Each branch must solve one complete ticket, no more and no less. If a branch addresses either more or less than a single ticket, it means there was a problem with the ticket definition—or with the branch. Tickets might be split or merged, or a branch split and merged, until congruence is achieved.
Enforcing that each branch addresses no more and no less than a single ticket—which corresponds to one logical, measurable change—allows a project using UQDS to have fine-grained control over the commits: A single change can be reverted or changes may even be applied in a different order than they were committed. This helps the project maintain a stable and clean codebase.
#### Tests
UQDS relies upon automated testing of all sorts, including unit, integration, regression, and static tests. In order for this to work, all relevant tests must pass at all times. Tests that don't pass must either be fixed or, if no longer relevant, be removed entirely.
Tests are also coupled with tickets. All new work must include tests that demonstrate that the ticket goals are fully met. Without this, the work won't be merged no matter how good it may seem to be.
A side effect of the focus on tests is that the only platforms that a UQDS-using project can say it supports are those on which the tests run with a CI framework—and where passing the test on the platform is a condition for merging a branch. Without this restriction on supported platforms, the quality of the project is not Ultimate.
#### Reviews
While automated tests are important to the quality ensured by UQDS, the methodology never loses sight of the human factor. Every branch commit requires code review, and each review must follow very strict rules:
1. Each commit must be reviewed by a different person than the author.
2. Start with a comment thanking the contributor for their work.
3. Make a note of something that the contributor did especially well (e.g., "that's the perfect name for that variable!").
4. Make a note of something that could be done better (e.g., "this line could use a comment explaining the choices.").
5. Finish with directions for an explicit next step, typically either merge as-is, fix and merge, or fix and submit for re-review.
These rules respect the time and effort of the contributor while also increasing the sharing of knowledge and ideas. The explicit next step allows the contributor to have a clear idea on how to make progress.
#### No exceptions
In any process, it's easy to come up with reasons why you might need to flex the rules just a little bit to let this thing or that thing slide through the system. The most important fundamental building block of UQDS is that there are no exceptions. The entire community works together to make sure that the rules do not flex, not for any reason whatsoever.
Knowing that all code has been approved by a different person than the author, that the code has complete test coverage, that each branch corresponds to a single ticket, and that this ticket is well considered and complete brings a peace of mind that is too valuable to risk losing, even for a single small exception. The goal is quality, and quality does not come from compromise.
### A downside to UQDS
While UQDS has helped Twisted become a highly stable and reliable project, this reliability hasn't come without cost. We quickly found that the review requirements caused a slowdown and backlog of commits to review, leading to slower development. The answer to this wasn't to compromise on quality by getting rid of UQDS; it was to refocus the community priorities such that reviewing commits became one of the most important ways to contribute to the project.
To help with this, the community developed a bot in the [Twisted IRC channel][3] that will reply to the command `review tickets` with a list of tickets that still need review. The [Twisted review queue][4] website returns a prioritized list of tickets for review. Finally, the entire community keeps close tabs on the number of tickets that need review. It's become an important metric the community uses to gauge the health of the project.
### Learn more
The best way to learn about UQDS is to [join the Twisted Community][5] and see it in action. If you'd like more information about the methodology and how it might help your project reach a high level of reliability and stability, have a look at the [UQDS documentation][6] in the Twisted wiki.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/uqds
作者:[Moshe Zadka][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/moshez
[1]:https://twistedmatrix.com/trac/
[2]:http://structure.usc.edu/svn/svn.branchmerge.html
[3]:http://webchat.freenode.net/?channels=%23twisted
[4]:https://twisted.reviews
[5]:https://twistedmatrix.com/trac/wiki/TwistedCommunity
[6]:https://twistedmatrix.com/trac/wiki/UltimateQualityDevelopmentSystem
@ -0,0 +1,102 @@
Why Linux is better than Windows or macOS for security
======

Enterprises invest a lot of time, effort and money in keeping their systems secure. The most security-conscious might have a security operations center. They of course use firewalls and antivirus tools. They probably spend a lot of time monitoring their networks, looking for telltale anomalies that could indicate a breach. What with IDS, SIEM and NGFWs, they deploy a veritable alphabet of defenses.
But how many have given much thought to one of the cornerstones of their digital operations: the operating systems deployed on the workforce’s PCs? Was security even a factor when the desktop OS was selected?
This raises a question that every IT person should be able to answer: Which operating system is the most secure for general deployment?
We asked some experts what they think of the security of these three choices: Windows, the ever-more-complex platform that’s easily the most popular desktop system; macOS X, the FreeBSD Unix-based operating system that powers Apple Macintosh systems; and Linux, by which we mean all the various Linux distributions and related Unix-based systems.
### How we got here
One reason enterprises might not have evaluated the security of the OS they deployed to the workforce is that they made the choice years ago. Go back far enough and all operating systems were reasonably safe, because the business of hacking into them and stealing data or installing malware was in its infancy. And once an OS choice is made, it’s hard to consider a change. Few IT organizations would want the headache of moving a globally dispersed workforce to an entirely new OS. Heck, they get enough pushback when they move users to a new version of their OS of choice.
Still, would it be wise to reconsider? Are the three leading desktop OSes different enough in their approach to security to make a change worthwhile?
Certainly the threats confronting enterprise systems have changed in the last few years. Attacks have become far more sophisticated. The lone teen hacker that once dominated the public imagination has been supplanted by well-organized networks of criminals and shadowy, government-funded organizations with vast computing resources.
Like many of you, I have firsthand experience of the threats that are out there: I have been infected by malware and viruses on numerous Windows computers, and I even had macro viruses that infected files on my Mac. More recently, a widespread automated hack circumvented the security on my website and infected it with malware. The effects of such malware were always initially subtle, something you wouldn’t even notice, until the malware ended up so deeply embedded in the system that performance started to suffer noticeably. One striking thing about the infestations was that I was never specifically targeted by the miscreants; nowadays, it’s as easy to attack 100,000 computers with a botnet as it is to attack a dozen.
### Does the OS really matter?
The OS you deploy to your users does make a difference for your security stance, but it isn’t a sure safeguard. For one thing, a breach these days is more likely to come about because an attacker probed your users, not your systems. A [survey][1] of hackers who attended a recent DEFCON conference revealed that “84 percent use social engineering as part of their attack strategy.” Deploying a secure operating system is an important starting point, but without user education, strong firewalls and constant vigilance, even the most secure networks can be invaded. And of course there’s always the risk of user-downloaded software, extensions, utilities, plug-ins and other software that appears benign but becomes a path for malware to appear on the system.
And no matter which platform you choose, one of the best ways to keep your system secure is to ensure that you apply software updates promptly. Once a patch is in the wild, after all, the hackers can reverse engineer it and find a new exploit they can use in their next wave of attacks.
And don’t forget the basics. Don’t use root, and don’t grant guest access to even older servers on the network. Teach your users how to pick really good passwords and arm them with tools such as [1Password][2] that make it easier for them to have different passwords on every account and website they use.
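
As a rough illustration of why randomly generated passwords from such tools beat human-chosen ones, here is a small Python sketch (an illustrative example of my own, not from the article) that estimates password entropy; the formula length × log2(alphabet size) only holds for passwords drawn uniformly at random, which is exactly what a password manager produces:

```
import math

def entropy_bits(length, alphabet_size):
    # Entropy of a uniformly random password: length * log2(alphabet size).
    return length * math.log2(alphabet_size)

# A short lowercase-only password vs. a longer one drawn from all 94
# printable ASCII characters:
print(f"{entropy_bits(8, 26):.0f} bits")   # 8 chars, a-z only: ~38 bits
print(f"{entropy_bits(16, 94):.0f} bits")  # 16 chars, printable ASCII: ~105 bits
```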
Because the bottom line is that every decision you make regarding your systems will affect your security, even the operating system your users do their work on.
### Windows, the popular choice
If you’re a security manager, it is extremely likely that the questions raised by this article could be rephrased like so: Would we be more secure if we moved away from Microsoft Windows? To say that Windows dominates the enterprise market is to understate the case. [NetMarketShare][4] estimates that a staggering 88% of all computers on the internet are running a version of Windows.
If your systems fall within that 88%, you’re probably aware that Microsoft has continued to beef up security in the Windows system. Among its improvements have been rewriting and re-rewriting its operating system codebase, adding its own antivirus software system, improving firewalls and implementing a sandbox architecture, where programs can’t access the memory space of the OS or other applications.
But the popularity of Windows is a problem in itself. The security of an operating system can depend to a large degree on the size of its installed base. For malware authors, Windows provides a massive playing field. Concentrating on it gives them the most bang for their efforts.
As Troy Wilkinson, CEO of Axiom Cyber Solutions, explains, “Windows always comes in last in the security world for a number of reasons, mainly because of the adoption rate of consumers. With a large number of Windows-based personal computers on the market, hackers historically have targeted these systems the most.”
It’s certainly true that, from Melissa to WannaCry and beyond, much of the malware the world has seen has been aimed at Windows systems.
### macOS X and security through obscurity
If the most popular OS is always going to be the biggest target, then can using a less popular option ensure security? That idea is a new take on the old — and entirely discredited — concept of “security through obscurity,” which held that keeping the inner workings of software proprietary and therefore secret was the best way to defend against attacks.
Wilkinson flatly states that macOS X “is more secure than Windows,” but he hastens to add that “macOS used to be considered a fully secure operating system with little chance of security flaws, but in recent years we have seen hackers crafting additional exploits against macOS.”
In other words, the attackers are branching out and not ignoring the Mac universe.
Security researcher Lee Muson of Comparitech says that “macOS is likely to be the pick of the bunch” when it comes to choosing a more secure OS, but he cautions that it is not impenetrable, as once thought. Its advantage is that “it still benefits from a touch of security through obscurity versus the still much larger target presented by Microsoft’s offering.”
Joe Moore of Wolf Solutions gives Apple a bit more credit, saying that “off the shelf, macOS X has a great track record when it comes to security, in part because it isn’t as widely targeted as Windows and in part because Apple does a pretty good job of staying on top of security issues.”
### And the winner is …
You probably knew this from the beginning: The clear consensus among experts is that Linux is the most secure operating system. But while it’s the OS of choice for servers, enterprises deploying it on the desktop are few and far between.
And if you did decide that Linux was the way to go, you would still have to decide which distribution of the Linux system to choose, and things get a bit more complicated there. Users are going to want a UI that seems familiar, and you are going to want the most secure OS.
As Moore explains, “Linux has the potential to be the most secure, but requires the user be something of a power user.” So, not for everyone.
Linux distros that target security as a primary feature include [Parrot Linux][5], a Debian-based distro that Moore says provides numerous security-related tools right out of the box.
Of course, an important differentiator is that Linux is open source. The fact that coders can read and comment upon each other’s work might seem like a security nightmare, but it actually turns out to be an important reason why Linux is so secure, says Igor Bidenko, CISO of Simplex Solutions. “Linux is the most secure OS, as its source is open. Anyone can review it and make sure there are no bugs or back doors.”
Wilkinson elaborates that “Linux and Unix-based operating systems have less exploitable security flaws known to the information security world. Linux code is reviewed by the tech community, which lends itself to security: By having that much oversight, there are fewer vulnerabilities, bugs and threats.”
That’s a subtle and perhaps counterintuitive explanation, but by having dozens — or sometimes hundreds — of people read through every line of code in the operating system, the code is actually more robust and the chance of flaws slipping into the wild is diminished. That had a lot to do with why PC World came right out and said Linux is more secure. As Katherine Noyes [explains][6], “Microsoft may tout its large team of paid developers, but it’s unlikely that team can compare with a global base of Linux user-developers around the globe. Security can only benefit through all those extra eyeballs.”
Another factor cited by PC World is Linux’s better user privileges model: Windows users “are generally given administrator access by default, which means they pretty much have access to everything on the system,” according to Noyes’ article. Linux, in contrast, greatly restricts “root.”
Noyes also noted that the diversity possible within Linux environments is a better hedge against attacks than the typical Windows monoculture: There are simply a lot of different distributions of Linux available. And some of them are differentiated in ways that specifically address security concerns. Security researcher Lee Muson of Comparitech offers this suggestion for a Linux distro: “The [Qubes OS][7] is as good a starting point with Linux as you can find right now, with an [endorsement from Edward Snowden][8] massively overshadowing its own extremely humble claims.” Other security experts point to specialized secure Linux distributions such as [Tails Linux][9], designed to run securely and anonymously directly from a USB flash drive or similar external device.
### Building security momentum
Inertia is a powerful force. Although there is clear consensus that Linux is the safest choice for the desktop, there has been no stampede to dump Windows and Mac machines in favor of it. Nonetheless, a small but significant increase in Linux adoption would probably result in safer computing for everyone, because market share loss is one sure way to get Microsoft’s and Apple’s attention. In other words, if enough users switch to Linux on the desktop, Windows and Mac PCs are very likely to become more secure platforms.
--------------------------------------------------------------------------------
via: https://www.computerworld.com/article/3252823/linux/why-linux-is-better-than-windows-or-macos-for-security.html
作者:[Dave Taylor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.computerworld.com/author/Dave-Taylor/
[1]:https://www.esecurityplanet.com/hackers/fully-84-percent-of-hackers-leverage-social-engineering-in-attacks.html
[2]:http://www.1password.com
[3]:https://www.facebook.com/Computerworld/posts/10156160917029680
[4]:https://www.netmarketshare.com/operating-system-market-share.aspx?options=%7B%22filter%22%3A%7B%22%24and%22%3A%5B%7B%22deviceType%22%3A%7B%22%24in%22%3A%5B%22Desktop%2Flaptop%22%5D%7D%7D%5D%7D%2C%22dateLabel%22%3A%22Trend%22%2C%22attributes%22%3A%22share%22%2C%22group%22%3A%22platform%22%2C%22sort%22%3A%7B%22share%22%3A-1%7D%2C%22id%22%3A%22platformsDesktop%22%2C%22dateInterval%22%3A%22Monthly%22%2C%22dateStart%22%3A%222017-02%22%2C%22dateEnd%22%3A%222018-01%22%2C%22segments%22%3A%22-1000%22%7D
[5]:https://www.parrotsec.org/
[6]:https://www.pcworld.com/article/202452/why_linux_is_more_secure_than_windows.html
[7]:https://www.qubes-os.org/
[8]:https://twitter.com/snowden/status/781493632293605376?lang=en
[9]:https://tails.boum.org/about/index.en.html
@ -0,0 +1,47 @@
How DevOps helps deliver cool apps to users
======

A long time ago, in a galaxy far, far away, before DevOps became a mainstream practice, the software development process was excruciatingly slow, tedious, and methodical. By the time an application was ready to be deployed, a ginormous laundry list of changes and fixes to the next major release had already amassed. It took months to go back and work through the entire development cycle to prepare for each new release. Keep in mind that this process would be repeated again and again to deliver updates to users.
Today, when everything is done instantaneously and in real time, that old process seems primitive. The mobile revolution has dramatically changed the way we interact with software, and companies that were early adopters of DevOps have totally changed the expectations for software development and deployment.
Consider Facebook: The Facebook mobile app is updated and refreshed every two weeks, like clockwork. This is the new standard, because users now expect software to be constantly fixed and updated. Any company that takes a month or more to deploy new features or simple bug fixes would surely fade into obscurity. If you cannot deliver what users expect, they will find someone who can.
Facebook, along with industry giants such as Amazon, Netflix, Google, and others, has forced enterprises to become faster and more efficient to meet today's customer expectations.
### Why DevOps?
Agile and DevOps are critically important to mobile app development since deployment cycles are lightning-quick. It’s a dense, fast-paced environment in which companies must outpace, out-think, and outmaneuver the competition to survive. In the App Store, the average top ten app remains in that position for only about a month.
To illustrate the old-school waterfall methodology, think back to when you first learned how to drive. Initially, you focused on every individual aspect, using a methodological process: You got in the car; fastened the seat belt; adjusted the seat, mirrors, and steering wheel; started the car, placed your hands at 10 and 2 o’clock, etc. Performing a simple task such as a lane change involved a painstaking, multi-step process executed in a particular order.
DevOps, in contrast, is how you would drive after several years of experience. Everything occurs intuitively and simultaneously, and you can move smoothly from A to B without putting much thought into the process.
The world of mobile apps is too fast-paced for old methods of app development. DevOps is designed to deliver effective, stable apps quickly and without the need for extensive resources. However, you cannot buy DevOps like an ordinary product or service. DevOps is about changing the culture and dynamics of how teams work together.
Large organizations like Amazon and Facebook are not the only ones embracing the DevOps culture; smaller mobile app companies are signing on as well. “Shortening the release cycle while keeping the number of production incidents, and the overall cost of failure, at a low level is what our customers are looking for,” says Oleg Reshetnyak, head of engineering at mobile product agency [Reinvently][1].
### DevOps: Not _if_, but _when_
In today’s fast-paced business environment, choosing DevOps is like choosing to breathe: You either [do it or die][2].
According to the [U.S. Small Business Administration][3], only 16% of companies starting out today will last an entire generation. Mobile app companies that do not adopt DevOps risk going the way of the dinosaurs. Furthermore, the same study found that organizations that adopt DevOps are twice as likely to exceed profitability, product goals, and market share.
Innovating more quickly and securely requires three things: cloud, automation, and DevOps. Depending on how you define DevOps, the lines that separate these three factors can be unclear. However, one thing is certain: DevOps unifies everyone within the organization around the common goal of delivering higher-quality software more quickly and with less risk.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/devops-delivers-cool-apps-users
作者:[Stanislav Ivaschenko][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ilyadudkin
[1]:https://reinvently.com/
[2]:https://squadex.com/insights/devops-or-die/
[3]:https://www.sba.gov/
@ -0,0 +1,73 @@
Why Mainframes Aren't Going Away Any Time Soon
======

IBM's last earnings report showed the [first uptick in revenue in more than five years.][1] Some of that growth was from an expected source, cloud revenue, which was up 24 percent year over year and now accounts for 21 percent of Big Blue's take. Another major boost, however, came from a spike in mainframe revenue. Z series mainframe sales were up 70 percent, the company said.
This may sound somewhat akin to a return to vacuum tube technology in a world where transistors are yesterday's news. In actuality, this is only a sign of the changing face of IT.
**Related:** [One Click and Voilà, Your Entire Data Center is Encrypted][2]
Modern mainframes definitely aren't your father's punch card-driven machines that filled entire rooms. These days, they most often run Linux and have found a renewed place in the data center, where they're being called upon to do a lot of heavy lifting. Want to know where the largest instance of Oracle's database runs? It's on a Linux mainframe. How about the largest implementation of SAP on the planet? Again, Linux on a mainframe.
"Before the advent of Linux on the mainframe, the people who bought mainframes primarily were people who already had them," Leonard Santalucia explained to Data Center Knowledge several months back at the All Things Open conference. "They would just wait for the new version to come out and upgrade to it, because it would run cheaper and faster.
**Related:** [IBM Designs a “Performance Beast” for AI][3]
"When Linux came out, it opened up the door to other customers that never would have paid attention to the mainframe. In fact, probably a good three to four hundred new clients that never had mainframes before got them. They don't have any old mainframes hanging around or ones that were upgraded. These are net new mainframes."
Although Santalucia is CTO at Vicom Infinity, primarily an IBM reseller, at the conference he was wearing his hat as chairperson of the Linux Foundation's Open Mainframe Project. He was joined in the conversation by John Mertic, the project's director of program management.
Santalucia knows IBM's mainframes from top to bottom, having spent 27 years at Big Blue, the last eight as CTO for the company's systems and technology group.
"Because of Linux getting started with it back in 1999, it opened up a lot of doors that were closed to the mainframe," he said. "Beforehand it was just z/OS, z/VM, z/VSE, z/TPF, the traditional operating systems. When Linux came along, it got the mainframe into other areas that it never was, or even thought to be in, because of how open it is, and because Linux on the mainframe is no different than Linux on any other platform."
The focus on Linux isn't the only motivator behind the upsurge in mainframe use in data centers. Increasingly, enterprises with heavy IT needs are finding many advantages to incorporating modern mainframes into their plans. For example, mainframes can greatly reduce power, cooling, and floor space costs. In markets like New York City, where real estate is at a premium, electricity rates are high, and electricity use is highly taxed to reduce demand, these are significant advantages.
"There was one customer where we were able to do a consolidation of 25 x86 cores to one core on a mainframe," Santalucia said. "They have several thousand machines that are ten and twenty cores each. So, as far as the eye could see in this data center, [x86 server workloads] could be picked up and moved onto this box that is about the size of a sub-zero refrigerator in your kitchen."
In addition to saving on physical data center resources, this customer by design would likely see better performance.
"When you look at the workload as it's running on an x86 system, the math, the application code, the I/O to manage the disk, and whatever else is attached to that system, is all run through the same chip," he explained. "On a Z, there are multiple chip architectures built into the system. There's one specifically just for the application code. If it senses the application needs an I/O or some mathematics, it sends it off to a separate processor to do math or I/O, all dynamically handled by the underlying firmware. Your Linux environment doesn't have to understand that. When it's running on a mainframe, it knows it's running on a mainframe and it will exploit that architecture."
The operating system knows it's running on a mainframe because when IBM was readying its mainframe for Linux, it open-sourced something like 75,000 lines of code for Linux distributions to use to make sure their OSes were ready for IBM Z.
"A lot of times people will hear there's 170 processors on the Z14," Santalucia said. "Well, there's actually another 400 other processors that nobody counts in that count of application chips, because it is taken for granted."
Mainframes are also resilient when it comes to disaster recovery. Santalucia told the story of an insurance company located in lower Manhattan, within sight of the East River. The company operated a large data center in a basement that among other things housed a mainframe backed up to another mainframe located in Upstate New York. When Hurricane Sandy hit in 2012, the data center flooded, electrocuting two employees and destroying all of the servers, including the mainframe. But the mainframe's workload was restored within 24 hours from the remote backup.
The x86 machines were all destroyed, and the data was never recovered. But why weren't they also backed up?
"The reason they didn't do this disaster recovery the same way they did with the mainframe was because it was too expensive to have a mirror of all those distributed servers someplace else," he explained. "With the mainframe, you can have another mainframe as an insurance policy that's lower in price, called Capacity BackUp, and it just sits there idling until something like this happens."
Mainframes are also evidently tough as nails. Santalucia told another story in which a data center in Japan was struck by an earthquake strong enough to destroy all of its x86 machines. The center's one mainframe fell on its side but continued to work.
The mainframe also comes with built-in redundancy to guard against situations that would be disastrous with x86 machines.
"What if a hard disk fails on a node in x86?" the Open Mainframe Project's Mertic asked. "You're taking down a chunk of that cluster potentially. With a mainframe you're not. A mainframe just keeps on kicking like nothing's ever happened."
Mertic added that a motherboard can be pulled from a running mainframe, and again, "the thing keeps on running like nothing's ever happened."
So how do you figure out if a mainframe is right for your organization? Simple, says Santalucia. Do the math.
"The approach should be to look at it from a business, technical, and financial perspective -- not just a financial, total-cost-of-acquisition perspective," he said, pointing out that often, costs associated with software, migration, networking, and people are not considered. The break-even point, he said, comes when at least 20 to 30 servers are being migrated to a mainframe. After that point the mainframe has a financial advantage.
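Taking Santalucia's "do the math" advice literally, a back-of-the-envelope comparison might look like the Python sketch below. Every dollar figure in it is a hypothetical placeholder rather than real pricing; the point is only that once software, migration, and staffing are counted alongside hardware, a crossover can appear around the 20-to-30-server mark he describes:

```
# Hypothetical total-cost model; all dollar figures are placeholders.
def x86_cost(servers):
    per_server = 25_000              # hardware, software, power per server
    admins = max(2, servers // 10)   # staffing grows with the fleet
    return servers * per_server + admins * 120_000

def mainframe_cost(servers):
    base = 500_000                   # the machine itself
    migration = 5_000 * servers      # one-time migration effort
    return base + migration + 2 * 120_000  # small fixed admin team

for n in (10, 20, 30, 50):
    print(f"{n:>3} servers: x86 ${x86_cost(n):,} vs mainframe ${mainframe_cost(n):,}")
```

With these made-up inputs, x86 wins at 10 and 20 servers and the mainframe wins from roughly 30 on; substitute your own costs to see where your break-even falls.
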
"You can get a few people running the mainframe and managing hundreds or thousands of virtual servers," he added. "If you tried to do the same thing on other platforms, you'd find that you need significantly more resources to maintain an environment like that. Seven people at ADP handle the 8,000 virtual servers they have, and they need seven only in case somebody gets sick.
"If you had eight thousand servers on x86, even if they're virtualized, do you think you could get away with seven?"
--------------------------------------------------------------------------------
via: http://www.datacenterknowledge.com/hardware/why-mainframes-arent-going-away-any-time-soon
|
||||
|
||||
作者:[Christine Hall][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.datacenterknowledge.com/archives/author/christine-hall
|
||||
[1]:http://www.datacenterknowledge.com/ibm/mainframe-sales-fuel-growth-ibm
|
||||
[2]:http://www.datacenterknowledge.com/design/one-click-and-voil-your-entire-data-center-encrypted
|
||||
[3]:http://www.datacenterknowledge.com/design/ibm-designs-performance-beast-ai
|
@ -0,0 +1,41 @@
Gathering project requirements using the Open Decision Framework
======



It's no secret that clear, concise, and measurable requirements lead to more successful projects. A study about large scale projects by [McKinsey & Company in conjunction with the University of Oxford][1] revealed that "on average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted." The research also showed that some of the causes for this failure were "fuzzy business objectives, out-of-sync stakeholders, and excessive rework."

Business analysts often find themselves constructing these requirements through ongoing conversations. To do this, they must engage multiple stakeholders and ensure that engaged participants provide clear business objectives. This leads to less rework and more projects with a higher rate of success.

And they can do it in an open and inclusive way.

### A framework for success

One tool for increasing project success rate is the [Open Decision Framework][2]. The Open Decision Framework is a resource that can help users make more effective decisions in organizations that embrace [open principles][3]. The framework stresses three primary principles: being transparent, being inclusive, and being customer-centric.

**Transparent**. Many times, developers and product designers assume they know how stakeholders use a particular tool or piece of software. But these assumptions are often incorrect and lead to misconceptions about what stakeholders actually need. Practicing transparency when having discussions with developers and business owners is imperative. Development teams need to see not only the "sunny day" scenario but also the challenges that stakeholders face with certain tools or processes. Ask questions such as: "What steps must be done manually?" and "Is this tool performing as you expect?" This provides a shared understanding of the problem and a common baseline for discussion.

**Inclusive**. It is vitally important for business analysts to watch body language and visual cues when gathering requirements. If someone is sitting with arms crossed or rolling their eyes, then it's a clear indication that they do not feel heard. A BA must encourage open communication by reaching out to those who don't feel heard and giving them the opportunity to be heard. Prior to starting the session, lay down ground rules that make the space safe for everyone to speak their opinions and share their thoughts. Listen to the feedback provided and respond politely when feedback is offered. Diverse opinions and collaborative problem solving will bring exciting ideas to the session.

**Customer-centric**. The first step to being customer-centric is to recognize the customer. Who is benefiting from this change, update, or development? Early in the project, conduct a stakeholder mapping to help determine the key stakeholders, their roles in the project, and the ways they fit into the big picture. Involving the right customers and ensuring that their needs are met will lead to more successful requirements being identified, more realistic (real-life) tests being conducted, and, ultimately, a successful delivery.

When your requirement sessions are transparent, inclusive, and customer-centric, you'll gather better requirements. And when you use the [Open Decision Framework][4] for running those sessions, participants feel more involved and empowered, and they deliver more accurate and complete requirements. In other words:

**Transparent + Inclusive + Customer-Centric = Better Requirements = Successful Projects**

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/18/2/constructing-project-requirements

作者:[Tracy Buckner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/tracyb
[1]:http://calleam.com/WTPF/?page_id=1445
[2]:https://opensource.com/open-organization/resources/open-decision-framework
[3]:https://opensource.com/open-organization/resources/open-org-definition
[4]:https://opensource.com/open-organization/16/6/introducing-open-decision-framework
@ -0,0 +1,127 @@
Arch Anywhere Is Dead, Long Live Anarchy Linux
======



Arch Anywhere was a distribution aimed at bringing Arch Linux to the masses. Due to a trademark infringement, Arch Anywhere has been completely rebranded to [Anarchy Linux][1]. And I’m here to say, if you’re looking for a distribution that will enable you to enjoy Arch Linux, a little Anarchy will go a very long way. This distribution is seriously impressive in what it sets out to do and what it achieves. In fact, anyone who previously feared Arch Linux can set those fears aside… because Anarchy Linux makes Arch Linux easy.

Let’s face it; Arch Linux isn’t for the faint of heart. The installation alone will turn off many a new user (and even some seasoned users). That’s where distributions like Anarchy make for an easy bridge to Arch. With a live ISO that can be tested and then installed, Arch becomes as user-friendly as any other distribution.

Anarchy Linux goes a little bit further than that, however. Let’s fire it up and see what it does.

### The installation

The installation of Anarchy Linux isn’t terribly challenging, but it’s also not quite as simple as for, say, [Ubuntu][2], [Linux Mint][3], or [Elementary OS][4]. Although you can run the installer from within the default graphical desktop environment (Xfce4), it’s still much in the same vein as Arch Linux. In other words, you’re going to have to do a bit of work—all within a text-based installer.

To start, the very first step of the installer (Figure 1) requires you to update the mirror list, which will likely trip up new users.

![Updating the mirror][6]

Figure 1: Updating the mirror list is a necessity for the Anarchy Linux installation.

[Used with permission][7]

From the options, select Download & Rank New Mirrors. Tab down to OK and hit Enter on your keyboard. You can then select the mirror nearest to your location and be done with it. The next few installation screens are simple (keyboard layout, language, timezone, etc.). The next screen should surprise many an Arch fan. Anarchy Linux includes an auto partition tool. Select Auto Partition Drive (Figure 2), tab down to OK, and hit Enter on your keyboard.

![partitioning][9]

Figure 2: Anarchy makes partitioning easy.

[Used with permission][7]

You will then have to select the drive to be used (if you only have one drive, this is only a matter of hitting Enter). Once you’ve selected the drive, choose the filesystem type to be used (ext2/3/4, btrfs, jfs, reiserfs, xfs), tab down to OK, and hit Enter. Next you must choose whether you want to create swap space. If you select Yes, you’ll then have to define how much swap to use. The next window will stop many new users in their tracks. It asks if you want to use GPT (GUID Partition Table). This is different from the traditional MBR (Master Boot Record) partitioning. GPT is a newer standard and works better with UEFI. If you’ll be working with UEFI, go with GPT; otherwise, stick with the old standby, MBR. Finally, select to write the changes to the disk, and your installation can continue.

The next screen that could give new users pause requires the selection of the desired installation. There are five options:

  * Anarchy-Desktop
  * Anarchy-Desktop-LTS
  * Anarchy-Server
  * Anarchy-Server-LTS
  * Anarchy-Advanced

If you want long term support, select Anarchy-Desktop-LTS; otherwise select Anarchy-Desktop (the default), tab down to OK, and hit Enter on your keyboard. After you select the type of installation, you will get to select your desktop. You can select from five options: Budgie, Cinnamon, GNOME, Openbox, and Xfce4.

Once you’ve selected your desktop, give the machine a hostname, set the root password, create a user, and enable sudo for the new user (if applicable). The next section that will raise the eyebrows of new users is the software selection window (Figure 3). You must go through the various sections and select which software packages to install. Don’t worry—if you miss something, you can always install it later.

![software][11]

Figure 3: Selecting the software you want on your system.

[Used with permission][7]

Once you’ve made your software selections, tab to Install (Figure 4), and hit Enter on your keyboard.

![ready to install][13]

Figure 4: Everything is ready to install.

[Used with permission][7]

Once the installation completes, reboot and enjoy Anarchy.

### Post install

I installed two versions of Anarchy—one with Budgie and one with GNOME. Both performed quite well; however, you might be surprised to see that the version of GNOME installed is decked out with a dock. In fact, compared side by side, the two desktops do a good job of resembling one another (Figure 5).

![GNOME and Budgie][15]

Figure 5: GNOME is on the right, Budgie is on the left.

[Used with permission][7]

My guess is that you’ll find all desktop options for Anarchy configured in such a way as to offer a similar look and feel. Of course, the second you click on the bottom-left “buttons”, you’ll see those similarities immediately disappear (Figure 6).

![GNOME and Budgie][17]

Figure 6: The GNOME Dash and the Budgie menu are nothing alike.

[Used with permission][7]

Regardless of which desktop you select, you’ll find everything you need to install new applications. Open up your desktop menu of choice and select Packages to search for and install whatever is necessary for you to get your work done.
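Because Anarchy is built on Arch, package management ultimately goes through Arch's pacman, so you can also work from a terminal if you prefer. A minimal sketch (the package name firefox here is just an example):

```
# Refresh the package databases and upgrade the whole system
sudo pacman -Syu

# Search the repositories for a package by keyword
pacman -Ss firefox

# Install a package
sudo pacman -S firefox

# Remove a package along with its no-longer-needed dependencies
sudo pacman -Rns firefox
```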
### Why use Arch Linux without the “Arch”?

This is a valid question. The answer is simple, but revealing. Some users may opt for a distribution like [Arch Linux][18] because they want the feeling of “elitism” that comes with using, say, [Gentoo][19], without having to go through quite as much hassle. In terms of complexity, Arch rests a notch below Gentoo, which makes it accessible to more users. However, along with that complexity comes a level of dependability that may not be found on other platforms. So if you’re looking for a Linux distribution with high stability that’s not quite as challenging as Gentoo or Arch to install, Anarchy might be exactly what you want. In the end, you’ll wind up with an outstanding desktop platform that’s easy to work with (and maintain), based on a very highly regarded distribution of Linux.

That’s why you might opt for Arch Linux without the Arch.

Anarchy Linux is one of the finest “user-friendly” takes on Arch Linux I’ve ever had the privilege of using. Without a doubt, if you’re looking for a friendlier version of a rather challenging desktop operating system, you cannot go wrong with Anarchy.

Learn more about Linux through the free ["Introduction to Linux"][20] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/learn/intro-to-linux/2018/2/arch-anywhere-dead-long-live-anarchy-linux

作者:[Jack Wallen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/jlwallen
[1]:https://anarchy-linux.org/
[2]:https://www.ubuntu.com/
[3]:https://linuxmint.com/
[4]:https://elementary.io/
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_1.jpg?itok=WgHRqFTf (Updating the mirror)
[7]:https://www.linux.com/licenses/category/used-permission
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_2.jpg?itok=D7HkR97t (partitioning)
[10]:/files/images/anarchyinstall3jpg
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_3.jpg?itok=5-9E2u0S (software)
[12]:/files/images/anarchyinstall4jpg
[13]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_4.jpg?itok=fuSZqtZS (ready to install)
[14]:/files/images/anarchyinstall5jpg
[15]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_5.jpg?itok=4y9kiC8I (GNOME and Budgie)
[16]:/files/images/anarchyinstall6jpg
[17]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/anarchy_install_6.jpg?itok=fJ7Lmdci (GNOME and Budgie)
[18]:https://www.archlinux.org/
[19]:https://www.gentoo.org/
[20]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -0,0 +1,149 @@
How writing can change your career for the better, even if you don't identify as a writer
======
Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it, read a little bit, and then add it to the pile of clutter next to your bed?

Early in the book, Kondo talks about keeping possessions that "spark joy." In this article, I'll examine ways writing about what we and other people are doing in the open source world can "spark joy," or at least how writing can improve your career in unexpected ways.

Because I'm a community manager and editor on Opensource.com, you might be thinking, "She just wants us to [write for Opensource.com][2]." And that is true. But everything I will tell you about why you should write is true, even if you never send a story in to Opensource.com. Writing can change your career for the better, even if you don't identify as a writer. Let me explain.

### How I started writing

Early in the first decade of my career, I transitioned from a customer service-related role at a tech publishing company into an editing role on Sys Admin Magazine. I was plugging along, happily laying low in my career, and then that all changed when I started writing about open source technologies and communities, and the people in them. But I did _not_ start writing voluntarily. The tl;dr of it is that my colleagues at Linux New Media eventually talked me into launching our first blog on the [Linux Pro Magazine][3] site. And as it turns out, it was one of the best career decisions I've ever made. I would not be working on Opensource.com today had I not started writing about what other people in open source were doing all those years ago.

When I first started writing, my goal was to raise awareness of the company I worked for and our publications, while also helping raise the visibility of women in tech. But soon after I started writing, I began seeing unexpected results.

#### My network started growing

When I wrote about a person, an organization, or a project, I got their attention. Suddenly the people I wrote about knew who I was. And because I was sharing knowledge—that is to say, I wasn't being a critic—I'd generally become an ally, and in many cases, a friend. I had a platform and an audience, and I was sharing them with other people in open source.

#### I was learning

In addition to promoting our website and magazine and growing my network, the research and fact-checking I did when writing articles helped me become more knowledgeable in my field and improve my tech chops.

#### I started meeting more people IRL

When I went to conferences, I found that my blog posts helped me meet people. I introduced myself to people I'd written about or learned about during my research, and I met new people to interview. People started knowing who I was because they'd read my articles. Sometimes people were even excited to meet me because I'd highlighted them, their projects, or someone or something they were interested in. I had no idea writing could be so exciting and interesting away from the keyboard.

#### My conference talks improved

I started speaking at events about a year after launching my blog. A few years later, I started writing articles based on my talks prior to speaking at events. The process of writing the articles helps me organize my talks and slides, and it's a great way to provide "notes" for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person.

### What should you write about?

Maybe you're interested in writing, but you struggle with what to write about. You should write about two things: what you know, and what you don't know.

#### Write about what you know

Writing about what you know can be relatively easy. For example, a script you wrote to help automate part of your daily tasks might be something you don't give any thought to, but it could make for a really exciting article for someone who hates doing that same task every day. That could be a relatively quick, short, and easy article for you to write, and you might not even think about writing it. But it could be a great contribution to the open source community.

#### Write about what you don't know

Writing about what you don't know can be much harder and more time consuming, but also much more fulfilling and helpful to your career. I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it.

> "When I write about a technical topic, I usually learn a lot more about it. I want to make sure my article is as good as it can be. So even if I'm writing about something I know well, I'll research the topic a bit more so I can make sure to get everything right." ~Jim Hall, FreeDOS project leader

For example, I wanted to learn about machine learning, and I thought narrowing down the topic would help me get started. My teammate Jason Baker suggested that I write an article on the [Top 3 machine learning libraries for Python][4], which gave me a focus for research.

The process of researching that article inspired another article, [3 cool machine learning projects using TensorFlow and the Raspberry Pi][5]. That article was also one of our most popular last year. I'm not an _expert_ on machine learning now, but researching the topic with writing an article in mind allowed me to give myself a crash course in the topic.

### Why people in tech write

Now let's look at a few benefits of writing that other people in tech have found. I emailed the Opensource.com writers' list and asked, and here's what writers told me.

#### Grow your network or your project community

Xavier Ho wrote for us for the first time last year ("[A programmer's cleaning guide for messy sensor data][6]"). He says: "I've been getting Twitter mentions from all over the world, including Spain, US, Australia, Indonesia, the UK, and other European countries. It shows the article is making some impact... This is the kind of reach I normally don't have. Hope it's really helping someone doing similar work!"

#### Help people

Writing about what other people are working on is a great way to help your fellow community members. Antoine Thomas, who wrote "[Linux helped me grow as a musician][7]", says, "I began to use open source years ago, by reading tutorials and documentation. That's why now I share my tips and tricks, experience or knowledge. It helped me to get started, so I feel that it's my turn to help others to get started too."

#### Give back to the community

[Jim Hall][8], who started the [FreeDOS project][9], says, "I like to write ... because I like to support the open source community by sharing something neat. I don't have time to be a program maintainer anymore, but I still like to do interesting stuff. So when something cool comes along, I like to write about it and share it."

#### Highlight your community

Emilio Velis wrote an article, "[Open hardware groups spread across the globe][10]", about projects in Central and South America. He explains, "I like writing about specific aspects of the open culture that are usually enclosed in my region (Latin America). I feel as if smaller communities and their ideas are hidden from the mainstream, so I think that creating this sense of broadness in participation is what makes some other cultures as valuable."

#### Gain confidence

[Don Watkins][11] is one of our regular writers and a [community moderator][12]. He says, "When I first started writing I thought I was an impostor; later I realized that many people feel that way. Writing and contributing to Opensource.com has been therapeutic, too, as it contributed to my self esteem and helped me to overcome feelings of inadequacy. … Writing has given me a renewed sense of purpose and empowered me to help others to write and/or see the valuable contributions that they too can make if they're willing to look at themselves in a different light. Writing has kept me younger and more open to new ideas."

#### Get feedback

One of our writers described writing as a feedback loop. He said that he started writing as a way to give back to the community, but what he found was that community responses give back to him.

Another writer, [Stuart Keroff][13], says, "Writing for Opensource.com about the program I run at school gave me valuable feedback, encouragement, and support that I would not have had otherwise. Thousands upon thousands of people heard about the Asian Penguins because of the articles I wrote for the website."

#### Exhibit expertise

Writing can help you show that you've got expertise in a subject, and having writing samples on well-known websites can help you move toward better pay at your current job, get a new role at a different organization, or start bringing in writing income.

[Jeff Macharyas][14] explains, "There are several ways I've benefitted from writing for Opensource.com. One is the credibility I can add to my social media sites, resumes, bios, etc., just by saying 'I am a contributing writer to Opensource.com.' … I am hoping that I will be able to line up some freelance writing assignments, using my Opensource.com articles as examples, in the future."

### Where should you publish your articles?

That depends. Why are you writing?

You can always post on your personal blog, but if you don't already have a lot of readers, your article might get lost in the noise online.

Your project or company blog is a good option—again, you'll have to think about who will find it. How big is your company's reach? Or will you only get the attention of people who already give you their attention?

Are you trying to reach a new audience? A bigger audience? That's where sites like Opensource.com can help. We attract more than a million page views a month, and more than 700,000 unique visitors. Plus, you'll work with editors who will polish and help promote your article.

We aren't the only site interested in your story. What are your favorite sites to read? They might want to help you share your story, and it's OK to pitch to multiple publications. Just be transparent about whether your article has been shared on other sites when working with editors. Occasionally, editors can even help you modify articles so that you can publish variations on multiple sites.

#### Do you want to get rich by writing? (Don't count on it.)

If your goal is to make money by writing, pitch your article to publications that have author budgets. There aren't many of them, the budgets don't tend to be huge, and you will be competing with experienced professional tech journalists who write seven days a week, 365 days a year, with large social media followings and networks. I'm not saying it can't be done—I've done it—but I am saying don't expect it to be easy or lucrative. It's not. (And frankly, I've found that nothing kills my desire to write quite like having to write if I want to eat...)

A couple of people have asked me whether Opensource.com pays for content, or whether I'm asking someone to write "for exposure." Opensource.com does not have an author budget, but I won't tell you to write "for exposure," either. You should write because it meets a need.

If you already have a platform that meets your needs, and you don't need editing or social media and syndication help: Congratulations! You are privileged.

### Spark joy!

Most people don't know they have a story to tell, so I'm here to tell you that you probably do, and my team can help, if you just submit a proposal.

Most people—myself included—could use help from other people. Sites like Opensource.com offer one way to get editing and social media services at no cost to the writer, which can be hugely valuable to someone starting out in their career, someone who isn't a native English speaker, someone who wants help with their project or organization, and so on.

If you don't already write, I hope this article helps encourage you to get started. Or, maybe you already write. In that case, I hope this article makes you think about friends, colleagues, or people in your network who have great stories and experiences to share. I'd love to help you help them get started.

I'll conclude with feedback I got from a recent writer, [Mario Corchero][15], a Senior Software Developer at Bloomberg. He says, "I wrote for Opensource because you told me to :)" (For the record, I "invited" him to write for our [PyCon speaker series][16] last year.) He added, "And I am extremely happy about it—not only did it help me at my workplace by gaining visibility, but I absolutely loved it! The article appeared in multiple email chains about Python and was really well received, so I am now looking to publish the second :)" Then he [wrote for us][17] again.

I hope you find writing to be as fulfilling as we do.

You can connect with Opensource.com editors, community moderators, and writers in our Freenode [IRC][18] channel #opensource.com, and you can reach me and the Opensource.com team by email at [open@opensource.com][19].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/career-changing-magic-writing

作者:[Rikki Endsley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rikki-endsley
[1]:http://tidyingup.com/books/the-life-changing-magic-of-tidying-up-hc
[2]:https://opensource.com/how-submit-article
[3]:http://linuxpromagazine.com/
[4]:https://opensource.com/article/17/2/3-top-machine-learning-libraries-python
[5]:https://opensource.com/article/17/2/machine-learning-projects-tensorflow-raspberry-pi
[6]:https://opensource.com/article/17/9/messy-sensor-data
[7]:https://opensource.com/life/16/9/my-linux-story-musician
[8]:https://opensource.com/users/jim-hall
[9]:http://www.freedos.org/
[10]:https://opensource.com/article/17/6/open-hardware-latin-america
[11]:https://opensource.com/users/don-watkins
[12]:https://opensource.com/community-moderator-program
[13]:https://opensource.com/education/15/3/asian-penguins-Linux-middle-school-club
[14]:https://opensource.com/users/jeffmacharyas
[15]:https://opensource.com/article/17/5/understanding-datetime-python-primer
[16]:https://opensource.com/tags/pycon
[17]:https://opensource.com/article/17/9/python-logging
[18]:https://opensource.com/article/16/6/getting-started-irc
[19]:mailto:open@opensource.com
@ -0,0 +1,47 @@
Why an involved user community makes for better software
======


Imagine releasing a major new infrastructure service based on open source software, only to discover that the product you deployed had evolved so quickly that the documentation for the version you released is no longer available. At Bloomberg, we experienced this problem firsthand in our deployment of OpenStack. In late 2016, we spent six months testing and rolling out [Liberty][1] on our OpenStack environment. By that time, Liberty was about a year old, or two versions behind the latest build.

As our users started taking advantage of its new functionality, we found ourselves unable to solve a few tricky problems and to answer some detailed questions about its API. When we went looking for Liberty's documentation, it was nowhere to be found on the OpenStack website. Liberty, it turned out, had been labeled "end of life" and was no longer supported by the OpenStack developer community.

The disappearance wasn't intentional; rather, it was the result of a development community that had not anticipated the real-world needs of users. The documentation was stored in the source branch along with the source code, and, as Liberty was superseded by newer versions, it had been deleted. Worse, in the intervening months, the documentation for the newer versions had been completely restructured, and there was no way to easily rebuild it in a useful form. And believe me, we tried.

After consulting other users and our vendor, we found that OpenStack's development cadence of two releases per year had created some unintended, yet deeply frustrating, consequences. Older releases that were typically still widely in use were being superseded and effectively killed for the purposes of support.

Eventually, conversations took place between OpenStack users and developers that resulted in changes. Documentation was moved out of the source branch, and users can now build documentation for whatever version they're using—more or less indefinitely. The problem was solved. (I'm especially indebted to my colleague [Chris Morgan][2], who was knee-deep in this effort and first wrote about it in detail for the [OpenStack Superuser blog][3].)

Many other enterprise users were in the same boat as Bloomberg—running older versions of OpenStack that were three or four versions behind the latest build. There's a good reason for that: On average it takes a reasonably large enterprise about six months to qualify, test, and deploy a new version of OpenStack. And, from my experience, this is generally true of most open source infrastructure projects.

For most of the past decade, companies like Bloomberg that adopted open source software relied on distribution vendors to incorporate, test, verify, and support much of it. These vendors provide long-term support (LTS) releases, which enable enterprise users to plan for upgrades on a two- or three-year cycle, knowing they'll still have support for a year or two, even if their deployment schedule slips a bit (as they often do). In the past few years, though, infrastructure software has advanced so rapidly that even the distribution vendors struggle to keep up. And customers of those vendors are yet another step removed, so many are choosing to deploy this type of software without vendor support.

Losing vendor support also usually means there are no LTS releases; OpenStack, Kubernetes, Prometheus, and many more do not yet provide LTS releases of their own. As a result, I'd argue that healthy interaction between the development and user community should be high on the list of considerations for adoption of any open source infrastructure. Do the developers building the software pay attention to the needs—and frustrations—of the people who deploy it and make it useful for their enterprise?

There is a solid model for how this should happen. We recently joined the [Cloud Native Computing Foundation][4], part of The Linux Foundation. It has a formal [end-user community][5], whose members include organizations just like us: enterprises that are trying to make open source software useful to their internal customers. Corporate members also get a chance to have their voices heard as they vote to select a representative to serve on the CNCF [Technical Oversight Committee][6]. Similarly, in the OpenStack community, Bloomberg is involved in the semi-annual Operators Meetups, where companies who deploy and support OpenStack for their own users get together to discuss their challenges and provide guidance to the OpenStack developer community.

The past few years have been great for open source infrastructure. If you're working for a large enterprise, the opportunity to deploy open source projects like the ones mentioned above has made your company more productive and more agile.

As large companies like ours begin to consume more open source software to meet their infrastructure needs, they're going to be looking at a long list of considerations before deciding what to use: license compatibility, out-of-pocket costs, and the health of the development community are just a few examples. As a result of our experiences, we'll add the presence of a vibrant and engaged end-user community to the list.

Increased reliance on open source infrastructure projects has also highlighted a key problem: People in the development community have little experience deploying the software they work on into production environments or supporting the people who use it to get things done on a daily basis. The fast pace of updates to these projects has created some unexpected problems for the people who deploy and use them. There are numerous examples I can cite where open source projects are updated so frequently that new versions will, usually unintentionally, break backwards compatibility.

As open source increasingly becomes foundational to the operation of so many enterprises, this cannot be allowed to happen, and members of the user community should assert themselves accordingly and press for the creation of formal representation. In the end, the software can only be better.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/important-conversation

作者:[Kevin P. Fleming][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/kpfleming
[1]:https://releases.openstack.org/liberty/
[2]:https://www.linkedin.com/in/mihalis68/
[3]:http://superuser.openstack.org/articles/openstack-at-bloomberg/
[4]:https://www.cncf.io/
[5]:https://www.cncf.io/people/end-user-community/
[6]:https://www.cncf.io/people/technical-oversight-committee/
@ -1,160 +0,0 @@
Linux Find Out Last System Reboot Time and Date Command
======
So, how do you find out when your Linux or UNIX-like system was last rebooted? How do you display the system shutdown date and time? The last utility will either list the sessions of specified users, ttys, and hosts, in reverse time order, or list the users logged in at a specified date and time. Each line of output contains the user name, the tty from which the session was conducted, any hostname, the start and stop times for the session, and the duration of the session. To view the Linux or Unix system reboot and shutdown date and time stamp, use the following commands:

  * last command
  * who command

### Use who command to find last system reboot time/date

You need to use the [who command][1] to print who is logged on. It also displays the time of the last system boot:
`$ who -b`
Sample outputs:
```
system boot 2017-06-20 17:41
```

Use the last command to display a listing of last logged in users and the system's last reboot time and date, enter:
`$ last reboot | less`
Sample outputs:
[![Fig.01: last command in action][2]][2]
Or, better, try:
`$ last reboot | head -1`
Sample outputs:
```
reboot system boot 4.9.0-3-amd64 Sat Jul 15 19:19 still running
```

The last command searches back through the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created. The pseudo user reboot logs in each time the system is rebooted. Thus the last reboot command will show a log of all reboots since the log file was created.
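If you're curious what last is actually reading, you can dump the binary wtmp file yourself. A minimal sketch using the utmpdump tool from util-linux (availability and log path may vary by system):

```
# Print the last few raw records from the binary wtmp log
# (reboot entries show up under the pseudo user "reboot")
utmpdump /var/log/wtmp | tail -5

# Narrow the dump down to boot records only
utmpdump /var/log/wtmp | grep -i boot
```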
### Finding the system's last shutdown date and time

To display the last shutdown date and time, use the following command:
`$ last -x | grep shutdown | head -1`
Sample outputs:
```
shutdown system down 2.6.15.4 Sun Apr 30 13:31 - 15:08 (01:37)
```

Where,

  * **-x** : Display the system shutdown entries and run level changes.

Here is another session from my last command:
```
$ last
$ last -x
$ last -x reboot
$ last -x shutdown
```
Sample outputs:
![Fig.01: How to view last Linux System Reboot Date/Time ][3]

### Find out Linux system up since…

Another option, as suggested by readers in the comments section below, is to run the following command:
`$ uptime -s`
Sample outputs:
```
2017-06-20 17:41:51
```
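If you prefer a human-readable summary over a raw timestamp, recent procps-ng versions of uptime can print one as well (a quick sketch; the -p flag may not exist on older systems):

```
# Show how long the system has been up in a human-readable form
uptime -p

# Classic one-line summary: current time, uptime, users, load averages
uptime
```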
### OS X/Unix/FreeBSD find out last reboot and shutdown time command examples

Type the following command:
`$ last reboot`
Sample outputs from OS X:
```
reboot ~ Fri Dec 18 23:58
reboot ~ Mon Dec 14 09:54
reboot ~ Wed Dec 9 23:21
reboot ~ Tue Nov 17 21:52
reboot ~ Tue Nov 17 06:01
reboot ~ Wed Nov 11 12:14
reboot ~ Sat Oct 31 13:40
reboot ~ Wed Oct 28 15:56
reboot ~ Wed Oct 28 11:35
reboot ~ Tue Oct 27 00:00
reboot ~ Sun Oct 18 17:28
reboot ~ Sun Oct 18 17:11
reboot ~ Mon Oct 5 09:35
reboot ~ Sat Oct 3 18:57

wtmp begins Sat Oct 3 18:57
```

To see shutdown date and time, enter:
`$ last shutdown`
Sample outputs:
```
shutdown ~ Fri Dec 18 23:57
shutdown ~ Mon Dec 14 09:53
shutdown ~ Wed Dec 9 23:20
shutdown ~ Tue Nov 17 14:24
shutdown ~ Mon Nov 16 21:15
shutdown ~ Tue Nov 10 13:15
shutdown ~ Sat Oct 31 13:40
shutdown ~ Wed Oct 28 03:10
shutdown ~ Sun Oct 18 17:27
shutdown ~ Mon Oct 5 09:23

wtmp begins Sat Oct 3 18:57
```

### How do I find who rebooted/shut down the Linux box?

You need [to enable the psacct service and run the following command to see info][4] about executed commands, including the user name. Type the following [lastcomm command][5] to see who rebooted or shut down the box:
```
# lastcomm userNameHere
# lastcomm commandNameHere
# lastcomm | more
# lastcomm reboot
# lastcomm shutdown
### OR see both reboot and shutdown time
# lastcomm | egrep 'reboot|shutdown'
```
Sample outputs:
```
reboot S X root pts/0 0.00 secs Sun Dec 27 23:49
shutdown S root pts/1 0.00 secs Sun Dec 27 23:45
```

So the root user rebooted the box from 'pts/0' on Sun, Dec 27th at 23:49 local time.

### See also

  * For more information read last(1) and [learn how to use the tuptime command on Linux server to see the historical and statistical uptime][6].

### About the author

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/linux-last-reboot-time-and-date-find-out.html

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/unix-linux-who-command-examples-syntax-usage/ (See Linux/Unix who command examples for more info)
[2]:https://www.cyberciti.biz/tips/wp-content/uploads/2006/04/last-reboot.jpg
[3]:https://www.cyberciti.biz/media/new/tips/2006/04/check-last-time-system-was-rebooted.jpg
[4]:https://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html
[5]:https://www.cyberciti.biz/faq/linux-unix-lastcomm-command-examples-usage-syntax/ (See Linux/Unix lastcomm command examples for more info)
[6]:https://www.cyberciti.biz/hardware/howto-see-historical-statistical-uptime-on-linux-server/
[7]:https://twitter.com/nixcraft
[8]:https://facebook.com/nixcraft
[9]:https://plus.google.com/+CybercitiBiz
@ -1,89 +0,0 @@
translating---geekpi

How To Turn On/Off Colors For ls Command In Bash On a Linux/Unix
======

How do I turn on or off file name colors (ls command colors) in the bash shell on Linux or Unix-like operating systems?

Most modern Linux distributions and Unix systems come with an alias that defines colors for your files. However, the ls command is responsible for displaying color on screen for files, directories, and other file system objects.

By default, color is not used to distinguish types of files. You need to pass the --color option to the ls command on Linux. If you are using OS X or a BSD-based system, pass the -G option to the ls command. The syntax to turn colors on or off is as follows.

#### How to turn off colors for ls command

Type the following command:
`$ ls --color=none`
Or just remove the alias with the unalias command:
`$ unalias ls`
Please note that the following bash shell aliases are defined to display color with the ls command. Use a combination of the [alias command][1] and [grep command][2] as follows:
`$ alias | grep ls`
Sample outputs:
```
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
```

#### How to turn on colors for ls command

Use any one of the following commands:
```
$ ls --color=auto
$ ls --color=tty
```
[Define a bash shell alias][3] if you want:
`alias ls='ls --color=auto'`
You can add or remove the ls command alias in the ~/.bash_profile or [~/.bashrc file][4]. Edit the file using a text editor such as the vi command:
`$ vi ~/.bashrc`
Append the following code (note that bash does not allow spaces around the = in an alias definition):
```
# my ls command aliases #
alias ls='ls --color=auto'
```

[Save and close the file in Vi/Vim text editor][5].
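The colors themselves come from the LS_COLORS environment variable, which GNU coreutils can generate for you with the dircolors command. A minimal sketch, assuming GNU coreutils (the ~/.dircolors path is just an example):

```
# Load the default color database into LS_COLORS for this shell
eval "$(dircolors -b)"

# Dump the defaults to a file you can edit, then load your customized copy
dircolors -p > ~/.dircolors
eval "$(dircolors -b ~/.dircolors)"
```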
#### A note about *BSD/macOS/Apple OS X ls command

Pass the -G option to the ls command to enable colorized output on the {Free,Net,Open}BSD or macOS and Apple OS X Unix family of operating systems:
`$ ls -G`
Sample outputs:
[![How to enable colorized output for the ls command in Mac OS X Terminal][6]][7]
How to enable colorized output for the ls command in Mac OS X Terminal

#### How do I skip colorful ls command output temporarily?

You can always [disable bash shell aliases temporarily][8] using any one of the following syntaxes:
```
\ls
/bin/ls
command ls
'ls'
```

#### About the author

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/faq/how-to-turn-on-or-off-colors-in-bash/

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html (See Linux/Unix alias command examples for more info)
[2]:https://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/ (See Linux/Unix grep command examples for more info)
[3]:https://www.cyberciti.biz/tips/bash-aliases-mac-centos-linux-unix.html
[4]:https://bash.cyberciti.biz/guide/~/.bashrc
[5]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
[6]:https://www.cyberciti.biz/media/new/faq/2016/01/color-ls-for-Mac-OS-X.jpg
[7]:https://www.cyberciti.biz/faq/apple-mac-osx-terminal-color-ls-output-option/
[8]:https://www.cyberciti.biz/faq/bash-shell-temporarily-disable-an-alias/
[9]:https://twitter.com/nixcraft
[10]:https://facebook.com/nixcraft
[11]:https://plus.google.com/+CybercitiBiz
@ -0,0 +1,84 @@
translating---geekpi

How to use lftp to accelerate ftp/https download speed on Linux/UNIX
======
lftp is a file transfer program. It allows sophisticated FTP, HTTP/HTTPS, and other connections. If a site URL is specified, then lftp will connect to that site; otherwise a connection has to be established with the open command. It is an essential tool for all Linux/Unix command-line users. I have already written about [Linux ultra fast command line download accelerators][1] such as Axel and prozilla. lftp is another tool for the same job, with more features. lftp can handle eight file access methods:

  1. ftp
  2. ftps
  3. http
  4. https
  5. hftp
  6. fish
  7. sftp
  8. file

### So what is unique about lftp?

  * Every operation in lftp is reliable; that is, any non-fatal error is ignored and the operation is repeated. So if downloading breaks, it will be restarted from that point automatically. Even if the FTP server does not support the REST command, lftp will try to retrieve the file from the very beginning until the file is transferred completely.
  * lftp has shell-like command syntax allowing you to launch several commands in parallel in the background.
  * lftp has a builtin mirror which can download or update a whole directory tree. There is also a reverse mirror (mirror -R) which uploads or updates a directory tree on the server. The mirror can also synchronize directories between two remote servers, using FXP if available. (See the sketch after this list.)
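For instance, here is a minimal sketch of the mirror feature; the host and paths are placeholders, not real servers, and the --parallel option requires a reasonably recent lftp:

```
# Mirror a remote directory tree to a local one, resuming partial files (-c)
# and fetching several files at once (--parallel=4)
lftp -e 'mirror -c --parallel=4 /pub/some/dir /local/dir; exit' ftp.example.com

# Reverse mirror: upload a local tree to the server
lftp -e 'mirror -R /local/dir /pub/some/dir; exit' ftp.example.com
```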
### How to use lftp as download accelerator

lftp has a pget command that allows you to download files in parallel. The syntax is:
`lftp -e 'pget -n NUM -c url; exit'`
For example, download the <http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2> file using pget in 5 parts:
```
$ cd /tmp
$ lftp -e 'pget -n 5 -c http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.2.tar.bz2'
```
Sample outputs:
```
45108964 bytes transferred in 57 seconds (775.3K/s)
lftp :~>quit
```

Where,

  1. pget – Download files in parallel
  2. -n 5 – Set maximum number of connections to 5
  3. -c – Continue broken transfer if lfile.lftp-pget-status exists in the current directory

### How to use lftp to accelerate ftp/https download on Linux/Unix

Here is another try, with the exit command added:
`$ lftp -e 'pget -n 10 -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.15.tar.xz; exit'`

[Linux lftp command demo](https://www.cyberciti.biz/tips/wp-content/uploads/2007/08/Linux-lftp-command-demo.mp4)

### A note about parallel downloading

Please note that by using a download accelerator you are going to put a load on the remote host. Also note that lftp may not work with sites that do not support multi-source downloads or that block such requests at the firewall level.

The lftp command offers many other features. Refer to the [lftp][2] man page for more information:
`man lftp`

### About the author

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][3], [Facebook][4], [Google+][5]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via [my RSS/XML feed][6]**.

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/linux-unix-download-accelerator.html

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/tips/download-accelerator-for-linux-command-line-tools.html
[2]:https://lftp.yar.ru/
[3]:https://twitter.com/nixcraft
[4]:https://facebook.com/nixcraft
[5]:https://plus.google.com/+CybercitiBiz
[6]:https://www.cyberciti.biz/atom/atom.xml
@ -1,134 +0,0 @@
Linux Check IDE / SATA SSD Hard Disk Transfer Speed
======
So how do you find out how fast your hard disk is under Linux? Is it running at SATA I (150 MB/s), SATA II (300 MB/s), or SATA III (600 MB/s) speed, without opening the computer case or chassis?

You can use the **hdparm or dd command** to check hard disk speed. hdparm provides a command line interface to various hard disk ioctls supported by the stock Linux ATA/IDE/SATA device driver subsystem. Some options may work correctly only with the latest kernels (make sure you have a cutting-edge kernel installed). I also recommend compiling hdparm with the included files from the most recent kernel source code.

### How to measure hard disk data transfer speed using hdparm

Log in as the root user (or use sudo) and enter the following command:
`$ sudo hdparm -tT /dev/sda`
OR
`$ sudo hdparm -tT /dev/hda`
Sample outputs:
```
/dev/sda:
Timing cached reads: 7864 MB in 2.00 seconds = 3935.41 MB/sec
Timing buffered disk reads: 204 MB in 3.00 seconds = 67.98 MB/sec
```

For meaningful results, this operation should be **repeated 2-3 times**. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the **throughput of the processor, cache, and memory** of the system under test. [Here is a for loop example][1] to run the test 3 times in a row:
`for i in 1 2 3; do hdparm -tT /dev/hda; done`
Where,

  * **-t** : perform device read timings
  * **-T** : perform cache read timings
  * **/dev/sda** : Hard disk device file

To [find out the SATA hard disk link speed][2], enter:
`sudo hdparm -I /dev/sda | grep -i speed`
Output:
```
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Gen3 signaling speed (6.0Gb/s)
```

The above output indicates that my hard disk can use 1.5Gb/s, 3.0Gb/s, or 6.0Gb/s speed. Please note that your BIOS / motherboard must have support for SATA-II/III:
`$ dmesg | grep -i sata | grep 'link up'`
[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]

### dd Command

You can use the dd command as follows to get write speed info too:
```
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
rm /tmp/output.img
```

Sample outputs:
```
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, 90.8 MB/s
```

The [recommended syntax for the dd command is as follows][4]:
```
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync

## GNU dd syntax ##
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync

## OR alternate syntax for GNU/dd ##
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
```

Sample outputs from the last dd command:
```
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
```
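The dd runs above measure write speed. To measure read speed the same way, you can read the test file back after flushing the page cache, so the data actually comes from the disk. A minimal sketch, assuming /tmp/test1.img was created by the command above:

```
# Flush filesystem buffers and drop the page cache so reads hit the disk
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# Read the test file back and let dd report the transfer rate
dd if=/tmp/test1.img of=/dev/null bs=1M

# Clean up the test file when done
rm /tmp/test1.img
```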
### Disks & storage - GUI tool
|
||||
|
||||
You can also use disk utility located at System > Administration > Disk utility menu. Please note that in latest version of Gnome it is simply called Disks.
|
||||
|
||||
#### How do I test the performance of my hard disk using Disks on Linux?
|
||||
|
||||
To test the speed of your hard disk:
|
||||
|
||||
1. Open **Disks** from the **Activities** overview (press the Super key on your keyboard and type Disks)
|
||||
2. Choose the **disk** from the list in the **left pane**
|
||||
3. Select the menu button and select **Benchmark disk …** from the menu
|
||||
4. Click **Start Benchmark …** and adjust the Transfer Rate and Access Time parameters as desired.
|
||||
5. Choose **Start Benchmarking** to test how fast data can be read from the disk. Administrative privileges required. Enter your password
|
||||
|
||||
|
||||
|
||||
A quick video demo of above procedure:
|
||||
|
||||
https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4
|
||||
|
||||
|
||||
#### Read Only Benchmark (Safe option)
|
||||
|
||||
Then select the read-only option:
|
||||
![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]
|
||||
The above option will not destroy any data.
|
||||
|
||||
#### Read and Write Benchmark (All data will be lost so be careful)
|
||||
|
||||
Go to System > Administration > Disk Utility, click Benchmark, then click the Start Read/Write Benchmark button:
|
||||
![Fig.02:Linux Measuring read rate, write rate and access time][6]
|
||||
|
||||
### About the author
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
|
||||
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
|
||||
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
|
||||
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
|
||||
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
|
||||
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
|
||||
[7]:https://twitter.com/nixcraft
|
||||
[8]:https://facebook.com/nixcraft
|
||||
[9]:https://plus.google.com/+CybercitiBiz
|
@ -0,0 +1,141 @@
|
||||
How to use yum-cron to automatically update RHEL/CentOS Linux
|
||||
======
|
||||
The yum command line tool is used to install and update software packages under RHEL / CentOS Linux server. I know how to apply updates using the [yum update command line][1], but I would like cron to apply package updates automatically where appropriate. How do I configure yum to install software patches/updates [automatically with cron][2]?
|
||||
|
||||
You need to install the yum-cron package. It provides the files needed to run yum updates as a cron job. Install this package if you want automatic nightly yum updates via cron.
|
||||
|
||||
### How to install yum cron on a CentOS/RHEL 6.x/7.x
|
||||
|
||||
Type the following [yum command][3]:
|
||||
`$ sudo yum install yum-cron`
|
||||

|
||||
|
||||
Turn on service using systemctl command on **CentOS/RHEL 7.x** :
|
||||
```
|
||||
$ sudo systemctl enable yum-cron.service
|
||||
$ sudo systemctl start yum-cron.service
|
||||
$ sudo systemctl status yum-cron.service
|
||||
```
|
||||
If you are using **CentOS/RHEL 6.x** , run:
|
||||
```
|
||||
$ sudo chkconfig yum-cron on
|
||||
$ sudo service yum-cron start
|
||||
```
|
||||

|
||||
|
||||
yum-cron is an alternate interface to yum and a very convenient way to call yum from cron. It provides methods to keep repository metadata up to date, and to check for, download, and apply updates. Rather than accepting many different command line arguments, the different functions of yum-cron are accessed through config files.
|
||||
|
||||
### How to configure yum-cron to automatically update RHEL/CentOS Linux
|
||||
|
||||
You need to edit the /etc/yum/yum-cron.conf and /etc/yum/yum-cron-hourly.conf files using a text editor such as vi:
|
||||
`$ sudo vi /etc/yum/yum-cron.conf`
|
||||
Make sure updates are applied when they become available:
|
||||
`apply_updates = yes`
|
||||
You can set the address to send email messages from. Note that ‘localhost’ will be replaced with the value of system_name:
|
||||
`email_from = root@localhost`
|
||||
Set the list of addresses to send messages to:
|
||||
`email_to = your-it-support@some-domain-name`
|
||||
Set the name of the host to connect to when sending email messages:
|
||||
`email_host = localhost`
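
Taken together, the relevant fragment of /etc/yum/yum-cron.conf might look like this (a sketch: the section names follow the stock CentOS 7 file, and the addresses are placeholders):

```
[commands]
# Apply updates rather than just downloading them.
apply_updates = yes

[email]
email_from = root@localhost
email_to = your-it-support@some-domain-name
email_host = localhost
```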
|
||||
If you [do not want to update the kernel package, add the following on CentOS/RHEL 7.x][4]:
|
||||
`exclude=kernel*`
|
||||
For RHEL/CentOS 6.x add [the following to exclude kernel package from updating][5]:
|
||||
`YUM_PARAMETER=kernel*`
|
||||
[Save and close the file in vi/vim][6]. You also need to update the /etc/yum/yum-cron-hourly.conf file if you want updates applied hourly. Otherwise /etc/yum/yum-cron.conf is used daily via the following cron job (use the [cat command][7] to view it):
|
||||
`$ cat /etc/cron.daily/0yum-daily.cron`
|
||||
Sample outputs:
|
||||
```
|
||||
#!/bin/bash

# Only run if this flag is set. The flag is created by the yum-cron init
# script when the service is started -- this allows one to use chkconfig and
# the standard "service stop|start" commands to enable or disable yum-cron.
if [[ ! -f /var/lock/subsys/yum-cron ]]; then
  exit 0
fi

# Action!
exec /usr/sbin/yum-cron
|
||||
```
|
||||
|
||||
That is all. Your system will now update automatically every day using yum-cron. See the yum-cron man page for more details:
|
||||
`$ man yum-cron`
|
||||
|
||||
### Method 2 – Use shell scripts
|
||||
|
||||
**Warning**: The following method is outdated. Do not use it on RHEL/CentOS 6.x/7.x. I have kept it below for historical reasons only, from when I used it on CentOS/RHEL 4.x/5.x.
|
||||
|
||||
Let us see how to configure CentOS/RHEL for automatic yum update retrieval and installation of security packages. You can use the yum-updatesd service provided with CentOS / RHEL servers. However, this service adds some overhead. Instead, you can apply daily or weekly updates with the following shell script. Create:
|
||||
|
||||
* **/etc/cron.daily/yumupdate.sh** to apply updates once a day.
|
||||
* **/etc/cron.weekly/yumupdate.sh** to apply updates once a week.
|
||||
|
||||
|
||||
|
||||
#### Sample shell script to update system
|
||||
|
||||
A shell script that instructs yum to update any packages it finds via [cron][8]:
|
||||
```
|
||||
#!/bin/bash
YUM=/usr/bin/yum
# Update the yum package itself first (wait a random interval of up to
# 120 minutes, quiet output, report only critical errors).
$YUM -y -R 120 -d 0 -e 0 update yum
# Then apply all remaining package updates.
$YUM -y -R 10 -e 0 -d 0 update
|
||||
```
|
||||
|
||||
(Code listing -01: /etc/cron.daily/yumupdate.sh)
|
||||
|
||||
Where,
|
||||
|
||||
1. The first command updates yum itself; the second applies system updates.
|
||||
2. **-R 120** : Sets the maximum amount of time yum will wait before performing a command
|
||||
3. **-e 0** : Sets the error level to 0 (range 0 – 10). 0 means print only critical errors about which you must be told.
|
||||
4. **-d 0** : Sets the debugging level to 0, which controls how much output is printed (range: 0 – 10).
|
||||
5. **-y** : Assume yes; assume that the answer to any question which would be asked is yes.
|
||||
|
||||
|
||||
|
||||
Make sure you set the executable permission:
|
||||
`# chmod +x /etc/cron.daily/yumupdate.sh`
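
Before relying on cron, it does not hurt to run the script once by hand and confirm that it exits cleanly (a quick sanity check):

```
# Run the update script manually; prints OK when it finishes without error.
sudo /etc/cron.daily/yumupdate.sh && echo OK
```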
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
|
||||
|
||||
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][9], [Facebook][10], [Google+][11]. Get the **latest tutorials on SysAdmin, Linux/Unix and open source topics via[my RSS/XML feed][12]**.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.cyberciti.biz/faq/fedora-automatic-update-retrieval-installation-with-cron/
|
||||
|
||||
作者:[Vivek Gite][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.cyberciti.biz/
|
||||
[1]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
|
||||
[2]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
|
||||
[3]:https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/ (See Linux/Unix yum command examples for more info)
|
||||
[4]:https://www.cyberciti.biz/faq/yum-update-except-kernel-package-command/
|
||||
[5]:https://www.cyberciti.biz/faq/redhat-centos-linux-yum-update-exclude-packages/
|
||||
[6]:https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/
|
||||
[7]:https://www.cyberciti.biz/faq/linux-unix-appleosx-bsd-cat-command-examples/ (See Linux/Unix cat command examples for more info)
|
||||
[8]:https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses
|
||||
[9]:https://twitter.com/nixcraft
|
||||
[10]:https://facebook.com/nixcraft
|
||||
[11]:https://plus.google.com/+CybercitiBiz
|
||||
[12]:https://www.cyberciti.biz/atom/atom.xml
|
@ -1,4 +1,4 @@
|
||||
6 Best Open Source Alternatives to Microsoft Office for Linux
|
||||
|
||||
======
|
||||
**Brief: Looking for Microsoft Office in Linux? Here are the best free and open source alternatives to Microsoft Office for Linux.**
|
||||
|
||||
|
@ -0,0 +1,94 @@
|
||||
How To Safely Generate A Random Number
|
||||
======
|
||||
### Use urandom
|
||||
|
||||
Use [urandom][1]. Use [urandom][2]. Use [urandom][3]. Use [urandom][4]. Use [urandom][5]. Use [urandom][6].
|
||||
|
||||
### But what about for crypto keys?
|
||||
|
||||
Still [urandom][6].
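
For instance, generating a 256-bit key on the command line is a one-line pipeline (a minimal sketch using standard tools; the byte count is up to you):

```
# Read 32 bytes (256 bits) from the kernel CSPRNG and print them base64-encoded.
head -c 32 /dev/urandom | base64
```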
|
||||
|
||||
### Why not {SecureRandom, OpenSSL, haveged, &c}?
|
||||
|
||||
These are userspace CSPRNGs. You want to use the kernel’s CSPRNG, because:
|
||||
|
||||
* The kernel has access to raw device entropy.
|
||||
|
||||
* It can promise not to share the same state between applications.
|
||||
|
||||
* A good kernel CSPRNG, like FreeBSD’s, can also promise not to feed you random data before it’s seeded.
|
||||
|
||||
|
||||
|
||||
|
||||
Study the last ten years of randomness failures and you’ll read a litany of userspace randomness failures. [Debian’s OpenSSH debacle][7]? Userspace random. Android Bitcoin wallets [repeating ECDSA k’s][8]? Userspace random. Gambling sites with predictable shuffles? Userspace random.
|
||||
|
||||
Userspace OpenSSL also seeds itself from “uninitialized memory, magical fairy dust and unicorn horns”-style sources, and userspace generators almost always depend on the kernel’s generator anyways. Even if they don’t, the security of your whole system sure does. **A userspace CSPRNG doesn’t add defense-in-depth; instead, it creates two single points of failure.**
|
||||
|
||||
### Doesn’t the man page say to use /dev/random?
|
||||
|
||||
You should ignore the man page. (But more on this later; stay your pitchforks.) Don’t use /dev/random. The distinction between /dev/random and /dev/urandom is a Unix design wart. The man page doesn’t want to admit that, so it invents a security concern that doesn’t really exist. Consider the cryptographic advice in random(4) an urban legend and get on with your life.
|
||||
|
||||
### But what if I need real random values, not pseudorandom values?
|
||||
|
||||
Both urandom and /dev/random provide the same kind of randomness. Contrary to popular belief, /dev/random doesn’t provide “true random” data. For cryptography, you don’t usually want “true random”.
|
||||
|
||||
Both urandom and /dev/random are based on a simple idea. Their design is closely related to that of a stream cipher: a small secret is stretched into an indefinite stream of unpredictable values. Here the secrets are “entropy”, and the stream is “output”.
|
||||
|
||||
Only on Linux are /dev/random and urandom still meaningfully different. The Linux kernel CSPRNG rekeys itself regularly (by collecting more entropy). But /dev/random also tries to keep track of how much entropy remains in its kernel pool, and will occasionally go on strike if it decides not enough remains. This design is as silly as I’ve made it sound; it’s akin to AES-CTR blocking based on how much “key” is left in the “keystream”.
|
||||
|
||||
If you use /dev/random instead of urandom, your program will unpredictably (or, if you’re an attacker, very predictably) hang when Linux gets confused about how its own RNG works. Using /dev/random will make your programs less stable, but it won’t make them any more cryptographically safe.
|
||||
|
||||
### There’s a catch here, isn’t there?
|
||||
|
||||
No, but there’s a Linux kernel bug you might want to know about, even though it doesn’t change which RNG you should use.
|
||||
|
||||
On Linux, if your software runs immediately at boot, and/or the OS has just been installed, your code might be in a race with the RNG. That’s bad, because if you win the race, there could be a window of time where you get predictable outputs from urandom. This is a bug in Linux, and you need to know about it if you’re building platform-level code for a Linux embedded device.
|
||||
|
||||
This is indeed a problem with urandom (and not /dev/random) on Linux. It’s also a [bug in the Linux kernel][9]. But it’s also easily fixed in userland: at boot, seed urandom explicitly. Most Linux distributions have done this for a long time. But don’t switch to a different CSPRNG.
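
Conceptually, the fix distros ship looks like this (a sketch; the exact seed file path varies by distro and is an assumption here):

```
# At shutdown: save some CSPRNG output as next boot's seed.
dd if=/dev/urandom of=/var/lib/random-seed bs=512 count=1
# At boot: mix the saved seed back into the kernel pool.
cat /var/lib/random-seed > /dev/urandom
```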
|
||||
|
||||
### What about on other operating systems?
|
||||
|
||||
FreeBSD and OS X do away with the distinction between urandom and /dev/random; the two devices behave identically. Unfortunately, the man page does a poor job of explaining why this is, and perpetuates the myth that Linux urandom is scary.
|
||||
|
||||
FreeBSD’s kernel crypto RNG doesn’t block regardless of whether you use /dev/random or urandom. Unless it hasn’t been seeded, in which case both block. This behavior, unlike Linux’s, makes sense. Linux should adopt it. But if you’re an app developer, this makes little difference to you: Linux, FreeBSD, iOS, whatever: use urandom.
|
||||
|
||||
### tl;dr
|
||||
|
||||
Use urandom.
|
||||
|
||||
### Epilog
|
||||
|
||||
[ruby-trunk Feature #9569][10]
|
||||
|
||||
> Right now, SecureRandom.random_bytes tries to detect an OpenSSL to use before it tries to detect /dev/urandom. I think it should be the other way around. In both cases, you just need random bytes to unpack, so SecureRandom could skip the middleman (and second point of failure) and just talk to /dev/urandom directly if it’s available.
|
||||
|
||||
Resolution:
|
||||
|
||||
> /dev/urandom is not suitable to be used to generate directly session keys and other application level random data which is generated frequently.
|
||||
>
|
||||
> [the] random(4) [man page] on GNU/Linux [says]…
|
||||
|
||||
Thanks to Matthew Green, Nate Lawson, Sean Devlin, Coda Hale, and Alex Balducci for reading drafts of this. Fair warning: Matthew only mostly agrees with me.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
|
||||
|
||||
作者:[Thomas;Erin;Matasano][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://sockpuppet.org/blog
|
||||
[1]:http://blog.cr.yp.to/20140205-entropy.html
|
||||
[2]:http://cr.yp.to/talks/2011.09.28/slides.pdf
|
||||
[3]:http://golang.org/src/pkg/crypto/rand/rand_unix.go
|
||||
[4]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
|
||||
[5]:http://stackoverflow.com/a/5639631
|
||||
[6]:https://twitter.com/bramcohen/status/206146075487240194
|
||||
[7]:http://research.swtch.com/openssl
|
||||
[8]:http://arstechnica.com/security/2013/08/google-confirms-critical-android-crypto-flaw-used-in-5700-bitcoin-heist/
|
||||
[9]:https://factorable.net/weakkeys12.extended.pdf
|
||||
[10]:https://bugs.ruby-lang.org/issues/9569
|
@ -1,343 +0,0 @@
|
||||
BriFuture is translating this article
|
||||
|
||||
Let’s Build A Simple Interpreter. Part 1.
|
||||
======
|
||||
|
||||
|
||||
> **" If you don't know how compilers work, then you don't know how computers work. If you're not 100% sure whether you know how compilers work, then you don't know how they work."** -- Steve Yegge
|
||||
|
||||
There you have it. Think about it. It doesn't really matter whether you're a newbie or a seasoned software developer: if you don't know how compilers and interpreters work, then you don't know how computers work. It's that simple.
|
||||
|
||||
So, do you know how compilers and interpreters work? And I mean, are you 100% sure that you know how they work? If you don't. ![][1]
|
||||
|
||||
Or if you don't and you're really agitated about it. ![][2]
|
||||
|
||||
Do not worry. If you stick around and work through the series and build an interpreter and a compiler with me you will know how they work in the end. And you will become a confident happy camper too. At least I hope so. ![][3]
|
||||
|
||||
Why would you study interpreters and compilers? I will give you three reasons.
|
||||
|
||||
1. To write an interpreter or a compiler you have to have a lot of technical skills that you need to use together. Writing an interpreter or a compiler will help you improve those skills and become a better software developer. As well, the skills you will learn are useful in writing any software, not just interpreters or compilers.
|
||||
2. You really want to know how computers work. Often interpreters and compilers look like magic. And you shouldn't be comfortable with that magic. You want to demystify the process of building an interpreter and a compiler, understand how they work, and get in control of things.
|
||||
3. You want to create your own programming language or domain specific language. If you create one, you will also need to create either an interpreter or a compiler for it. Recently, there has been a resurgence of interest in new programming languages. And you can see a new programming language pop up almost every day: Elixir, Go, Rust just to name a few.
|
||||
|
||||
|
||||
|
||||
|
||||
Okay, but what are interpreters and compilers?
|
||||
|
||||
The goal of an **interpreter** or a **compiler** is to translate a source program in some high-level language into some other form. Pretty vague, isn't it? Just bear with me; later in the series you will learn exactly what the source program is translated into.
|
||||
|
||||
At this point you may also wonder what the difference is between an interpreter and a compiler. For the purpose of this series, let's agree that if a translator translates a source program into machine language, it is a **compiler**. If a translator processes and executes the source program without translating it into machine language first, it is an **interpreter**. Visually it looks something like this:
|
||||
|
||||
![][4]
|
||||
|
||||
I hope that by now you're convinced that you really want to study and build an interpreter and a compiler. What can you expect from this series on interpreters?
|
||||
|
||||
Here is the deal. You and I are going to create a simple interpreter for a large subset of [Pascal][5] language. At the end of this series you will have a working Pascal interpreter and a source-level debugger like Python's [pdb][6].
|
||||
|
||||
You might ask, why Pascal? For one thing, it's not a made-up language that I came up with just for this series: it's a real programming language that has many important language constructs. And some old, but useful, CS books use Pascal programming language in their examples (I understand that that's not a particularly compelling reason to choose a language to build an interpreter for, but I thought it would be nice for a change to learn a non-mainstream language :)
|
||||
|
||||
Here is an example of a factorial function in Pascal that you will be able to interpret with your own interpreter and debug with the interactive source-level debugger that you will create along the way:
|
||||
```
|
||||
program factorial;
|
||||
|
||||
function factorial(n: integer): longint;
|
||||
begin
|
||||
if n = 0 then
|
||||
factorial := 1
|
||||
else
|
||||
factorial := n * factorial(n - 1);
|
||||
end;
|
||||
|
||||
var
|
||||
n: integer;
|
||||
|
||||
begin
|
||||
for n := 0 to 16 do
|
||||
writeln(n, '! = ', factorial(n));
|
||||
end.
|
||||
```
|
||||
|
||||
The implementation language of the Pascal interpreter will be Python, but you can use any language you want because the ideas presented don't depend on any particular implementation language. Okay, let's get down to business. Ready, set, go!
|
||||
|
||||
You will start your first foray into interpreters and compilers by writing a simple interpreter of arithmetic expressions, also known as a calculator. Today the goal is pretty minimalistic: to make your calculator handle the addition of two single digit integers like **3+5**. Here is the source code for your calculator, sorry, interpreter:
|
||||
|
||||
```
|
||||
# Token types
|
||||
#
|
||||
# EOF (end-of-file) token is used to indicate that
|
||||
# there is no more input left for lexical analysis
|
||||
INTEGER, PLUS, EOF = 'INTEGER', 'PLUS', 'EOF'
|
||||
|
||||
|
||||
class Token(object):
|
||||
def __init__(self, type, value):
|
||||
# token type: INTEGER, PLUS, or EOF
|
||||
self.type = type
|
||||
        # token value: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, '+', or None
|
||||
self.value = value
|
||||
|
||||
def __str__(self):
|
||||
"""String representation of the class instance.
|
||||
|
||||
Examples:
|
||||
Token(INTEGER, 3)
|
||||
Token(PLUS '+')
|
||||
"""
|
||||
return 'Token({type}, {value})'.format(
|
||||
type=self.type,
|
||||
value=repr(self.value)
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return self.__str__()
|
||||
|
||||
|
||||
class Interpreter(object):
|
||||
def __init__(self, text):
|
||||
# client string input, e.g. "3+5"
|
||||
self.text = text
|
||||
# self.pos is an index into self.text
|
||||
self.pos = 0
|
||||
# current token instance
|
||||
self.current_token = None
|
||||
|
||||
def error(self):
|
||||
raise Exception('Error parsing input')
|
||||
|
||||
def get_next_token(self):
|
||||
"""Lexical analyzer (also known as scanner or tokenizer)
|
||||
|
||||
This method is responsible for breaking a sentence
|
||||
apart into tokens. One token at a time.
|
||||
"""
|
||||
text = self.text
|
||||
|
||||
# is self.pos index past the end of the self.text ?
|
||||
# if so, then return EOF token because there is no more
|
||||
# input left to convert into tokens
|
||||
if self.pos > len(text) - 1:
|
||||
return Token(EOF, None)
|
||||
|
||||
# get a character at the position self.pos and decide
|
||||
# what token to create based on the single character
|
||||
current_char = text[self.pos]
|
||||
|
||||
# if the character is a digit then convert it to
|
||||
# integer, create an INTEGER token, increment self.pos
|
||||
# index to point to the next character after the digit,
|
||||
# and return the INTEGER token
|
||||
if current_char.isdigit():
|
||||
token = Token(INTEGER, int(current_char))
|
||||
self.pos += 1
|
||||
return token
|
||||
|
||||
if current_char == '+':
|
||||
token = Token(PLUS, current_char)
|
||||
self.pos += 1
|
||||
return token
|
||||
|
||||
self.error()
|
||||
|
||||
def eat(self, token_type):
|
||||
# compare the current token type with the passed token
|
||||
# type and if they match then "eat" the current token
|
||||
# and assign the next token to the self.current_token,
|
||||
# otherwise raise an exception.
|
||||
if self.current_token.type == token_type:
|
||||
self.current_token = self.get_next_token()
|
||||
else:
|
||||
self.error()
|
||||
|
||||
def expr(self):
|
||||
"""expr -> INTEGER PLUS INTEGER"""
|
||||
# set current token to the first token taken from the input
|
||||
self.current_token = self.get_next_token()
|
||||
|
||||
# we expect the current token to be a single-digit integer
|
||||
left = self.current_token
|
||||
self.eat(INTEGER)
|
||||
|
||||
# we expect the current token to be a '+' token
|
||||
op = self.current_token
|
||||
self.eat(PLUS)
|
||||
|
||||
# we expect the current token to be a single-digit integer
|
||||
right = self.current_token
|
||||
self.eat(INTEGER)
|
||||
# after the above call the self.current_token is set to
|
||||
# EOF token
|
||||
|
||||
# at this point INTEGER PLUS INTEGER sequence of tokens
|
||||
# has been successfully found and the method can just
|
||||
# return the result of adding two integers, thus
|
||||
# effectively interpreting client input
|
||||
result = left.value + right.value
|
||||
return result
|
||||
|
||||
|
||||
def main():
|
||||
while True:
|
||||
try:
|
||||
# To run under Python3 replace 'raw_input' call
|
||||
# with 'input'
|
||||
text = raw_input('calc> ')
|
||||
except EOFError:
|
||||
break
|
||||
if not text:
|
||||
continue
|
||||
interpreter = Interpreter(text)
|
||||
result = interpreter.expr()
|
||||
print(result)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
```
|
||||
|
||||
|
||||
Save the above code into a file called calc1.py or download it directly from [GitHub][7]. Before you start digging deeper into the code, run the calculator on the command line and see it in action. Play with it! Here is a sample session on my laptop (if you want to run the calculator under Python3 you will need to replace raw_input with input):
|
||||
```
|
||||
$ python calc1.py
|
||||
calc> 3+4
|
||||
7
|
||||
calc> 3+5
|
||||
8
|
||||
calc> 3+9
|
||||
12
|
||||
calc>
|
||||
```
|
||||
|
||||
For your simple calculator to work properly without throwing an exception, your input needs to follow certain rules:
|
||||
|
||||
* Only single digit integers are allowed in the input
|
||||
* The only arithmetic operation supported at the moment is addition
|
||||
* No whitespace characters are allowed anywhere in the input
|
||||
|
||||
|
||||
|
||||
Those restrictions are necessary to make the calculator simple. Don't worry, you'll make it pretty complex pretty soon.
|
||||
|
||||
Okay, now let's dive in and see how your interpreter works and how it evaluates arithmetic expressions.
|
||||
|
||||
When you enter an expression 3+5 on the command line your interpreter gets a string "3+5". In order for the interpreter to actually understand what to do with that string it first needs to break the input "3+5" into components called **tokens**. A **token** is an object that has a type and a value. For example, for the string "3" the type of the token will be INTEGER and the corresponding value will be integer 3.
|
||||
|
||||
The process of breaking the input string into tokens is called **lexical analysis**. So, the first step your interpreter needs to do is read the input of characters and convert it into a stream of tokens. The part of the interpreter that does it is called a **lexical analyzer** , or **lexer** for short. You might also encounter other names for the same component, like **scanner** or **tokenizer**. They all mean the same: the part of your interpreter or compiler that turns the input of characters into a stream of tokens.
|
||||
|
||||
The method get_next_token of the Interpreter class is your lexical analyzer. Every time you call it, you get the next token created from the input of characters passed to the interpreter. Let's take a closer look at the method itself and see how it actually does its job of converting characters into tokens. The input is stored in the variable text that holds the input string and pos is an index into that string (think of the string as an array of characters). pos is initially set to 0 and points to the character '3'. The method first checks whether the character is a digit and if so, it increments pos and returns a token instance with the type INTEGER and the value set to the integer value of the string '3', which is an integer 3:
|
||||
|
||||
![][8]
|
||||
|
||||
The pos now points to the '+' character in the text. The next time you call the method, it tests if a character at the position pos is a digit and then it tests if the character is a plus sign, which it is. As a result the method increments pos and returns a newly created token with the type PLUS and value '+':
|
||||
|
||||
![][9]
|
||||
|
||||
The pos now points to character '5'. When you call the get_next_token method again the method checks if it's a digit, which it is, so it increments pos and returns a new INTEGER token with the value of the token set to integer 5: ![][10]
|
||||
|
||||
Because the pos index is now past the end of the string "3+5" the get_next_token method returns the EOF token every time you call it:
|
||||
|
||||
![][11]
|
||||
|
||||
Try it out and see for yourself how the lexer component of your calculator works:
|
||||
```
|
||||
>>> from calc1 import Interpreter
|
||||
>>>
|
||||
>>> interpreter = Interpreter('3+5')
|
||||
>>> interpreter.get_next_token()
|
||||
Token(INTEGER, 3)
|
||||
>>>
|
||||
>>> interpreter.get_next_token()
|
||||
Token(PLUS, '+')
|
||||
>>>
|
||||
>>> interpreter.get_next_token()
|
||||
Token(INTEGER, 5)
|
||||
>>>
|
||||
>>> interpreter.get_next_token()
|
||||
Token(EOF, None)
|
||||
>>>
|
||||
```
|
||||
|
||||
So now that your interpreter has access to the stream of tokens made from the input characters, the interpreter needs to do something with it: it needs to find the structure in the flat stream of tokens it gets from the lexer get_next_token. Your interpreter expects to find the following structure in that stream: INTEGER -> PLUS -> INTEGER. That is, it tries to find a sequence of tokens: integer followed by a plus sign followed by an integer.
|
||||
|
||||
The method responsible for finding and interpreting that structure is expr. This method verifies that the sequence of tokens does indeed correspond to the expected sequence, i.e. INTEGER -> PLUS -> INTEGER. After it has successfully confirmed the structure, it generates the result by adding the values of the tokens on the left and right sides of the PLUS, thus successfully interpreting the arithmetic expression you passed to the interpreter.
|
||||
|
||||
The expr method itself uses the helper method eat to verify that the token type passed to the eat method matches the current token type. After matching the passed token type the eat method gets the next token and assigns it to the current_token variable, thus effectively "eating" the currently matched token and advancing the imaginary pointer in the stream of tokens. If the structure in the stream of tokens doesn't correspond to the expected INTEGER PLUS INTEGER sequence of tokens the eat method throws an exception.
|
||||
|
||||
Let's recap what your interpreter does to evaluate an arithmetic expression:
|
||||
|
||||
* The interpreter accepts an input string, let's say "3+5"
|
||||
* The interpreter calls the expr method to find a structure in the stream of tokens returned by the lexical analyzer get_next_token. The structure it tries to find is of the form INTEGER PLUS INTEGER. After it's confirmed the structure, it interprets the input by adding the values of two INTEGER tokens because it's clear to the interpreter at that point that what it needs to do is add two integers, 3 and 5.
|
||||
|
||||
Congratulate yourself. You've just learned how to build your very first interpreter!
|
||||
|
||||
Now it's time for exercises.
|
||||
|
||||
![][12]
|
||||
|
||||
You didn't think you would just read this article and that would be enough, did you? Okay, get your hands dirty and do the following exercises:
|
||||
|
||||
1. Modify the code to allow multiple-digit integers in the input, for example "12+3"
|
||||
2. Add a method that skips whitespace characters so that your calculator can handle inputs with whitespace characters like " 12 + 3"
|
||||
3. Modify the code and instead of '+' handle '-' to evaluate subtractions like "7-5"
|
||||
|
||||
|
||||
|
||||
**Check your understanding**
|
||||
|
||||
1. What is an interpreter?
|
||||
2. What is a compiler?
|
||||
3. What's the difference between an interpreter and a compiler?
|
||||
4. What is a token?
|
||||
5. What is the name of the process that breaks input apart into tokens?
|
||||
6. What is the part of the interpreter that does lexical analysis called?
|
||||
7. What are the other common names for that part of an interpreter or a compiler?
|
||||
|
||||
|
||||
|
||||
Before I finish this article, I really want you to commit to studying interpreters and compilers. And I want you to do it right now. Don't put it on the back burner. Don't wait. If you've skimmed the article, start over. If you've read it carefully but haven't done exercises - do them now. If you've done only some of them, finish the rest. You get the idea. And you know what? Sign the commitment pledge to start learning about interpreters and compilers today!
|
||||
|
||||
|
||||
|
||||
_I, ________, of being sound mind and body, do hereby pledge to commit to studying interpreters and compilers starting today and get to a point where I know 100% how they work!_
|
||||
|
||||
Signature:
|
||||
|
||||
Date:
|
||||
|
||||
![][13]
|
||||
|
||||
Sign it, date it, and put it somewhere where you can see it every day to make sure that you stick to your commitment. And keep in mind the definition of commitment:
|
||||
|
||||
> "Commitment is doing the thing you said you were going to do long after the mood you said it in has left you." -- Darren Hardy
|
||||
|
||||
Okay, that's it for today. In the next article of the mini series you will extend your calculator to handle more arithmetic expressions. Stay tuned.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://ruslanspivak.com/lsbasi-part1/
|
||||
|
||||
作者:[Ruslan Spivak][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://ruslanspivak.com
|
||||
[1]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_i_dont_know.png
|
||||
[2]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_omg.png
|
||||
[3]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_i_know.png
|
||||
[4]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_compiler_interpreter.png
|
||||
[5]:https://en.wikipedia.org/wiki/Pascal_%28programming_language%29
|
||||
[6]:https://docs.python.org/2/library/pdb.html
|
||||
[7]:https://github.com/rspivak/lsbasi/blob/master/part1/calc1.py
|
||||
[8]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer1.png
|
||||
[9]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer2.png
|
||||
[10]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer3.png
|
||||
[11]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_lexer4.png
|
||||
[12]:https://ruslanspivak.com/lsbasi-part1/lsbasi_exercises2.png
|
||||
[13]:https://ruslanspivak.com/lsbasi-part1/lsbasi_part1_commitment_pledge.png
|
||||
[14]:http://ruslanspivak.com/lsbaws-part1/ (Part 1)
|
||||
[15]:http://ruslanspivak.com/lsbaws-part2/ (Part 2)
|
||||
[16]:http://ruslanspivak.com/lsbaws-part3/ (Part 3)
|
@ -1,161 +0,0 @@
|
||||
Learn your tools: Navigating your Git History
|
||||
============================================================
|
||||
|
||||
Starting a greenfield application every day is nearly impossible, especially in your daily job. In fact, most of us are facing (somewhat) legacy codebases on a daily basis, and regaining the context of why some feature or line of code exists in the codebase is very important. This is where `git`, the distributed version control system, is invaluable. Let’s dive in and see how we can use our `git` history and easily navigate through it.
|
||||
|
||||
### Git history
|
||||
|
||||
First and foremost, what is `git` history? As the name says, it is the commit history of a `git` repo. It contains a bunch of commit messages, with their authors’ names, the commit hashes and the dates of the commits. The easiest way to see the history of a `git` repo is the `git log` command.
|
||||
|
||||
Sidenote: For the purpose of this post, we will use the Ruby on Rails repo, on the `master` branch. The reason is that Rails has a very good `git` history, with nice commit messages, references and explanations behind every change. Given the size of the codebase, the age and the number of maintainers, it’s certainly one of the best repositories that I have seen. Of course, I am not saying there are no other repositories built with good `git` practices, but this is one that has caught my eye.
|
||||
|
||||
So, back to the Rails repo. If you run `git log` there, you will see something like this:
|
||||
|
||||
```
|
||||
commit 66ebbc4952f6cfb37d719f63036441ef98149418
Author: Arthur Neves <foo@bar.com>
Date:   Fri Jun 3 17:17:38 2016 -0400

    Dont re-define class SQLite3Adapter on test

    We were declaring in a few tests, which depending of the order load
    will cause an error, as the super class could change.

    see https://github.com/rails/rails/commit/ac1c4e141b20c1067af2c2703db6e1b463b985da#commitcomment-17731383

commit 755f6bf3d3d568bc0af2c636be2f6df16c651eb1
Merge: 4e85538 f7b850e
Author: Eileen M. Uchitelle <foo@bar.com>
Date:   Fri Jun 3 10:21:49 2016 -0400

    Merge pull request #25263 from abhishekjain16/doc_accessor_thread

    [skip ci] Fix grammar

commit f7b850ec9f6036802339e965c8ce74494f731b4a
Author: Abhishek Jain <foo@bar.com>
Date:   Fri Jun 3 16:49:21 2016 +0530

    [skip ci] Fix grammar

commit 4e85538dddf47877cacc65cea6c050e349af0405
Merge: 082a515 cf2158c
Author: Vijay Dev <foo@bar.com>
Date:   Fri Jun 3 14:00:47 2016 +0000

    Merge branch 'master' of github.com:rails/docrails

    Conflicts:
        guides/source/action_cable_overview.md

commit 082a5158251c6578714132e5c4f71bd39f462d71
Merge: 4bd11d4 3bd30d9
Author: Yves Senn <foo@bar.com>
Date:   Fri Jun 3 11:30:19 2016 +0200

    Merge pull request #25243 from sukesan1984/add_i18n_validation_test

    Add i18n_validation_test

commit 4bd11d46de892676830bca51d3040f29200abbfa
Merge: 99d8d45 e98caf8
Author: Arthur Nogueira Neves <foo@bar.com>
Date:   Thu Jun 2 22:55:52 2016 -0400

    Merge pull request #25258 from alexcameron89/master

    [skip ci] Make header bullets consistent in engines.md

commit e98caf81fef54746126d31076c6d346c48ae8e1b
Author: Alex Kitchens <foo@bar.com>
Date:   Thu Jun 2 21:26:53 2016 -0500

    [skip ci] Make header bullets consistent in engines.md
|
||||
```
|
||||
|
||||
As you can see, `git log` shows the commit hash, the author's name and email, and the date when the commit was created. Of course, `git` being super customisable, it allows you to customise the output format of the `git log` command. Let's say we want to see just the first line of each commit message; we could run `git log --oneline`, which produces a more compact log:
|
||||
|
||||
```
|
||||
66ebbc4 Dont re-define class SQLite3Adapter on test
755f6bf Merge pull request #25263 from abhishekjain16/doc_accessor_thread
f7b850e [skip ci] Fix grammar
4e85538 Merge branch 'master' of github.com:rails/docrails
082a515 Merge pull request #25243 from sukesan1984/add_i18n_validation_test
4bd11d4 Merge pull request #25258 from alexcameron89/master
e98caf8 [skip ci] Make header bullets consistent in engines.md
99d8d45 Merge pull request #25254 from kamipo/fix_debug_helper_test
818397c Merge pull request #25240 from matthewd/reloadable-channels
2c5a8ba Don't blank pad day of the month when formatting dates
14ff8e7 Fix debug helper test
|
||||
```
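
The format is customisable beyond `--oneline`, too. For example, to get one line per commit with the short hash, author and subject (a quick sketch using `--pretty=format`):

```
# Show the last five commits as "<short-hash> <author> <subject>".
git log --pretty=format:'%h %an %s' -5
```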
|
||||
|
||||
To see all of the `git log` options, I recommend checking out the manpage of `git log`, available in your terminal via `man git-log` or `git help log`. A tip: if `git log` output feels a bit sparse or complicated, or maybe you are just bored, I recommend checking out the various `git` GUIs and command line tools. In the past I’ve used [GitX][1], which was very good, but since the command line feels like home to me, after trying [tig][2] I’ve never looked back.
|
||||
|
||||
### Finding Nemo
|
||||
|
||||
So now, since we know the bare minimum of the `git log` command, let’s see how we can explore the history more effectively in our everyday work.
|
||||
|
||||
Let’s say, hypothetically, that we suspect unexpected behaviour in the `String#classify` method and we want to find out how and where it has been implemented.
|
||||
|
||||
One of the first commands that you can use to see where the method is defined is `git grep`. Simply said, this command prints out lines that match a certain pattern. Now, to find the definition of the method, it’s pretty simple - we can grep for `def classify` and see what we get:
|
||||
|
||||
```
|
||||
➜  git grep 'def classify'
activesupport/lib/active_support/core_ext/string/inflections.rb:  def classify
activesupport/lib/active_support/inflector/methods.rb:    def classify(table_name)
tools/profile:  def classify
|
||||
```
|
||||
|
||||
Now, although we can already see where our method is defined, we are not sure which line it is on. If we add the `-n` flag to our `git grep` command, `git` will provide the line numbers of the matches:
|
||||
|
||||
```
|
||||
➜  git grep -n 'def classify'
activesupport/lib/active_support/core_ext/string/inflections.rb:205:  def classify
activesupport/lib/active_support/inflector/methods.rb:186:    def classify(table_name)
tools/profile:112:  def classify
|
||||
```
|
||||
|
||||
Much better, right? Having the context in mind, we can easily figure out that the method we are looking for lives in `activesupport/lib/active_support/core_ext/string/inflections.rb`, on line 205. The `classify` method, in all of its glory, looks like this:
|
||||
|
||||
```
|
||||
# Creates a class name from a plural table name like Rails does for table names to models.
# Note that this returns a string and not a class. (To convert to an actual class
# follow +classify+ with +constantize+.)
#
#   'ham_and_eggs'.classify # => "HamAndEgg"
#   'posts'.classify        # => "Post"
def classify
  ActiveSupport::Inflector.classify(self)
end
|
||||
```
|
||||
|
||||
Although the method we found is the one we usually call on `String`s, it invokes another method with the same name on `ActiveSupport::Inflector`. Having our `git grep` result available, we can easily navigate there, since the second line of the result points to `activesupport/lib/active_support/inflector/methods.rb` on line 186. The method that we are looking for is:
|
||||
|
||||
```
|
||||
# Creates a class name from a plural table name like Rails does for table
# names to models. Note that this returns a string and not a Class (To
# convert to an actual class follow +classify+ with #constantize).
#
#   classify('ham_and_eggs') # => "HamAndEgg"
#   classify('posts')        # => "Post"
#
# Singular names are not handled correctly:
#
#   classify('calculus')     # => "Calculus"
def classify(table_name)
  # strip out any leading schema name
  camelize(singularize(table_name.to_s.sub(/.*\./, ''.freeze)))
end
|
||||
```
|
||||
|
||||
Boom! Given the size of Rails, finding this should not take us more than 30 seconds with the help of `git grep`.
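
As a side note, you can narrow `git grep` to a part of the tree by appending a pathspec, which helps in a repo of this size (a quick sketch):

```
# Search only within the activesupport directory.
git grep -n 'def classify' -- activesupport/
```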
|
||||
|
||||
### So, what changed last?
|
||||
|
||||
Now that we have found the method, we need to figure out what changes this file has gone through. Since we know the correct file name and line numbers, we can use `git blame`. This command shows what revision and author last modified each line of a file. Let’s see the latest changes made to this file:
|
||||
|
||||
```
|
||||
git blame activesupport/lib/active_support/inflector/methods.rb
|
||||
```
|
||||
|
||||
Whoa! Although we get the last change of every line in the file, we are more interested in the specific method (lines 176 to 189). Let’s add a flag to the `git blame` command that shows the blame of just those lines. Also, we will add the `-s` (suppress) option to skip the author names and the timestamps of the revisions (commits) that changed each line:
|
||||
|
||||
```
|
||||
git blame -L 176,189 -s activesupport/lib/active_support/inflector/methods.rb

9fe8e19a 176)     # Creates a class name from a plural table name like Rails does for table
5ea3f284 177)     # names to models. Note that this returns a string and not a Class (To
9fe8e19a 178)     # convert to an actual class follow +classify+ with #constantize).
51cd6bb8 179)     #
6d077205 180)     #   classify('ham_and_eggs') # => "HamAndEgg"
9fe8e19a 181)     #   classify('posts')        # => "Post"
51cd6bb8 182)     #
51cd6bb8 183)     # Singular names are not handled correctly:
5ea3f284 184)     #
66d6e7be 185)     #   classify('calculus')     # => "Calculus"
51cd6bb8 186)     def classify(table_name)
51cd6bb8 187)       # strip out any leading schema name
5bb1d4d2 188)       camelize(singularize(table_name.to_s.sub(/.*\./, ''.freeze)))
51cd6bb8 189)     end
|
||||
```
|
||||
|
||||
The output of the `git blame` command now shows all of the selected lines and their respective revisions. To see a specific revision, or in other words, what each of those revisions changed, we can use the `git show` command. When supplied a revision hash (like `66d6e7be`) as an argument, it will show you the full revision, with the author name, timestamp and the whole change in its glory. Let’s see what actually changed in the latest revision that touched line 188:
|
||||
|
||||
```
|
||||
git show 5bb1d4d2
|
||||
```
|
||||
|
||||
Whoa! Did you test that? If you didn’t, it’s an awesome [commit][3] by [Schneems][4] that made a very interesting performance optimization by using frozen strings, which makes sense in our current context. But, since we are on this hypothetical debugging session, this doesn’t tell us much about our current problem. So, how can we see what changes our method under investigation has gone through?
|
||||
|
||||
### Searching the logs
|
||||
|
||||
Now, we are back to the `git` log. The question is, how can we see all the revisions that the `classify` method went through?
|
||||
|
||||
The `git log` command is quite powerful, because it has a rich list of options to apply to it. We can try to see what the `git` log has stored for this file, using the `-p` option, which means “show me the patch for each entry in the `git` log”:
|
||||
|
||||
```
|
||||
git log -p activesupport/lib/active_support/inflector/methods.rb
|
||||
```
|
||||
|
||||
This will show us a big list of revisions, for every revision of this file. But, just like before, we are interested in the specific file lines. Let’s modify the command a bit, to show us what we need:
|
||||
|
||||
```
|
||||
git log -L 176,189:activesupport/lib/active_support/inflector/methods.rb
|
||||
```
|
||||
|
||||
The `git log` command accepts the `-L` option, which takes a line range and the filename as arguments. The format might be a bit weird for you, but it translates to:
|
||||
|
||||
```
|
||||
git log -L <start-line>,<end-line>:<path-to-file>
|
||||
```
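
As a side note, `-L` can also take a function name instead of a line range, letting `git` locate the boundaries itself (a sketch; this relies on `git`'s built-in funcname detection for the file type):

```
# Follow the classify method by name rather than by line numbers.
git log -L :classify:activesupport/lib/active_support/inflector/methods.rb
```

Back to the line-range form: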
|
||||
|
||||
When we run this command, we can see the list of revisions for these lines, which will lead us to the first revision that created the method:
|
||||
|
||||
```
|
||||
commit 51xd6bb829c418c5fbf75de1dfbb177233b1b154
Author: Foo Bar <foo@bar.com>
Date:   Tue Jun 7 19:05:09 2011 -0700

    Refactor

diff --git a/activesupport/lib/active_support/inflector/methods.rb b/activesupport/lib/active_support/inflector/methods.rb
--- a/activesupport/lib/active_support/inflector/methods.rb
+++ b/activesupport/lib/active_support/inflector/methods.rb
@@ -58,0 +135,14 @@
+    # Create a class name from a plural table name like Rails does for table names to models.
+    # Note that this returns a string and not a Class. (To convert to an actual class
+    # follow +classify+ with +constantize+.)
+    #
+    # Examples:
+    #   "egg_and_hams".classify # => "EggAndHam"
+    #   "posts".classify        # => "Post"
+    #
+    # Singular names are not handled correctly:
+    #   "business".classify     # => "Busines"
+    def classify(table_name)
+      # strip out any leading schema name
+      camelize(singularize(table_name.to_s.sub(/.*\./, '')))
+    end
|
||||
```
|
||||
|
||||
Now, look at that - it’s a commit from 2011. Practically, `git` allows us to travel back in time. This is a very good example of why a proper commit message is paramount to regaining context, because from this commit message we cannot really tell how the method came to be. But, on the flip side, you should **never ever** get frustrated about it, because you are looking at someone that basically gives away their time and energy for free, doing open source work.
|
||||
|
||||
Coming back from that tangent, we are not sure how the initial implementation of the `classify` method came to be, given that the first commit we found is just a refactor. Now, if you are thinking something along the lines of “but maybe, just maybe, the method was not in the line range 176 to 189, and we should look more broadly in the file”, you are very correct. The revision that we saw said “Refactor” in its commit message, which means that the method was actually there before, but after that refactor it started to exist in that line range.
|
||||
|
||||
So, how can we confirm this? Well, believe it or not, `git` comes to the rescue again. The `git log` command accepts the `-S` option, which looks for code changes (additions or deletions) containing the specified string. This means that, if we call `git log -S classify`, we can see all of the commits that added or removed a line containing that string.
|
||||
|
||||
If you call this command in the Rails repo, you will first see `git` slowing down a bit. But, when you realise that `git` actually parses all of the revisions in the repo to match the string, you’ll see that it’s actually super fast. Again, the power of `git` at your fingertips. So, to find the first version of the `classify` method, we can run:
|
||||
|
||||
```
|
||||
git log -S 'def classify'
|
||||
```
|
||||
|
||||
This will return all of the revisions where this method has been introduced or changed. If you were following along, the last commit in the log that you will see is:
|
||||
|
||||
```
|
||||
commit db045dbbf60b53dbe013ef25554fd013baf88134
Author: David Heinemeier Hansson <foo@bar.com>
Date:   Wed Nov 24 01:04:44 2004 +0000

    Initial

    git-svn-id: http://svn-commit.rubyonrails.org/rails/trunk@4 5ecf4fe2-1ee6-0310-87b1-e25e094e27de
|
||||
```
|
||||
|
||||
How cool is that? It’s the initial commit to Rails, made on a `svn` repo by DHH! This means that `classify` has been around since the beginning of (Rails) time. Now, to see the commit with all of its changes, we can run:
|
||||
|
||||
```
|
||||
git show db045dbbf60b53dbe013ef25554fd013baf88134
|
||||
```
|
||||
|
||||
Great, we got to the bottom of it. Now, by using the output from `git log -S 'def classify'` you can track the changes that have happened to this method, combined with the power of the `git log -L` command.
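
A related option is `-G`, which takes a regular expression instead of a plain string; unlike `-S`, it matches any commit whose diff adds or removes a line matching the regex, even if the total number of occurrences stays the same (a quick sketch):

```
# Find commits whose diffs touch lines matching the regex.
git log --oneline -G 'def classify'
```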
|
||||
|
||||
### Until next time
|
||||
|
||||
Sure, we didn’t really fix any bugs, because we were trying some `git` commands and following along the evolution of the `classify` method. But, nevertheless, `git` is a very powerful tool that we all must learn to use and to embrace. I hope this article gave you a little bit more knowledge of how useful `git` is.
|
||||
|
||||
What are your favourite (or most effective) ways of navigating through the `git` history?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
Backend engineer, interested in Ruby, Go, microservices, building resilient architectures and solving challenges at scale. I coach at Rails Girls in Amsterdam, maintain a list of small gems and often contribute to Open Source.
|
||||
This is where I write about software development, programming languages and everything else that interests me.
|
||||
|
||||
------
|
||||
|
||||
via: https://ieftimov.com/learn-your-tools-navigating-git-history
|
||||
|
||||
作者:[Ilija Eftimov ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://ieftimov.com/
|
||||
[1]:http://gitx.frim.nl/
|
||||
[2]:https://github.com/jonas/tig
|
||||
[3]:https://github.com/rails/rails/commit/5bb1d4d288d019e276335465d0389fd2f5246bfd
|
||||
[4]:https://twitter.com/schneems
|
@ -0,0 +1,289 @@
|
||||
Myths about /dev/urandom
|
||||
======
|
||||
|
||||
There are a few things about /dev/urandom and /dev/random that are repeated again and again. Still, they are false.
|
||||
|
||||
I'm mostly talking about reasonably recent Linux systems, not other UNIX-like systems.
|
||||
|
||||
### /dev/urandom is insecure. Always use /dev/random for cryptographic purposes.
|
||||
|
||||
Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
|
||||
|
||||
### /dev/urandom is a pseudo random number generator, a PRNG, while /dev/random is a “true” random number generator.
|
||||
|
||||
Fact: Both /dev/urandom and /dev/random are using the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They only differ in very few ways that have nothing to do with “true” randomness.
|
||||
|
||||
### /dev/random is unambiguously the better choice for cryptography. Even if /dev/urandom were comparably secure, there's no reason to choose the latter.
|
||||
|
||||
Fact: /dev/random has a very nasty problem: it blocks.
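
You can observe the blocking yourself (a sketch; on an idle machine, and depending on kernel version, the first command may stall for a long time, while the second returns immediately):

```
# May block until the kernel's entropy estimate is high enough:
head -c 512 /dev/random > /dev/null
# Never blocks once the system has booted:
head -c 512 /dev/urandom > /dev/null
```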
|
||||
|
||||
### But that's good! /dev/random gives out exactly as much randomness as it has entropy in its pool. /dev/urandom will give you insecure random numbers, even though it has long run out of entropy.
|
||||
|
||||
Fact: No. Even disregarding issues like availability and subsequent manipulation by users, the issue of entropy “running low” is a straw man. About 256 bits of entropy are enough to get computationally secure numbers for a long, long time.
|
||||
|
||||
And the fun only starts here: how does /dev/random know how much entropy there is available to give out? Stay tuned!
|
||||
|
||||
### But cryptographers always talk about constant re-seeding. Doesn't that contradict your last point?
|
||||
|
||||
Fact: You got me! Kind of. It is true, the random number generator is constantly re-seeded using whatever entropy the system can lay its hands on. But that has (partly) other reasons.
|
||||
|
||||
Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
|
||||
|
||||
### That's all good and nice, but even the man page for /dev/(u)random contradicts you! Does anyone who knows about this stuff actually agree with you?
|
||||
|
||||
Fact: No, it really doesn't. It seems to imply that /dev/urandom is insecure for cryptographic use, unless you really understand all that cryptographic jargon.
|
||||
|
||||
The man page does recommend the use of /dev/random in some cases (it doesn't hurt, in my opinion, but is not strictly necessary), but it also recommends /dev/urandom as the device to use for “normal” cryptographic use.
|
||||
|
||||
And while appeal to authority is usually nothing to be proud of, in cryptographic issues you're generally right to be careful and try to get the opinion of a domain expert.
|
||||
|
||||
And yes, quite a few experts share my view that /dev/urandom is the go-to solution for your random number needs in a cryptography context on UNIX-like systems. Obviously, their opinions influenced mine, not the other way around.
|
||||
|
||||
Hard to believe, right? I must certainly be wrong! Well, read on and let me try to convince you.
|
||||
|
||||
I tried to keep it out, but I fear there are two preliminaries to be taken care of, before we can really tackle all those points.
|
||||
|
||||
Namely, what is randomness, or better: what kind of randomness am I talking about here?
|
||||
|
||||
And, even more important, I'm really not being condescending. I have written this document to have a thing to point to, when this discussion comes up again. More than 140 characters. Without repeating myself again and again. Being able to hone the writing and the arguments itself, benefitting many discussions in many venues.
|
||||
|
||||
And I'm certainly willing to hear differing opinions. I'm just saying that it won't be enough to state that /dev/urandom is bad. You need to identify the points you're disagreeing with and engage them.
|
||||
|
||||
### You're saying I'm stupid!
|
||||
|
||||
Emphatically no!
|
||||
|
||||
Actually, I used to believe that /dev/urandom was insecure myself, a few years ago. And it's something you and I almost had to believe, because all those highly respected people on Usenet, in web forums and today on Twitter told us. Even the man page seems to say so. Who were we to dismiss their convincing argument about “entropy running low”?
|
||||
|
||||
This misconception isn't so rampant because people are stupid, it is because with a little knowledge about cryptography (namely some vague idea what entropy is) it's very easy to be convinced of it. Intuition almost forces us there. Unfortunately intuition is often wrong in cryptography. So it is here.
|
||||
|
||||
### True randomness
|
||||
|
||||
What does it mean for random numbers to be “truly random”?
|
||||
|
||||
I don't want to dive into that issue too deep, because it quickly gets philosophical. Discussions have been known to unravel fast, because everyone can wax about their favorite model of randomness, without paying attention to anyone else. Or even making himself understood.
|
||||
|
||||
I believe that the “gold standard” for “true randomness” are quantum effects. Observe a photon pass through a semi-transparent mirror. Or not. Observe some radioactive material emit alpha particles. It's the best idea we have when it comes to randomness in the world. Other people might reasonably believe that those effects aren't truly random. Or even that there is no randomness in the world at all. Let a million flowers bloom.
|
||||
|
||||
Cryptographers often circumvent this philosophical debate by disregarding what it means for randomness to be “true”. They care about unpredictability. As long as nobody can get any information about the next random number, we're fine. And when you're talking about random numbers as a prerequisite in using cryptography, that's what you should aim for, in my opinion.
|
||||
|
||||
Anyway, I don't care much about those “philosophically secure” random numbers, as I like to think of your “true” random numbers.
|
||||
|
||||
### Two kinds of security, one that matters
|
||||
|
||||
But let's assume you've obtained those “true” random numbers. What are you going to do with them?
|
||||
|
||||
You print them out, frame them and hang them on your living-room wall, to revel in the beauty of a quantum universe? That's great, and I certainly understand.
|
||||
|
||||
Wait, what? You're using them? For cryptographic purposes? Well, that spoils everything, because now things get a bit ugly.
|
||||
|
||||
You see, your truly-random, quantum effect blessed random numbers are put into some less respectable, real-world tarnished algorithms.
|
||||
|
||||
Because almost all of the cryptographic algorithms we use do not hold up to **information-theoretic security**. They can “only” offer **computational security**. The two exceptions that come to my mind are Shamir's Secret Sharing and the one-time pad. And while the first one may be a valid counterpoint (if you actually intend to use it), the latter is utterly impractical.
|
||||
|
||||
But all those algorithms you know about, AES, RSA, Diffie-Hellman, Elliptic curves, and all those crypto packages you're using, OpenSSL, GnuTLS, Keyczar, your operating system's crypto API, these are only computationally secure.
|
||||
|
||||
What's the difference? While information-theoretically secure algorithms are secure, period, those other algorithms cannot guarantee security against an adversary with unlimited computational power who's trying all possibilities for keys. We still use them because breaking them would take all the computers in the world, taken together, longer than the universe has existed so far. That's the level of “insecurity” we're talking about here.
|
||||
|
||||
Unless some clever guy breaks the algorithm itself, using much less computational power. Even computational power achievable today. That's the big prize every cryptanalyst dreams about: breaking AES itself, breaking RSA itself and so on.
|
||||
|
||||
So now we're at the point where you don't trust the inner building blocks of the random number generator, insisting on “true randomness” instead of “pseudo randomness”. But then you're using those “true” random numbers in algorithms that you so despise that you didn't want them near your random number generator in the first place!
|
||||
|
||||
Truth is, when state-of-the-art hash algorithms are broken, or when state-of-the-art block ciphers are broken, it doesn't matter that you get “philosophically insecure” random numbers because of them. You've got nothing left to securely use them for anyway.
|
||||
|
||||
So just use those computationally-secure random numbers for your computationally-secure algorithms. In other words: use /dev/urandom.
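If you want to see what that looks like in practice, here is a minimal shell sketch of pulling key material from /dev/urandom (the 32-byte length is an arbitrary example; `head` and `base64` are assumed to be available):

```
$ head -c 32 /dev/urandom | base64
```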
|
||||
|
||||
### Structure of Linux's random number generator
|
||||
|
||||
#### An incorrect view
|
||||
|
||||
Chances are, your idea of the kernel's random number generator is something similar to this:
|
||||
|
||||
![image: mythical structure of the kernel's random number generator][1]
|
||||
|
||||
“True randomness”, albeit possibly skewed and biased, enters the system and its entropy is precisely counted and immediately added to an internal entropy counter. After de-biasing and whitening it's entering the kernel's entropy pool, where both /dev/random and /dev/urandom get their random numbers from.
|
||||
|
||||
The “true” random number generator, /dev/random, takes those random numbers straight out of the pool, if the entropy count is sufficient for the number of requested numbers, decreasing the entropy counter, of course. If not, it blocks until new entropy has entered the system.
|
||||
|
||||
The important thing in this narrative is that /dev/random basically yields the numbers that have been input by those randomness sources outside, after only the necessary whitening. Nothing more, just pure randomness.
|
||||
|
||||
/dev/urandom, so the story goes, is doing the same thing. Except when there isn't sufficient entropy in the system. In contrast to /dev/random, it does not block, but gets “low quality random” numbers from a pseudorandom number generator (conceded, a cryptographically secure one) that is running alongside the rest of the random number machinery. This CSPRNG is just seeded once (or maybe every now and then, it doesn't matter) with “true randomness” from the randomness pool, but you can't really trust it.
|
||||
|
||||
In this view, that seems to be in a lot of people's minds when they're talking about random numbers on Linux, avoiding /dev/urandom is plausible.
|
||||
|
||||
Because either there is enough entropy left, then you get the same you'd have gotten from /dev/random. Or there isn't, then you get those low-quality random numbers from a CSPRNG that almost never saw high-entropy input.
|
||||
|
||||
Devilish, right? Unfortunately, also utterly wrong. In reality, the internal structure of the random number generator looks like this.
|
||||
|
||||
#### A better simplification
|
||||
|
||||
##### Before Linux 4.8
|
||||
|
||||
![image: actual structure of the kernel's random number generator before Linux 4.8][2] This is a pretty rough simplification. In fact, there isn't just one, but three pools filled with entropy. One primary pool, and one for /dev/random and /dev/urandom each, feeding off the primary pool. Those three pools all have their own entropy counts, but the counts of the secondary pools (for /dev/random and /dev/urandom) are mostly close to zero, and “fresh” entropy flows from the primary pool when needed, decreasing its entropy count. Also there is a lot of mixing and re-injecting outputs back into the system going on. All of this is far more detail than is necessary for this document.
|
||||
|
||||
See the big difference? The CSPRNG is not running alongside the random number generator, filling in for those times when /dev/urandom wants to output something, but has nothing good to output. The CSPRNG is an integral part of the random number generation process. There is no /dev/random handing out “good and pure” random numbers straight from the whitener. Every randomness source's input is thoroughly mixed and hashed inside the CSPRNG, before it emerges as random numbers, either via /dev/urandom or /dev/random.
|
||||
|
||||
Another important difference is that there is no entropy counting going on here, but estimation. The amount of entropy some source is giving you isn't something obvious that you just get, along with the data. It has to be estimated. Please note that when your estimate is too optimistic, the dearly held property of /dev/random, that it's only giving out as many random numbers as available entropy allows, is gone. Unfortunately, it's hard to estimate the amount of entropy.
|
||||
|
||||
The Linux kernel uses only the arrival times of events to estimate their entropy. It does that by interpolating polynomials of those arrival times, to calculate “how surprising” the actual arrival time was, according to the model. Whether this polynomial interpolation model is the best way to estimate entropy is an interesting question. There is also the problem that internal hardware restrictions might influence those arrival times. The sampling rates of all kinds of hardware components may also play a role, because it directly influences the values and the granularity of those event arrival times.
|
||||
|
||||
In the end, to the best of our knowledge, the kernel's entropy estimate is pretty good. Which means it's conservative. People argue about how good it really is, but that issue is far above my head. Still, if you insist on never handing out random numbers that are not “backed” by sufficient entropy, you might be nervous here. I'm sleeping sound because I don't care about the entropy estimate.
|
||||
|
||||
So to make one thing crystal clear: both /dev/random and /dev/urandom are fed by the same CSPRNG. Only the behavior when their respective pool runs out of entropy, according to some estimate, differs: /dev/random blocks, while /dev/urandom does not.
|
||||
|
||||
##### From Linux 4.8 onward
|
||||
|
||||
In Linux 4.8 the equivalency between /dev/urandom and /dev/random was given up. Now /dev/urandom output does not come from an entropy pool, but directly from a CSPRNG.
|
||||
|
||||
![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
|
||||
|
||||
We will see shortly why that is not a security problem.
|
||||
|
||||
### What's wrong with blocking?
|
||||
|
||||
Have you ever waited for /dev/random to give you more random numbers? Generating a PGP key inside a virtual machine maybe? Connecting to a web server that's waiting for more random numbers to create an ephemeral session key?
|
||||
|
||||
That's the problem. It inherently runs counter to availability. So your system is not working. It's not doing what you built it to do. Obviously, that's bad. You wouldn't have built it if you didn't need it.
|
||||
|
||||
I'm working on safety-related systems in factory automation. Can you guess what the main reason for failures of safety systems is? Manipulation. Simple as that. Something about the safety measure bugged the worker. It took too much time, was too inconvenient, whatever. People are very resourceful when it comes to finding “unofficial solutions”.
|
||||
|
||||
But the problem runs even deeper: people don't like to be stopped in their ways. They will devise workarounds, concoct bizarre machinations to just get it running. People who don't know anything about cryptography. Normal people.
|
||||
|
||||
Why not patch out the call to `random()`? Why not have some guy in a web forum tell you how to use some strange ioctl to increase the entropy counter? Why not switch off SSL altogether?
|
||||
|
||||
In the end you just educate your users to do foolish things that compromise your system's security without you ever knowing about it.
|
||||
|
||||
It's easy to disregard availability, usability or other nice properties. Security trumps everything, right? So better be inconvenient, unavailable or unusable than feign security.
|
||||
|
||||
But that's a false dichotomy. Blocking is not necessary for security. As we saw, /dev/urandom gives you the same kind of random numbers as /dev/random, straight out of a CSPRNG. Use it!
|
||||
|
||||
### The CSPRNGs are alright
|
||||
|
||||
But now everything sounds really bleak. If even the high-quality random numbers from /dev/random are coming out of a CSPRNG, how can we use them for high-security purposes?
|
||||
|
||||
It turns out that “looking random” is the basic requirement for a lot of our cryptographic building blocks. If you take the output of a cryptographic hash, it has to be indistinguishable from a random string so that cryptographers will accept it. If you take a block cipher, its output (without knowing the key) must also be indistinguishable from random data.
|
||||
|
||||
If anyone could gain an advantage over brute force breaking of cryptographic building blocks, using some perceived weakness of those CSPRNGs over “true” randomness, then it's the same old story: you don't have anything left. Block ciphers, hashes, everything is based on the same mathematical fundament as CSPRNGs. So don't be afraid.
|
||||
|
||||
### What about entropy running low?
|
||||
|
||||
It doesn't matter.
|
||||
|
||||
The underlying cryptographic building blocks are designed such that an attacker cannot predict the outcome, as long as there was enough randomness (a.k.a. entropy) in the beginning. A usual lower limit for “enough” may be 256 bits. No more.
|
||||
|
||||
Considering that we were pretty hand-wavey about the term “entropy” in the first place, it feels right. As we saw, the kernel's random number generator cannot even precisely know the amount of entropy entering the system. Only an estimate. And whether the model that's the basis for the estimate is good enough is pretty unclear, too.
|
||||
|
||||
### Re-seeding
|
||||
|
||||
But if entropy is so unimportant, why is fresh entropy constantly being injected into the random number generator?
|
||||
|
||||
djb [remarked][4] that, counterintuitively, injecting more entropy at the wrong moment can actually hurt in certain attack scenarios. But as a general rule:
|
||||
|
||||
It cannot hurt. If you've got more randomness just lying around, by all means use it!
|
||||
|
||||
There is another reason why re-seeding the random number generator every now and then is important:
|
||||
|
||||
Imagine an attacker knows everything about your random number generator's internal state. That's the most severe security compromise you can imagine, the attacker has full access to the system.
|
||||
|
||||
You've totally lost now, because the attacker can compute all future outputs from this point on.
|
||||
|
||||
But over time, with more and more fresh entropy being mixed into it, the internal state gets more and more random again. So the design of such a random number generator is, in a way, self-healing.
|
||||
|
||||
But this is injecting entropy into the generator's internal state; it has nothing to do with blocking its output.
|
||||
|
||||
### The random and urandom man page
|
||||
|
||||
The man page for /dev/random and /dev/urandom is pretty effective when it comes to instilling fear into the gullible programmer's mind:
|
||||
|
||||
> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
|
||||
|
||||
Such an attack is not known in “unclassified literature”, but the NSA certainly has one in store, right? And if you're really concerned about this (you should!), please use /dev/random, and all your problems are solved.
|
||||
|
||||
The truth is, while there may be such an attack available to secret services, evil hackers or the Bogeyman, it's just not rational to take it as a given.
|
||||
|
||||
And even if you need that peace of mind, let me tell you a secret: no practical attacks on AES, SHA-3 or other solid ciphers and hashes are known in the “unclassified” literature, either. Are you going to stop using those, as well? Of course not!
|
||||
|
||||
Now the fun part: “use /dev/random instead”. While /dev/urandom does not block, its random number output comes from the very same CSPRNG as /dev/random's.
|
||||
|
||||
If you really need information-theoretically secure random numbers (you don't!), and that's about the only reason why the entropy of the CSPRNGs input matters, you can't use /dev/random, either!
|
||||
|
||||
The man page is silly, that's all. At least it tries to redeem itself with this:
|
||||
|
||||
> If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.
|
||||
|
||||
Fine. I think it's unnecessary, but if you want to use /dev/random for your “long-lived keys”, by all means, do so! You'll be waiting a few seconds typing stuff on your keyboard, that's no problem.
|
||||
|
||||
But please don't make connections to a mail server hang forever, just because you “wanted to be safe”.
|
||||
|
||||
### Orthodoxy
|
||||
|
||||
The view espoused here is certainly a tiny minority's opinion on the Internet. But ask a real cryptographer: you'll be hard-pressed to find one who has much sympathy for the blocking /dev/random.
|
||||
|
||||
Let's take [Daniel Bernstein][5], better known as djb:
|
||||
|
||||
> Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
|
||||
>
|
||||
> * (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
|
||||
>
|
||||
> * (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
|
||||
>
|
||||
>
|
||||
|
||||
>
|
||||
> For a cryptographer this doesn't even pass the laugh test.
|
||||
|
||||
Or [Thomas Pornin][6], who is probably one of the most helpful persons I've ever encountered on the Stackexchange sites:
|
||||
|
||||
> The short answer is yes. The long answer is also yes. /dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithm, which is not your case (you would know it).
|
||||
>
|
||||
> The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred;
|
||||
|
||||
Or maybe [Thomas Ptacek][7], who is not a real cryptographer in the sense of designing cryptographic algorithms or building cryptographic systems, but still the founder of a well-reputed security consultancy that's doing a lot of penetration testing and breaking bad cryptography:
|
||||
|
||||
> Use urandom. Use urandom. Use urandom. Use urandom. Use urandom. Use urandom.
|
||||
|
||||
### Not everything is perfect
|
||||
|
||||
/dev/urandom isn't perfect. The problems are twofold:
|
||||
|
||||
On Linux, unlike FreeBSD, /dev/urandom never blocks. Remember that the whole security rested on some starting randomness, a seed?
|
||||
|
||||
Linux's /dev/urandom happily gives you not-so-random numbers before the kernel has even had a chance to gather entropy. When is that? At system start, when booting the computer.
|
||||
|
||||
FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
|
||||
|
||||
In the meantime, Linux has implemented a new syscall, originally introduced by OpenBSD as getentropy(2): getrandom(2). This syscall does the right thing: blocking until it has gathered enough initial entropy, and never blocking after that point. Of course, it is a syscall, not a character device, so it isn't as easily accessible from shell or script languages. It is available from Linux 3.17 onward.
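For illustration, one hedged way to reach getrandom(2) from a shell anyway is through Python's `os.getrandom` wrapper (available since Python 3.6; the 32-byte length is an arbitrary example):

```
$ python3 -c 'import os; print(os.getrandom(32).hex())'
```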
|
||||
|
||||
On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but only after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read the next time the machine boots. So you carry over the randomness from the last run of the machine.
|
||||
|
||||
Obviously that isn't as good as letting the shutdown scripts write out the seed, because in that case there would have been much more time to gather entropy. The advantage, of course, is that this does not depend on a proper shutdown with execution of the shutdown scripts (in case the computer crashes, for example).
|
||||
|
||||
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
|
||||
|
||||
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.
|
||||
|
||||
But the solution still isn't using /dev/random everywhere, but properly seeding each and every virtual machine after cloning, restoring a checkpoint, whatever.
|
||||
|
||||
### tldr;
|
||||
|
||||
Just use /dev/urandom!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2uo.de/myths-about-urandom/
|
||||
|
||||
作者:[Thomas Hühn][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2uo.de/
|
||||
[1]:https://www.2uo.de/myths-about-urandom/structure-no.png
|
||||
[2]:https://www.2uo.de/myths-about-urandom/structure-yes.png
|
||||
[3]:https://www.2uo.de/myths-about-urandom/structure-new.png
|
||||
[4]:http://blog.cr.yp.to/20140205-entropy.html
|
||||
[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
|
||||
[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
|
||||
[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
|
@ -0,0 +1,295 @@
|
||||
Prevent Files And Folders From Accidental Deletion Or Modification In Linux
|
||||
======
|
||||
|
||||

|
||||
|
||||
Sometimes, I accidentally “SHIFT+DELETE” my data. Yes, I am an idiot who doesn’t double-check what exactly I am going to delete. And I am too dumb or lazy to back up the data. Result? Data loss! Files are gone in a fraction of a second. I do it every now and then. If you’re anything like me, I’ve got good news. There is a simple yet useful command-line utility called **“chattr”** (short for **Ch**ange **Attr**ibute) which can be used to prevent files and folders from accidental deletion or modification on Unix-like systems. It applies or removes certain attributes on a file or folder in your Linux system, so that users can’t delete or modify those files and folders, either accidentally or intentionally, even as the root user. Sounds useful, doesn’t it?
|
||||
|
||||
In this brief tutorial, we are going to see how to use chattr in order to prevent files and folders from accidental deletion in Linux.
|
||||
|
||||
### Prevent Files And Folders From Accidental Deletion Or Modification In Linux
|
||||
|
||||
chattr is available by default in most modern Linux operating systems. Let us see some examples.
|
||||
|
||||
The default syntax of chattr command is:
|
||||
```
|
||||
chattr [operator] [switch] [filename]
|
||||
|
||||
```
|
||||
|
||||
chattr has the following operators.
|
||||
|
||||
* The operator **‘+’** causes the selected attributes to be added to the existing attributes of the files;
|
||||
* The operator **‘-‘** causes them to be removed;
|
||||
* The operator **‘=’** causes them to be the only attributes that the files have.
|
||||
|
||||
|
||||
|
||||
chattr has different attributes, namely **aAcCdDeijPsStTu**. Each letter applies a particular attribute to a file:
|
||||
|
||||
* **a** – append only,
|
||||
* **A** – no atime updates,
|
||||
* **c** – compressed,
|
||||
* **C** – no copy on write,
|
||||
* **d** – no dump,
|
||||
* **D** – synchronous directory updates,
|
||||
* **e** – extent format,
|
||||
* **i** – immutable,
|
||||
* **j** – data journalling,
|
||||
* **P** – project hierarchy,
|
||||
* **s** – secure deletion,
|
||||
* **S** – synchronous updates,
|
||||
* **t** – no tail-merging,
|
||||
* **T** – top of directory hierarchy,
|
||||
* **u** – undeletable.
|
||||
|
||||
|
||||
|
||||
In this tutorial, we are going to discuss the usage of two attributes, namely **a** and **i**, which are used to prevent the deletion of files and folders. That’s our topic today, isn’t it? Indeed!
|
||||
|
||||
### Prevent files from accidental deletion
|
||||
|
||||
Let me create a file called **file.txt** in my current directory.
|
||||
```
|
||||
$ touch file.txt
|
||||
|
||||
```
|
||||
|
||||
Now, I am going to apply the **“i”** attribute, which makes the file immutable. It means you can’t delete or modify the file, even if you’re the file owner or the root user.
|
||||
```
|
||||
$ sudo chattr +i file.txt
|
||||
|
||||
```
|
||||
|
||||
You can check the file attributes using command:
|
||||
```
|
||||
$ lsattr file.txt
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
----i---------e---- file.txt
|
||||
|
||||
```
|
||||
|
||||
Now, try to remove the file either as a normal user or with sudo privileges.
|
||||
```
|
||||
$ rm file.txt
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
rm: cannot remove 'file.txt': Operation not permitted
|
||||
|
||||
```
|
||||
|
||||
Let me try with sudo command:
|
||||
```
|
||||
$ sudo rm file.txt
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
rm: cannot remove 'file.txt': Operation not permitted
|
||||
|
||||
```
|
||||
|
||||
Let us try to append some contents to the text file.
|
||||
```
|
||||
$ echo 'Hello World!' >> file.txt
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
bash: file.txt: Operation not permitted
|
||||
|
||||
```
|
||||
|
||||
Try with **sudo** privilege:
|
||||
```
|
||||
$ sudo echo 'Hello World!' >> file.txt
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
bash: file.txt: Operation not permitted
|
||||
|
||||
```
|
||||
|
||||
As you noticed in the above outputs, we can’t delete or modify the file even as the root user or the file owner. (Note that in the `sudo echo` example the redirection is performed by your shell, not by sudo; but even `sudo tee -a file.txt` would fail here, because the immutable attribute binds root as well.)
|
||||
|
||||
To revoke the attribute, just use the **“-i”** switch as shown below.
|
||||
```
|
||||
$ sudo chattr -i file.txt
|
||||
|
||||
```
|
||||
|
||||
Now, the immutable attribute has been removed. You can now delete or modify the file.
|
||||
```
|
||||
$ rm file.txt
|
||||
|
||||
```
|
||||
|
||||
Similarly, you can restrict the directories from accidental deletion or modification as described in the next section.
|
||||
|
||||
### Prevent folders from accidental deletion and modification
|
||||
|
||||
Create a directory called dir1 and a file called file.txt inside this directory.
|
||||
```
|
||||
$ mkdir dir1 && touch dir1/file.txt
|
||||
|
||||
```
|
||||
|
||||
Now, make this directory and its contents (file.txt) immutable using command:
|
||||
```
|
||||
$ sudo chattr -R +i dir1
|
||||
|
||||
```
|
||||
|
||||
Where,
|
||||
|
||||
* **-R** – will make dir1 and its contents immutable recursively.
|
||||
* **+i** – makes the directory immutable.
|
||||
|
||||
|
||||
|
||||
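You can verify that the attribute really was applied recursively using lsattr; a quick sketch (the exact output columns may vary between e2fsprogs versions):

```
$ lsattr -d dir1
----i---------e---- dir1

$ lsattr dir1
----i---------e---- dir1/file.txt
```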
Now, try to delete the directory either as a normal user or with sudo.
|
||||
```
|
||||
$ rm -fr dir1
|
||||
|
||||
$ sudo rm -fr dir1
|
||||
|
||||
```
|
||||
|
||||
You will get the following output:
|
||||
```
|
||||
rm: cannot remove 'dir1/file.txt': Operation not permitted
|
||||
|
||||
```
|
||||
|
||||
Try to append some contents to the file using the “echo” command. Did it work? Of course not!
|
||||
|
||||
To revoke the attributes, run:
|
||||
```
|
||||
$ sudo chattr -R -i dir1
|
||||
|
||||
```
|
||||
|
||||
Now, you can delete or modify the contents of this directory as usual.
|
||||
|
||||
### Prevent files and folders from accidental deletion, but allow append operation
|
||||
|
||||
We now know how to prevent files and folders from accidental deletion and modification. Next, we are going to prevent files and folders from deletion while still allowing writing in append mode. That means you can’t edit or modify the existing data in the file, rename the file, or delete the file. You can only open the file for writing in append mode.
|
||||
|
||||
To set the append-only attribute on a file or directory, we do the following.
|
||||
|
||||
**For files:**
|
||||
```
|
||||
$ sudo chattr +a file.txt
|
||||
|
||||
```
|
||||
|
||||
**For directories:**
|
||||
```
|
||||
$ sudo chattr -R +a dir1
|
||||
|
||||
```
|
||||
|
||||
A file or folder with the ‘a’ attribute set can only be opened in append mode for writing.
|
||||
|
||||
Add some contents to the file(s) to check whether it works or not.
|
||||
```
|
||||
$ echo 'Hello World!' >> file.txt
|
||||
|
||||
$ echo 'Hello World!' >> dir1/file.txt
|
||||
|
||||
```
|
||||
|
||||
Check the file contents using cat command:
|
||||
```
|
||||
$ cat file.txt
|
||||
|
||||
$ cat dir1/file.txt
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Hello World!
|
||||
|
||||
```
|
||||
|
||||
You will see that you are now able to append contents to the files; writing in append mode works even though the files are otherwise protected from modification and deletion.
|
||||
|
||||
Let us try to delete the file or folder now.
|
||||
```
|
||||
$ rm file.txt
|
||||
|
||||
```
|
||||
|
||||
**Output:**
|
||||
```
|
||||
rm: cannot remove 'file.txt': Operation not permitted
|
||||
|
||||
```
|
||||
|
||||
Let us try to delete the folder:
|
||||
```
|
||||
$ rm -fr dir1/
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
rm: cannot remove 'dir1/file.txt': Operation not permitted
|
||||
|
||||
```
|
||||
|
||||
To remove the attributes, run the following commands:
|
||||
|
||||
**For files:**
|
||||
```
|
||||
$ sudo chattr -a file.txt
|
||||
|
||||
```
|
||||
|
||||
**For directories:**
|
||||
```
|
||||
$ sudo chattr -R -a dir1/
|
||||
|
||||
```
|
||||
|
||||
Now, you can delete or modify the files and folders as usual.
|
||||
|
||||
For more details, refer to the man page:
|
||||
```
|
||||
man chattr
|
||||
|
||||
```
|
||||
|
||||
### Wrapping up
|
||||
|
||||
Data protection is one of the main jobs of a system administrator. There is plenty of free and commercial data protection software available on the market. Luckily, we’ve got this built-in tool that helps us protect data from accidental deletion or modification. chattr can be used as an additional tool to protect the important system files and data in your Linux system.
|
||||
|
||||
And, that’s all for today. Hope this helps. I will be soon here with another useful article. Until then, stay tuned with OSTechNix!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
@ -1,97 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Monitoring network bandwidth with iftop command
|
||||
======
|
||||
System admins are required to monitor IT infrastructure to make sure that everything is up & running. We have to monitor the performance of hardware, i.e. memory, HDDs & CPUs, and we also have to monitor our network. We need to make sure that our network is not being over-utilised, or our applications and websites might not work. In this tutorial, we are going to learn to use the iftop utility.
|
||||
|
||||
( **Recommended read** :[ **Resource monitoring using Nagios**][1], [**Tools for checking system info**,][2] [**Important logs to monitor**][3])
|
||||
|
||||
Iftop is a network monitoring utility that provides real-time bandwidth monitoring. It measures the total data moving in & out of individual socket connections, i.e. it captures packets moving in and out via the network adapter & then sums those up to find the bandwidth being utilized.
|
||||
|
||||
## Installation on Debian/Ubuntu
|
||||
|
||||
Iftop is available in the default repositories of Debian/Ubuntu & can be installed using the command below,
|
||||
|
||||
```
|
||||
$ sudo apt-get install iftop
|
||||
```
|
||||
|
||||
## Installation on RHEL/Centos using yum
|
||||
|
||||
To install iftop on CentOS or RHEL, we need to enable the EPEL repository. To enable the repository, run the following in your terminal,
|
||||
|
||||
### RHEL/CentOS 7
|
||||
|
||||
```
|
||||
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
|
||||
```
|
||||
|
||||
### RHEL/CentOS 6 (64 Bit)
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
### RHEL/CentOS 6 (32 Bit)
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
After the EPEL repository has been enabled, we can now install iftop by running,
|
||||
|
||||
```
|
||||
$ yum install iftop
|
||||
```
|
||||
|
||||
This will install the iftop utility on your system. We will now use it to monitor our network.
|
||||
|
||||
## Using IFTOP
|
||||
|
||||
You can start using iftop by opening your terminal window & typing,
|
||||
|
||||
```
|
||||
$ iftop
|
||||
```
|
||||
|
||||
![network monitoring][5]
|
||||
|
||||
You will now be presented with network activity happening on your machine. You can also use
|
||||
|
||||
```
|
||||
$ iftop -n
|
||||
```
|
||||
|
||||
This will present the same network information on your screen, but with '-n' you will not see the names related to the IP addresses, only the IP addresses themselves. This option saves the bandwidth that would otherwise go into resolving IP addresses to names.
|
||||
|
||||
We can also see all the commands that can be used with iftop: once you have run iftop, press the 'h' key on the keyboard to see them.
|
||||
|
||||
![network monitoring][7]
|
||||
|
||||
To monitor a particular network interface, we can specify the interface with the '-i' option,
|
||||
|
||||
```
|
||||
$ iftop -i enp0s3
|
||||
```
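The options can also be combined. For example, the following sketch (the interface name 'enp0s3' is just an example) monitors one interface with hostname resolution disabled & port display turned on:

```
$ iftop -i enp0s3 -n -P
```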
|
||||
|
||||
You can check further iftop options using the help screen, as mentioned above, but the examples shown here should cover most of what you need to monitor your network.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/monitoring-network-bandwidth-iftop-command/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/installing-configuring-nagios-server/
|
||||
[2]:http://linuxtechlab.com/commands-system-hardware-info/
|
||||
[3]:http://linuxtechlab.com/important-logs-monitor-identify-issues/
|
||||
[4]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=661%2C424
|
||||
[5]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-1.jpg?resize=661%2C424
|
||||
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=663%2C416
|
||||
[7]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-help.jpg?resize=663%2C416
|
@ -0,0 +1,164 @@
|
||||
Create your first Ansible server (automation) setup
|
||||
======
|
||||
Automation/configuration management tools are the new craze in the IT world, and organizations are moving towards adopting them. There are many tools available in the market, like Puppet, Chef, Ansible etc., & in this tutorial, we are going to learn about Ansible.
|
||||
|
||||
Ansible is an open source configuration tool that is used to deploy, configure & manage servers. Ansible is one of the easiest automation tools to learn and master. It does not require you to learn a complicated programming language like Ruby (used in Puppet & Chef); it uses YAML, which is a very simple language. Also, it does not require any special agent to be installed on client machines; it only requires the client machines to have Python and SSH installed, both of which are usually available on systems.
|
||||
|
||||
## Pre-requisites
|
||||
|
||||
Before we move onto installation part, let's discuss the pre-requisites for Ansible
|
||||
|
||||
1. For the server, we will need a machine with either CentOS or RHEL 7 installed & the EPEL repository enabled
|
||||
|
||||
To enable the EPEL repository, use the commands below,
|
||||
|
||||
**RHEL/CentOS 7**
|
||||
|
||||
```
|
||||
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
|
||||
```
|
||||
|
||||
**RHEL/CentOS 6 (64 Bit)**
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
**RHEL/CentOS 6 (32 Bit)**
|
||||
|
||||
```
|
||||
$ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
|
||||
```
|
||||
|
||||
2. For client machines, OpenSSH & Python should be installed. Also, we need to configure passwordless login for SSH sessions (by creating public/private keys). To create the keys & configure passwordless SSH login, refer to our article "
|
||||
|
||||
[Setting up SSH Server for Public/Private keys based Authentication (Password-less login)][1]"
|
||||
|
||||
|
||||
|
||||
## Installation
|
||||
|
||||
Once we have the EPEL repository enabled, we can now install ansible using yum,
|
||||
|
||||
```
|
||||
$ yum install ansible
|
||||
```
|
||||
|
||||
## Configuring Ansible hosts
|
||||
|
||||
We will now configure the hosts that we want Ansible to manage. To do that, we need to edit the file **/etc/ansible/hosts** & add the clients in the following syntax,
|
||||
|
||||
```
|
||||
[group-name]
|
||||
alias ansible_ssh_host=host_IP_address
|
||||
```
|
||||
|
||||
where alias is the alias name given to the host we are adding & it can be anything,
|
||||
|
||||
& host_IP_address is where we enter the IP address of the host.
|
||||
|
||||
For this tutorial, we are going to add 2 clients/hosts for ansible to manage, so let's create an entry for these two hosts in the configuration file,
|
||||
|
||||
```
|
||||
$ vi /etc/ansible/hosts
|
||||
[test_clients]
|
||||
client1 ansible_ssh_host=192.168.1.101
|
||||
client2 ansible_ssh_host=192.168.1.10
|
||||
```
|
||||
|
||||
Save the file & exit. Now, as mentioned in the pre-requisites, we should have passwordless login to these clients from the Ansible server. To check if that's the case, SSH into a client; we should be able to log in without a password,
|
||||
|
||||
```
|
||||
$ ssh root@192.168.1.101
|
||||
```
|
||||
|
||||
If that's working, then we can move further; otherwise we need to create public/private keys for the SSH session (refer to the article mentioned above in the pre-requisites).
|
||||
|
||||
We are using root to log in to the other servers, but we can use other local users as well, & we need to tell Ansible whichever user we will be using. To do so, we will first create a folder named 'group_vars' in '/etc/ansible',
|
||||
|
||||
```
|
||||
$ cd /etc/ansible
|
||||
$ mkdir group_vars
|
||||
```
|
||||
|
||||
Next, we will create a file inside it named after the group we have created in '/etc/ansible/hosts', i.e. test_clients,
|
||||
|
||||
```
|
||||
$ vi group_vars/test_clients
|
||||
```
|
||||
|
||||
& add the following information about the user,
|
||||
|
||||
```
|
||||
---
|
||||
ansible_ssh_user: root
|
||||
```
|
||||
|
||||
**Note:** The file starts with '---' (three hyphens, the YAML document marker), so take note of that.
|
||||
|
||||
If we want to use the same user for all the groups created, then we can create a single file named 'all' to hold the user details for SSH login, instead of creating a file for every group.
|
||||
|
||||
```
|
||||
$ vi /etc/ansible/group_vars/all
|
||||
---
|
||||
ansible_ssh_user: root
|
||||
```
|
||||
|
||||
Similarly, we can set up files for individual hosts as well, as sketched below.
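For example, here is a sketch of a per-host file (the 'host_vars' directory is Ansible's standard per-host counterpart of 'group_vars'; the alias must match the one in /etc/ansible/hosts):

```
$ mkdir /etc/ansible/host_vars
$ vi /etc/ansible/host_vars/client1
---
ansible_ssh_user: root
```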
|
||||
|
||||
Now, the setup for the clients has been done. We will now push some simple commands to all the clients being managed by Ansible.
|
||||
|
||||
## Testing hosts
|
||||
|
||||
To check the connectivity of all the hosts, we will issue a command,
|
||||
|
||||
```
|
||||
$ ansible -m ping all
|
||||
```
|
||||
|
||||
If all the hosts are properly connected, it should return the following output,
|
||||
|
||||
```
|
||||
client1 | SUCCESS => {
|
||||
" changed": false,
|
||||
" ping": "pong"
|
||||
}
|
||||
client2 | SUCCESS => {
|
||||
" changed": false,
|
||||
" ping": "pong"
|
||||
}
|
||||
```
|
||||
|
||||
We can also issue the command to an individual host,
|
||||
|
||||
```
|
||||
$ ansible -m ping client1
|
||||
```
|
||||
|
||||
or to multiple hosts,
|
||||
|
||||
```
|
||||
$ ansible -m ping client1:client2
|
||||
```
|
||||
|
||||
or even to a single group,
|
||||
|
||||
```
|
||||
$ ansible -m ping test_clients
|
||||
```
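Pinging is not all we can do; the same syntax pushes ad-hoc commands to the managed hosts. A short sketch using Ansible's standard 'command' and 'shell' modules ('uptime' and 'df -h /' are arbitrary example commands):

```
$ ansible test_clients -m command -a "uptime"
$ ansible all -m shell -a "df -h /"
```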
|
||||
|
||||
This completes our tutorial on setting up an Ansible server; in future posts we will further explore the functionality offered by Ansible. If you have any doubts or queries regarding this post, use the comment box below.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/create-first-ansible-server-automation-setup/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/configure-ssh-server-publicprivate-key/
|
@ -1,131 +0,0 @@
|
||||
How to Use the ZFS Filesystem on Ubuntu Linux
|
||||
======
|
||||
There are a myriad of [filesystems available for Linux][1]. So why try a new one? They all work, right? They're not all the same, and some have some very distinct advantages, like ZFS.
|
||||
|
||||
### Why ZFS
|
||||
|
||||
ZFS is awesome. It's a truly modern filesystem with built-in capabilities that make sense for handling loads of data.
|
||||
|
||||
Now, if you're considering ZFS for your ultra-fast NVMe SSD, it might not be the best option. It's slower than others. That's okay, though. It was designed to store huge amounts of data and keep it safe.
|
||||
|
||||
ZFS eliminates the need to set up traditional RAID arrays. Instead, you can create ZFS pools, and even add drives to those pools at any time. ZFS pools behave almost exactly like RAID, but the functionality is built right into the filesystem.
|
||||
|
||||
ZFS also acts as a replacement for LVM, allowing you to create and manage partitions on the fly without the need to handle things at a lower level or worry about the associated risks.
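As a sketch of that LVM-like flexibility, you can carve a dataset out of a pool and resize its quota on the fly (the names 'your-pool/data' and the 10G size are arbitrary examples):

```
sudo zfs create your-pool/data
sudo zfs set quota=10G your-pool/data
```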
|
||||
|
||||
It's also a CoW (copy-on-write) filesystem. Without getting too technical, that means that ZFS protects your data from gradual corruption over time. ZFS creates checksums of files and lets you roll back those files to a previous working version.
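A minimal sketch of that rollback workflow, assuming a dataset named 'your-pool/data' exists:

```
sudo zfs snapshot your-pool/data@known-good
sudo zfs rollback your-pool/data@known-good
```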
|
||||
|
||||
### Installing ZFS
|
||||
|
||||
![Install ZFS on Ubuntu][2]
|
||||
|
||||
Installing ZFS on Ubuntu is very easy, though the process is slightly different for Ubuntu LTS and the latest releases.
|
||||
|
||||
**Ubuntu 16.04 LTS**
|
||||
```
|
||||
sudo apt install zfs
|
||||
```
|
||||
|
||||
**Ubuntu 17.04 and Later**
|
||||
```
|
||||
sudo apt install zfsutils
|
||||
```
|
||||
|
||||
After you have the utilities installed, you can create ZFS drives and partitions using the tools provided by ZFS.
|
||||
|
||||
### Creating Pools
|
||||
|
||||
![Create ZFS Pool][3]
|
||||
|
||||
Pools are the rough equivalent of RAID in ZFS. They are flexible and can easily be manipulated.
|
||||
|
||||
#### RAID0
|
||||
|
||||
RAID0 just pools your drives into what behaves like one giant drive. It can increase your drive speeds, but if one of your drives fails, you're probably going to be out of luck.
|
||||
|
||||
To achieve RAID0 with ZFS, just create a plain pool.
|
||||
```
|
||||
sudo zpool create your-pool /dev/sdc /dev/sdd
|
||||
```
|
||||
|
||||
#### RAID1/MIRROR
|
||||
|
||||
You can achieve RAID1 functionality with the `mirror` keyword in ZFS. RAID1 creates a 1-to-1 copy of your drive, which means that your data is constantly duplicated. It can also increase read performance. Of course, you lose half of your storage to the duplication.
|
||||
```
|
||||
sudo zpool create your-pool mirror /dev/sdc /dev/sdd
|
||||
```
|
||||
|
||||
#### RAID5/RAIDZ1
|
||||
|
||||
ZFS implements RAID5 functionality as RAIDZ1. RAIDZ1 requires at least three drives; with three drives, you keep 2/3 of your storage space, with backup parity data occupying the remaining 1/3. If one drive fails, the array will remain online, but the failed drive should be replaced ASAP.
|
||||
```
|
||||
sudo zpool create your-pool raidz1 /dev/sdc /dev/sdd /dev/sde
|
||||
```
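Replacing a failed drive is a single command. A sketch, where /dev/sdd is the failed member of the pool above and /dev/sdg is a hypothetical replacement disk:

```
sudo zpool replace your-pool /dev/sdd /dev/sdg
```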
|
||||
|
||||
#### RAID6/RAIDZ2
|
||||
|
||||
RAID6 is almost exactly like RAID5, but ZFS's RAIDZ2 requires at least four drives instead of three. It doubles the parity data to allow up to two drives to fail without bringing the array down.
|
||||
```
|
||||
sudo zpool create your-pool raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf
|
||||
```
|
||||
|
||||
#### RAID10/Striped Mirror
|
||||
|
||||
RAID10 aims to be the best of both worlds by providing both a speed increase and data redundancy with striping. You need an even number of drives (at least four) and will only have access to half of the space. You can create a RAID10 pool by specifying two mirrors in the same pool command.
|
||||
```
|
||||
sudo zpool create your-pool mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
|
||||
```
|
||||
|
||||
### Working With Pools
|
||||
|
||||
![ZFS pool Status][4]
|
||||
|
||||
There are also some management commands you can use to work with your pools once you've created them. First, check the status of your pools.
|
||||
```
|
||||
sudo zpool status
|
||||
```
|
||||
|
||||
#### Updates
|
||||
|
||||
When you update ZFS, you'll need to upgrade your pools, too. Your pools will notify you of available upgrades when you check their status. To upgrade a pool, run the following command.
|
||||
```
|
||||
sudo zpool upgrade your-pool
|
||||
```
|
||||
|
||||
You can also upgrade them all.
|
||||
```
|
||||
sudo zpool upgrade -a
|
||||
```
|
||||
|
||||
#### Adding Drives
|
||||
|
||||
You can also add drives to your pools at any time. Tell `zpool` the name of the pool and the location of the drive, and it'll take care of everything.
|
||||
```
|
||||
sudo zpool add your-pool /dev/sdx
|
||||
```
|
||||
|
||||
### Other Thoughts
|
||||
|
||||
![ZFS in File Browser][5]
|
||||
|
||||
ZFS creates a directory in the root filesystem for your pools. You can browse to them by name using your GUI file manager or the CLI.
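For example, assuming the pool name used earlier:

```
sudo zfs list
ls /your-pool
```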
|
||||
|
||||
ZFS is awesomely powerful, and there are plenty of other things that you can do with it, too, but these are the basics. It is an excellent filesystem for working with loads of storage, even if it is just a RAID array of hard drives that you use for your files. ZFS works excellently with NAS systems, too.
|
||||
|
||||
Regardless of how stable and robust ZFS is, it's always best to back up your data when you implement something new on your hard drives.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.maketecheasier.com/use-zfs-filesystem-ubuntu-linux/
|
||||
|
||||
作者:[Nick Congleton][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.maketecheasier.com/author/nickcongleton/
|
||||
[1]:https://www.maketecheasier.com/best-linux-filesystem-for-ssd/
|
||||
[2]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-install.jpg (Install ZFS on Ubuntu)
|
||||
[3]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-create-pool.jpg (Create ZFS Pool)
|
||||
[4]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-pool-status.jpg (ZFS pool Status)
|
||||
[5]:https://www.maketecheasier.com/assets/uploads/2017/09/zfs-pool-open.jpg (ZFS in File Browser)
|
@ -1,3 +1,6 @@
|
||||
Translating by erialin
|
||||
|
||||
|
||||
Linux Gunzip Command Explained with Examples
|
||||
======
|
||||
|
||||
|
sources/tech/20171101 -dev-[u]random- entropy explained.md
Normal file
@ -0,0 +1,108 @@
|
||||
/dev/[u]random: entropy explained
|
||||
======
|
||||
### Entropy
|
||||
|
||||
When the topic of /dev/random and /dev/urandom comes up, you always hear this word: “Entropy”. Everyone seems to have their own analogy for it. So why not me? I like to think of Entropy as “random juice”. It is the juice required for random to be more random.
|
||||
|
||||
If you have ever generated an SSL certificate, or a GPG key, you may have seen something like:
|
||||
```
|
||||
We need to generate a lot of random bytes. It is a good idea to perform
|
||||
some other action (type on the keyboard, move the mouse, utilize the
|
||||
disks) during the prime generation; this gives the random number
|
||||
generator a better chance to gain enough entropy.
|
||||
++++++++++..+++++.+++++++++++++++.++++++++++...+++++++++++++++...++++++
|
||||
+++++++++++++++++++++++++++++.+++++..+++++.+++++.+++++++++++++++++++++++++>.
|
||||
++++++++++>+++++...........................................................+++++
|
||||
Not enough random bytes available. Please do some other work to give
|
||||
the OS a chance to collect more entropy! (Need 290 more bytes)
|
||||
|
||||
```
|
||||
|
||||
|
||||
By typing on the keyboard, and moving the mouse, you help generate Entropy, or Random Juice.
|
||||
|
||||
You might be asking yourself: why do I need Entropy, and why is it so important for random to be actually random? Well, let's say our Entropy was limited to keyboard, mouse, and disk I/O. But our system is a server, so I know there is no mouse and keyboard input. This means the only factor is your disk I/O. If it is a single disk that was barely used, you will have low Entropy. This means your system's ability to be random is weak. In other words, I could play the probability game and significantly decrease the amount of time it would take to crack things like your SSH keys, or to decrypt what you thought was an encrypted session.
|
||||
|
||||
Okay, but that is pretty unrealistic, right? No, actually it isn’t. Take a look at this [Debian OpenSSH Vulnerability][1]. This particular issue was caused by someone removing some of the code responsible for adding Entropy. Rumor has it they removed it because it was causing valgrind to throw warnings. However, in doing that, random became MUCH less random. In fact, so much less that brute-forcing the private SSH keys generated is now a feasible attack vector.
|
||||
|
||||
Hopefully by now we understand how important Entropy is to security, whether you realize you are using it or not.
|
||||
|
||||
### /dev/random & /dev/urandom
|
||||
|
||||
|
||||
/dev/urandom is a Pseudo-Random Number Generator, and it **does not** block if you run out of Entropy.
|
||||
/dev/random is a True Random Number Generator, and it **does** block if you run out of Entropy.
|
||||
|
||||
Most often, if we are dealing with something pragmatic and it doesn’t contain the keys to your nukes, /dev/urandom is the right choice. Otherwise, if you go with /dev/random, then when the system runs out of Entropy your application is just going to behave funny. Whether it outright fails or just hangs until it has enough Entropy depends on how you wrote your application.
|
||||
|
||||
### Checking the Entropy
|
||||
|
||||
So, how much Entropy do you have?
|
||||
```
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/poolsize
|
||||
4096
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
|
||||
2975
|
||||
[root@testbox test]#
|
||||
|
||||
```
|
||||
|
||||
/proc/sys/kernel/random/poolsize, to state the obvious, is the size (in bits) of the Entropy pool, i.e. how much random-juice we can save before we stop pumping more. /proc/sys/kernel/random/entropy_avail is the amount (in bits) of random-juice currently in the pool.
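If you want to watch the pool drain and refill in real time, a simple sketch (assuming the `watch` utility is installed):

```
[root@testbox test]# watch -n1 cat /proc/sys/kernel/random/entropy_avail
```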
|
||||
|
||||
### How can we influence this number?
|
||||
|
||||
The number is drained as we use it. The crudest example I can come up with is catting /dev/random into /dev/null:
|
||||
```
|
||||
[root@testbox test]# cat /dev/random > /dev/null &
|
||||
[1] 19058
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
|
||||
0
|
||||
[root@testbox test]# cat /proc/sys/kernel/random/entropy_avail
|
||||
1
|
||||
[root@testbox test]#
|
||||
|
||||
```
|
||||
|
||||
The easiest way to influence this is to run [Haveged][2]. Haveged is a daemon that uses processor “flutter” to add Entropy to the system's Entropy pool. Installation and basic setup are pretty straightforward:
|
||||
```
|
||||
[root@b08s02ur ~]# systemctl enable haveged
|
||||
Created symlink from /etc/systemd/system/multi-user.target.wants/haveged.service to /usr/lib/systemd/system/haveged.service.
|
||||
[root@b08s02ur ~]# systemctl start haveged
|
||||
[root@b08s02ur ~]#
|
||||
|
||||
```
|
||||
|
||||
On a machine with relatively moderate traffic:
|
||||
```
|
||||
[root@testbox ~]# pv /dev/random > /dev/null
|
||||
40 B 0:00:15 [ 0 B/s] [ <=> ]
|
||||
52 B 0:00:23 [ 0 B/s] [ <=> ]
|
||||
58 B 0:00:25 [5.92 B/s] [ <=> ]
|
||||
64 B 0:00:30 [6.03 B/s] [ <=> ]
|
||||
^C
|
||||
[root@testbox ~]# systemctl start haveged
|
||||
[root@testbox ~]# pv /dev/random > /dev/null
|
||||
7.12MiB 0:00:05 [1.43MiB/s] [ <=> ]
|
||||
15.7MiB 0:00:11 [1.44MiB/s] [ <=> ]
|
||||
27.2MiB 0:00:19 [1.46MiB/s] [ <=> ]
|
||||
43MiB 0:00:30 [1.47MiB/s] [ <=> ]
|
||||
^C
|
||||
[root@testbox ~]#
|
||||
|
||||
```
|
||||
|
||||
Using pv we are able to see how much data we are passing through the pipe. As you can see, before haveged we were getting roughly 2.1 bytes per second (B/s), whereas after starting haveged and adding processor flutter to our Entropy pool, we get ~1.5 MiB/sec.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://jhurani.com/linux/2017/11/01/entropy-explained.html
|
||||
|
||||
作者:[James J][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://jblevins.org/log/ssh-vulnkey
|
||||
[1]:https://jblevins.org/log/ssh-vulnkey
|
||||
[2]:http://www.issihosts.com/haveged/
|
@ -1,49 +0,0 @@
translating-----geekpi

3 Essential Questions to Ask at Your Next Tech Interview
======



Interviewing can be stressful, but 58 percent of companies tell Dice and the Linux Foundation that they need to hire open source talent in the months ahead. Learn how to ask the right questions.

The Linux Foundation

The annual [Open Source Jobs Report][1] from Dice and The Linux Foundation reveals a lot about prospects for open source professionals and hiring activity in the year ahead. In this year's report, 86 percent of tech professionals said that knowing open source has advanced their careers. Yet what happens with all that experience when it comes time to advance within their own organization or to apply for a new role elsewhere?

Interviewing for a new job is never easy. Aside from the complexities of juggling your current work while preparing for a new role, there's the added pressure of coming up with the necessary response when the interviewer asks "Do you have any questions for me?"

At Dice, we're in the business of careers, advice, and connecting tech professionals with employers. But we also hire tech talent at our organization to work on open source projects. In fact, the Dice platform is based on a number of Linux distributions, and we leverage open source databases as the basis for our search functionality. In short, we couldn't run Dice without open source software; therefore, it's vital that we hire professionals who understand, and love, open source.

Over the years, I've learned the importance of asking good questions during an interview. It's an opportunity to learn about your potential new employer, as well as to better understand whether they are a good match for your skills.

Here are three essential questions to ask and the reasons they're important:

**1\. What is the company's position on employees contributing to open source projects or writing code in their spare time?**

The answer to this question will tell you a lot about the company you're interviewing with. In general, companies will want tech pros who contribute to websites or projects as long as those contributions don't conflict with the work you're doing at that firm. Allowing this outside the company also fosters an entrepreneurial spirit among the tech organization and teaches tech skills that you may not otherwise get in the normal course of your day.

**2\. How are projects prioritized here?**

As all companies have become tech companies, there is often a division between innovative customer-facing tech projects and those that improve the platform itself. Will you be working on keeping the existing platform up to date? Or working on new products for the public? Depending on where your interests lie, the answer could determine if the company is the right fit for you.

**3\. Who primarily makes decisions on new products, and how much input do developers have in the decision-making process?**

This question is one part understanding who is responsible for innovation at the company (and how closely you'll be working with him/her) and one part discovering your career path at the firm. A good company will talk to its developers and open source talent ahead of developing new products. It seems like a no-brainer, but it's a step that's sometimes missed and will mean the difference between a collaborative environment and a chaotic process ahead of new product releases.

Interviewing can be stressful; however, with 58 percent of companies telling Dice and The Linux Foundation that they need to hire open source talent in the months ahead, it's important to remember that the heightened demand puts professionals like you in the driver's seat. Steer your career in the direction you desire.

[Download][2] the full 2017 Open Source Jobs Report now.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/os-jobs/2017/12/3-essential-questions-ask-your-next-tech-interview

作者:[Brian Hostetter][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/brianhostetter
[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
[2]:http://bit.ly/2017OSSjobsreport

@ -1,211 +0,0 @@
Translating by qhwdw

# Tutorial on how to write basic udev rules in Linux

Contents

  * [1. Objective][4]
  * [2. Requirements][5]
  * [3. Difficulty][6]
  * [4. Conventions][7]
  * [5. Introduction][8]
  * [6. How rules are organized][9]
  * [7. The rules syntax][10]
  * [8. A test case][11]
  * [9. Operators][12]
    * [9.1.1. == and != operators][1]
    * [9.1.2. The assignment operators: = and :=][2]
    * [9.1.3. The += and -= operators][3]
  * [10. The keys we used][13]

### Objective

Understand the basic concepts behind udev, and learn how to write simple rules.

### Requirements

  * Root permissions

### Difficulty

MEDIUM

### Conventions

  * **#** - requires the given command to be executed with root privileges, either directly as the root user or by use of the `sudo` command
  * **$** - the given command is to be executed as a regular non-privileged user

### Introduction

In a GNU/Linux system, while low-level device support is handled at the kernel level, the management of events related to those devices is handled in userspace by `udev`, and more precisely by the `udevd` daemon. Learning how to write rules to be applied when those events occur can be really useful for modifying the behavior of the system and adapting it to our needs.

### How rules are organized

Udev rules are defined in files with the `.rules` extension. There are two main locations in which those files can be placed: `/usr/lib/udev/rules.d` is the directory used for system-installed rules, while `/etc/udev/rules.d/` is reserved for custom-made rules.

The files in which the rules are defined are conventionally named with a number as a prefix (e.g. `50-udev-default.rules`) and are processed in lexical order, independently of the directory they are in. Files installed in `/etc/udev/rules.d`, however, override those with the same name installed in the system default path.

### The rules syntax

The syntax of udev rules is not very complicated once you understand the logic behind it. A rule is composed of two main sections: the "match" part, in which we define the conditions for the rule to be applied, using a series of keys separated by commas, and the "action" part, in which we perform some kind of action when the conditions are met.

### A test case

What better way to explain the possible options than to configure an actual rule? As an example, we are going to define a rule to disable the touchpad when a mouse is connected. Obviously, the attributes provided in the rule definition will reflect my hardware.

We will write our rule in the `/etc/udev/rules.d/99-togglemouse.rules` file with the help of our favorite text editor. A rule definition can span multiple lines, but if that's the case, a backslash must be used before the newline character as a line continuation, just as in shell scripts. Here is our rule:

```
ACTION=="add" \
, ATTRS{idProduct}=="c52f" \
, ATTRS{idVendor}=="046d" \
, ENV{DISPLAY}=":0" \
, ENV{XAUTHORITY}="/run/user/1000/gdm/Xauthority" \
, RUN+="/usr/bin/xinput --disable 16"
```

Let's analyze it.

### Operators

First of all, an explanation of the operators used and available:

#### == and != operators

The `==` is the equality operator and the `!=` is the inequality operator. By using them we establish that, for the rule to be applied, the defined keys must match, or not match, the defined value respectively.

#### The assignment operators: = and :=

The `=` assignment operator is used to assign a value to the keys that accept one. We use the `:=` operator, instead, when we want to assign a value and make sure that it is not overridden by other rules: values assigned with this operator, in fact, cannot be altered.

#### The += and -= operators

The `+=` and `-=` operators are used respectively to add a value to, or remove a value from, the list of values defined for a specific key.

### The keys we used

Let's now analyze the keys we used in the rule. First of all, we have the `ACTION` key: by using it, we specified that our rule is to be applied when a specific event happens for the device. Valid values are `add`, `remove` and `change`.

We then used the `ATTRS` keyword to specify an attribute to be matched. We can list a device's attributes by using the `udevadm info` command, providing its name or `sysfs` path:

```
udevadm info -ap /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

  looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39':
    KERNEL=="input39"
    SUBSYSTEM=="input"
    DRIVER==""
    ATTR{name}=="Logitech USB Receiver"
    ATTR{phys}=="usb-0000:00:1d.0-1.2/input1"
    ATTR{properties}=="0"
    ATTR{uniq}==""

  looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010':
    KERNELS=="0003:046D:C52F.0010"
    SUBSYSTEMS=="hid"
    DRIVERS=="hid-generic"
    ATTRS{country}=="00"

  looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1':
    KERNELS=="2-1.2:1.1"
    SUBSYSTEMS=="usb"
    DRIVERS=="usbhid"
    ATTRS{authorized}=="1"
    ATTRS{bAlternateSetting}==" 0"
    ATTRS{bInterfaceClass}=="03"
    ATTRS{bInterfaceNumber}=="01"
    ATTRS{bInterfaceProtocol}=="00"
    ATTRS{bInterfaceSubClass}=="00"
    ATTRS{bNumEndpoints}=="01"
    ATTRS{supports_autosuspend}=="1"

  looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2':
    KERNELS=="2-1.2"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{authorized}=="1"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bDeviceClass}=="00"
    ATTRS{bDeviceProtocol}=="00"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bMaxPacketSize0}=="8"
    ATTRS{bMaxPower}=="98mA"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{bNumInterfaces}==" 2"
    ATTRS{bcdDevice}=="3000"
    ATTRS{bmAttributes}=="a0"
    ATTRS{busnum}=="2"
    ATTRS{configuration}=="RQR30.00_B0009"
    ATTRS{devnum}=="12"
    ATTRS{devpath}=="1.2"
    ATTRS{idProduct}=="c52f"
    ATTRS{idVendor}=="046d"
    ATTRS{ltm_capable}=="no"
    ATTRS{manufacturer}=="Logitech"
    ATTRS{maxchild}=="0"
    ATTRS{product}=="USB Receiver"
    ATTRS{quirks}=="0x0"
    ATTRS{removable}=="removable"
    ATTRS{speed}=="12"
    ATTRS{urbnum}=="1401"
    ATTRS{version}==" 2.00"

[...]
```

Above is the truncated output of the command. As you can see from the output itself, `udevadm` starts with the path that we specified and gives us information about all the parent devices. Notice that the attributes of the device are reported in singular form (e.g. `KERNEL`), while the parent ones are in plural form (e.g. `KERNELS`). The parent information can be part of a rule, but only one of the parents can be referenced at a time: mixing attributes of different parent devices will not work. In the rule we defined above, we used the attributes of one parent device: `idProduct` and `idVendor`.

The next thing we did in our rule is use the `ENV` keyword: it can be used both to set environment variables and to try to match them. We assigned a value to the `DISPLAY` and `XAUTHORITY` variables. Those variables are essential when interacting with the X server programmatically, to set up some needed information: with the `DISPLAY` variable we specify on what machine the server is running, and what display and screen we are referencing, and with `XAUTHORITY` we provide the path to the file which contains the Xorg authentication and authorization information. This file is usually located in the user's home directory.

Finally, we used the `RUN` keyword: this is used to run external programs. Very important: the command is not executed immediately; rather, the various actions are executed once all the rules have been parsed. In this case we used the `xinput` utility to change the status of the touchpad. I will not explain the syntax of xinput here, since it would be out of context; just notice that `16` is the id of the touchpad.

Once our rule is set, we can debug it with the `udevadm test` command. This is useful for debugging, but it doesn't actually run commands specified with the `RUN` key:

```
$ udevadm test --action="add" /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.1/0003:046D:C52F.0010/input/input39
```

What we provided to the command is the action to simulate, using the `--action` option, and the sysfs path of the device. If no errors are reported, our rule should be good to go. To run it in the real world, we must reload the rules:

```
# udevadm control --reload
```

This command reloads the rules files; however, it takes effect only on newly generated events.

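If you want to exercise the rule without physically re-plugging the device, you can also ask udev to replay the event. This is a minimal sketch, and the sysfs path is the one from this example, so substitute your own:

```
# Reload the rules, then synthesize an "add" event for the device
# so the new rule is evaluated against it.
# udevadm control --reload
# udevadm trigger --action=add /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2
```
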
We have seen the basic concepts and logic used to create a udev rule; however, we only scratched the surface of the many options and possible settings. The udev manpage provides an exhaustive list: please refer to it for more in-depth knowledge.

--------------------------------------------------------------------------------

via: https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux

作者:[Egidio Docile][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://disqus.com/by/egidiodocile/
[1]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-1-and-operators
[2]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-2-the-assignment-operators-and
[3]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-1-3-the-and-operators
[4]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h1-objective
[5]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h2-requirements
[6]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h3-difficulty
[7]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h4-conventions
[8]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h5-introduction
[9]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h6-how-rules-are-organized
[10]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h7-the-rules-syntax
[11]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h8-a-test-case
[12]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h9-operators
[13]:https://linuxconfig.org/tutorial-on-how-to-write-basic-udev-rules-in-linux#h10-the-keys-we-used

@ -0,0 +1,133 @@
Top 7 open source project management tools for agile teams
======



Opensource.com has surveyed the landscape of popular open source project management tools. We've done this before—but this year we've added a twist. This time, we're looking specifically at tools that support [agile][1] methodology, including related practices such as [Scrum][2], Lean, and Kanban.

The growth of interest in and use of agile is why we've decided to focus on these types of tools this year. A majority of organizations—71%—say they [are using agile approaches][3] at least sometimes. In addition, agile projects are [28% more successful][4] than projects managed with traditional approaches.

For this roundup, we looked at the project management tools we covered in [2014][5], [2015][6], and [2016][7] and plucked the ones that support agile, then did research to uncover any additions or changes. Whether your organization is already using agile or is one of the many planning to adopt agile approaches in 2018, one of these seven open source project management tools may be exactly what you're looking for.

### MyCollab



[MyCollab][8] is a suite of three collaboration modules for small and midsize businesses: project management, customer relationship management (CRM), and document creation and editing software. There are two licensing options: a commercial "ultimate" edition, which is faster and can be run on-premises or in the cloud, and the open source "community edition," which is the version we're interested in here.

The community edition doesn't have a cloud option and is slower, due to not using query cache, but provides essential project management features, including tasks, issues management, activity stream, roadmap view, and a Kanban board for agile teams. While it doesn't have a separate mobile app, it works on mobile devices as well as Windows, MacOS, Linux, and Unix computers.

The latest version of MyCollab is 5.4.10 and the source code is available on [GitHub][9]. It is licensed under AGPLv3 and requires a Java runtime and MySQL stack to operate. It's available for [download][10] for Windows, Linux, Unix, and MacOS.

### Odoo



[Odoo][11] is more than project management software; it's a full, integrated business application suite that includes accounting, human resources, website & e-commerce, inventory, manufacturing, sales management (CRM), and other tools.

The free and open source community edition has limited [features][12] compared to the paid enterprise suite. Its project management application includes a Kanban-style task-tracking view for agile teams, which was updated in its latest release, Odoo 11.0, to include a progress bar and animation for tracking project status. The project management tool also includes Gantt charts, tasks, issues, graphs, and more. Odoo has a thriving [community][13] and provides [user guides][14] and other training resources.

It is licensed under GPLv3 and requires Python and PostgreSQL. It is available for [download][15] for Windows, Linux, and Red Hat Package Manager, as a [Docker][16] image, and as source on [GitHub][17].

### OpenProject



[OpenProject][18] is a powerful open source project management tool that is notable for its ease of use and rich project management and team collaboration features.

Its modules support project planning, scheduling, roadmap and release planning, time tracking, cost reporting, budgeting, bug tracking, and agile and Scrum. Its agile features, including creating stories, prioritizing sprints, and tracking tasks, are integrated with OpenProject's other modules.

OpenProject is licensed under GPLv3 and its source code is available on [GitHub][19]. Its latest version, 7.3.2, is available for [download][20] for Linux; you can learn more about installing and configuring it in Birthe Lindenthal's article "[Getting started with OpenProject][21]."

### OrangeScrum



As you would expect from its name, [OrangeScrum][22] supports agile methodologies, specifically with a Scrum task board and Kanban-style workflow view. It's geared for smaller organizations—freelancers, agencies, and small and midsize businesses.

The open source version offers many of the [features][23] in OrangeScrum's paid editions, including a mobile app, resource utilization, and progress tracking. Other features, including Gantt charts, time logs, invoicing, and client management, are available as paid add-ons, and the paid editions include a cloud option, which the community version does not.

OrangeScrum is licensed under GPLv3 and is based on the CakePHP framework. It requires Apache, PHP 5.3 or higher, and MySQL 4.1 or higher, and works on Windows, Linux, and MacOS. Its latest release, 1.6.1, is available for [download][24], and its source code can be found on [GitHub][25].

### ]project-open[



[]project-open[][26] is a dual-licensed enterprise project management tool, meaning that its core is open source, and some additional features are available in commercially licensed modules. According to the project's [comparison][27] of the community and enterprise editions, the open source core offers plenty of features for small and midsize organizations.

]project-open[ supports [agile][28] projects with Scrum and Kanban support, as well as classic Gantt/waterfall projects and hybrid or mixed projects.

The application is licensed under GPL and the [source code][29] is accessible via CVS. ]project-open[ is available as [installers][26] for both Linux and Windows, but also in cloud images and as a virtual appliance.

### Taiga



[Taiga][30] is an open source project management platform that focuses on Scrum and agile development, with features including a Kanban board, tasks, sprints, issues, a backlog, and epics. Other features include ticket management, multi-project support, wiki pages, and third-party integrations.

It also offers a free mobile app for iOS, Android, and Windows devices, and provides import tools that make it easy to migrate from other popular project management applications.

Taiga is free for public projects, with no restrictions on either the number of projects or the number of users. For private projects, there is a wide range of [paid plans][31] available under a "freemium" model, but, notably, the software's features are the same, no matter which type of plan you have.

Taiga is licensed under GNU Affero GPLv3, and requires a stack that includes Nginx, Python, and PostgreSQL. The latest release, [3.1.0 Perovskia atriplicifolia][32], is available on [GitHub][33].

### Tuleap



[Tuleap][34] is an application lifecycle management (ALM) platform that aims to manage projects for every type of team—small, midsize, large, waterfall, agile, or hybrid—but its support for agile teams is prominent. Notably, it offers support for Scrum, Kanban, sprints, tasks, reports, continuous integration, backlogs, and more.

Other [features][35] include issue tracking, document tracking, collaboration tools, and integration with Git, SVN, and Jenkins, all of which make it an appealing choice for open source software development projects.

Tuleap is licensed under GPLv2. More information, including Docker and CentOS downloads, is available on their [Get Started][36] page. You can also get the source code for its latest version, 9.14, on Tuleap's [Git][37].

The trouble with this type of list is that it's usually out of date as soon as it's published. Are you using an open source project management tool that supports agile that we forgot to include? Or do you have feedback on the ones we mentioned? Please leave a comment below.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/2/agile-project-management-tools

作者:[Opensource.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com
[1]:http://agilemanifesto.org/principles.html
[2]:https://opensource.com/resources/scrum
[3]:https://www.pmi.org/-/media/pmi/documents/public/pdf/learning/thought-leadership/pulse/pulse-of-the-profession-2017.pdf
[4]:https://www.pwc.com/gx/en/actuarial-insurance-services/assets/agile-project-delivery-confidence.pdf
[5]:https://opensource.com/business/14/1/top-project-management-tools-2014
[6]:https://opensource.com/business/15/1/top-project-management-tools-2015
[7]:https://opensource.com/business/16/3/top-project-management-tools-2016
[8]:https://community.mycollab.com/
[9]:https://github.com/MyCollab/mycollab
[10]:https://www.mycollab.com/ce-registration/
[11]:https://www.odoo.com/
[12]:https://www.odoo.com/page/editions
[13]:https://www.odoo.com/page/community
[14]:https://www.odoo.com/documentation/user/11.0/
[15]:https://www.odoo.com/page/download
[16]:https://hub.docker.com/_/odoo/
[17]:https://github.com/odoo/odoo
[18]:https://www.openproject.org/
[19]:https://github.com/opf/openproject
[20]:https://www.openproject.org/download-and-installation/
[21]:https://opensource.com/article/17/11/how-install-and-use-openproject
[22]:https://www.orangescrum.org/
[23]:https://www.orangescrum.org/compare-orangescrum
[24]:http://www.orangescrum.org/free-download
[25]:https://github.com/Orangescrum/orangescrum/
[26]:http://www.project-open.com/en/list-installers
[27]:http://www.project-open.com/en/products/editions.html
[28]:http://www.project-open.com/en/project-type-agile
[29]:http://www.project-open.com/en/developers-cvs-checkout
[30]:https://taiga.io/
[31]:https://tree.taiga.io/support/subscription-and-plans/payment-process-faqs/#q.-what-s-about-custom-plans-private-projects-with-more-than-25-members-?
[32]:https://blog.taiga.io/taiga-perovskia-atriplicifolia-release-310.html
[33]:https://github.com/taigaio
[34]:https://www.tuleap.org/
[35]:https://www.tuleap.org/features/project-management
[36]:https://www.tuleap.org/get-started
[37]:https://tuleap.net/plugins/git/tuleap/tuleap/stable

@ -1,3 +1,4 @@
Translating by stevenzdg988

How To Find The Installed Proprietary Packages In Arch Linux
======



@ -1,201 +0,0 @@
translating by Flowsnow

Ansible: the Automation Framework That Thinks Like a Sysadmin
======

I've written about and trained folks on various DevOps tools through the years, and although they're awesome, it's obvious that most of them are designed from the mind of a developer. There's nothing wrong with that, because approaching configuration management programmatically is the whole point. Still, it wasn't until I started playing with Ansible that I felt like it was something a sysadmin quickly would appreciate.

Part of that appreciation comes from the way Ansible communicates with its client computers—namely, via SSH. As sysadmins, you're all very familiar with connecting to computers via SSH, so right from the word "go", you have a better understanding of Ansible than the other alternatives.

With that in mind, I'm planning to write a few articles exploring how to take advantage of Ansible. It's a great system, but when I was first exposed to it, it wasn't clear how to start. It's not that the learning curve is steep. In fact, if anything, the problem was that I didn't really have that much to learn before starting to use Ansible, and that made it confusing. For example, if you don't have to install an agent program (Ansible doesn't have any software installed on the client computers), how do you start?

### Getting to the Starting Line

The reason Ansible was so difficult for me at first is that it's so flexible with how to configure the server/client relationship that I didn't know what I was supposed to do. The truth is that Ansible doesn't really care how you set up the SSH system; it will utilize whatever configuration you have. There are just a couple things to consider:

1. Ansible needs to connect to the client computer via SSH.

2. Once connected, Ansible needs to elevate privilege so it can configure the system, install packages and so on.

Unfortunately, those two considerations really open a can of worms. Connecting to a remote computer and elevating privilege is a scary thing to allow. For some reason, it feels less vulnerable when you simply install an agent on the remote computer and let Chef or Puppet handle privilege escalation. It's not that Ansible is any less secure, but rather, it puts the security decisions in your hands.

Next I'm going to list a bunch of potential configurations, along with the pros and cons of each. This isn't an exhaustive list, but it should get you thinking along the right lines for what will be ideal in your environment. I also should note that I'm not going to mention systems like Vagrant, because although Vagrant is wonderful for building a quick infrastructure for testing and developing, it's so very different from a bunch of servers that the considerations are too dissimilar really to compare.

### Some SSH Scenarios

1) SSHing into the remote computer as root, with the password in the Ansible config.

I started with a terrible idea. The "pro" of this setup is that it eliminates the need for privilege escalation, and no other user accounts are required on the remote server. But the cost of such convenience isn't worth it. First, most systems won't let you SSH in as root without changing the default configuration. Those default configurations are there because, quite frankly, it's just a bad idea to allow the root user to connect remotely. Second, putting a root password in a plain-text configuration file on the Ansible machine is mortifying. Really, I mentioned this possibility because it is a possibility, but it's one that should be avoided. Remember, Ansible allows you to configure the connection yourself, and it will let you do really dumb things. Please don't.

2) SSHing into a remote computer as a regular user, using a password stored in the Ansible config.

An advantage of this scenario is that it doesn't require much configuration of the clients. Most users are able to SSH in by default, so Ansible should be able to use the credentials and log in fine. I personally dislike the idea of a password being stored in plain text in a configuration file, but at least it isn't the root password. If you use this method, be sure to consider how privilege escalation will take place on the remote server. I know I haven't talked about escalating privilege yet, but if you have a password in the config file, that same password likely will be used to gain sudo access. So with one slip, you've compromised not only the remote user's account, but also potentially the entire system.

3) SSHing into a remote computer as a regular user, authenticating with a key pair that has an empty passphrase.

This eliminates storing passwords in a configuration file, at least for the logging-in part of the process. Key pairs without passphrases aren't ideal, but it's something I often do in an environment like my house. On my internal network, I typically use a key pair without a passphrase to automate many things like cron jobs that require authentication. This isn't the most secure option, because a compromised private key means unrestricted access to the remote user's account, but I like it better than a password in a config file.

4) SSHing into a remote computer as a regular user, authenticating with a key pair that is secured by a passphrase.

This is a very secure way of handling remote access, because it requires two different authentication factors: 1) the private key and 2) the passphrase to decrypt it. If you're just running Ansible interactively, this might be the ideal setup. When you run a command, Ansible should prompt you for the private key's passphrase, and then it'll use the key pair to log in to the remote system. Yes, the same could be done by just using a standard password login and not specifying the password in the configuration file, but if you're going to be typing a password on the command line anyway, why not add the layer of protection a key pair offers?

5) SSHing with a passphrase-protected key pair, but using ssh-agent to "unlock" the private key.

This doesn't perfectly answer the question of unattended, automated Ansible commands, but it does make a fairly secure setup convenient as well. The ssh-agent program authenticates the passphrase one time and then uses that authentication to make future connections. When I'm using Ansible, this is what I think I'd like to be doing. If I'm completely honest, I still usually use key pairs without passphrases, but that's typically because I'm working on my home servers, not something prone to attack.

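For scenario 5, the day-to-day workflow looks something like this minimal sketch (the key path `~/.ssh/id_rsa` is an assumption; use whichever key you actually generated):

```
$ eval "$(ssh-agent)"      # start the agent for this shell session
$ ssh-add ~/.ssh/id_rsa    # prompts for the passphrase once
$ ansible all -m ping      # later connections reuse the unlocked key
```
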
There are some other considerations to keep in mind when configuring your SSH environment. Perhaps you're able to restrict the Ansible user (which is often your local user name) so it can log in only from a specific IP address. Perhaps your Ansible server can live in a different subnet, behind a strong firewall, so its private keys are more difficult to access remotely. Maybe the Ansible server doesn't have an SSH server installed on itself so there's no incoming access at all. Again, one of the strengths of Ansible is that it uses the SSH protocol for communication, and it's a protocol you've all had years to tweak into a system that works best in your environment. I'm not a big fan of proclaiming what the "best practice" is, because in reality, the best practice is to consider your environment and choose the setup that fits your situation the best.

### Privilege Escalation

Once your Ansible server connects to its clients via SSH, it needs to be able to escalate privilege. If you chose option 1 above, you're already root, and this is a moot point. But since no one chose option 1 (right?), you need to consider how a regular user on the client computer gains access. Ansible supports a wide variety of escalation systems, but in Linux, the most common options are sudo and su. As with SSH, there are a few situations to consider, although there are certainly other options.

1) Escalate privilege with su.

For Red Hat/CentOS users, the instinct might be to use su in order to gain system access. By default, those systems configure the root password during install, and to gain privileged access, you need to type it in. The problem with using su is that although it gives you total access to the remote system, it also gives you total access to the remote system. (Yes, that was sarcasm.) Also, the su program doesn't have the ability to authenticate with key pairs, so the password either must be typed interactively or stored in the configuration file. And since it's literally the root password, storing it in the config file should sound like a horrible idea, because it is.

2) Escalate privilege with sudo.

This is how Debian/Ubuntu systems are configured. A user in the correct group has access to sudo a command and execute it with root privileges. Out of the box, this still has the problem of password storage or interactive typing. Since storing the user's password in the configuration file seems a little less horrible, I guess this is a step up from using su, but it still gives complete access to a system if the password is compromised. (After all, typing sudo su - will allow users to become root just as if they had the root password.)

3) Escalate privilege with sudo and configure NOPASSWD in the sudoers file.

Again, in my local environment, this is what I do. It's not perfect, because it gives unrestricted root access to the user account and doesn't require any passwords. But when I do this, and use SSH key pairs without passphrases, it allows me to automate Ansible commands easily. I'll note again that although it is convenient, it is not a terribly secure idea.

4) Escalate privilege with sudo and configure NOPASSWD on specific executables.

This idea might be the best compromise of security and convenience. Basically, if you know what you plan to do with Ansible, you can give NOPASSWD privilege to the remote user for just those applications it will need to use. It might get a little confusing, since Ansible uses Python for lots of things, but with enough trial and error, you should be able to figure things out. It is more work, but does eliminate some of the glaring security holes.

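As a rough sketch of option 4, a drop-in sudoers file might look like the following. The "deploy" user and the command list are hypothetical; tailor them to what your playbooks actually run, and always edit with visudo:

```
# /etc/sudoers.d/ansible (edit with: visudo -f /etc/sudoers.d/ansible)
# Allow the hypothetical "deploy" user to run only these commands as
# root without a password.
deploy ALL=(ALL) NOPASSWD: /usr/bin/yum, /usr/bin/systemctl
```
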
### Implementing Your Plan

Once you decide how you're going to handle Ansible authentication and privilege escalation, you need to set it up. After you become well versed at Ansible, you might be able to use the tool itself to help "bootstrap" new clients, but at first, it's important to configure clients manually so you know what's happening. It's far better to automate a process you're familiar with than to start with automation from the beginning.

I've written about SSH key pairs in the past, and there are countless articles online about setting them up. The short version, from your Ansible computer, looks something like this:

```
# ssh-keygen
# ssh-copy-id -i .ssh/id_rsa.pub remoteuser@remote.computer.ip
# ssh remoteuser@remote.computer.ip
```

If you've chosen to use no passphrase when creating your key pairs, that last step should get you into the remote computer without typing a password or passphrase.

In order to set up privilege escalation in sudo, you'll need to edit the sudoers file. You shouldn't edit the file directly, but rather use:

```
# sudo visudo
```

This will open the sudoers file and allow you to make changes safely (it error-checks when you save, so you don't accidentally lock yourself out with a typo). There are examples in the file, so you should be able to figure out how to assign the exact privileges you want.

Once it's all configured, you should test it manually before bringing Ansible into the picture. Try SSHing to the remote client, and then try escalating privilege using whatever methods you've chosen. Once you have configured the way you'll connect, it's time to install Ansible.

### Installing Ansible

Since the Ansible program gets installed only on the single computer, it's not a big chore to get going. Red Hat/Ubuntu systems do package installs a bit differently, but neither is difficult.

In Red Hat/CentOS, first enable the EPEL repository:

```
sudo yum install epel-release
```

Then install Ansible:

```
sudo yum install ansible
```

In Ubuntu, first enable the Ansible PPA:

```
sudo apt-add-repository ppa:ansible/ansible
(press ENTER to accept the key and add the repo)
```

Then install Ansible:

```
sudo apt-get update
sudo apt-get install ansible
```

### Configuring Ansible Hosts File

The Ansible system has no way of knowing which clients you want it to control unless you give it a list of computers. That list is very simple, and it looks something like this:

```
# file /etc/ansible/hosts

[webservers]
blogserver ansible_host=192.168.1.5
wikiserver ansible_host=192.168.1.10

[dbservers]
mysql_1 ansible_host=192.168.1.22
pgsql_1 ansible_host=192.168.1.23
```

The bracketed sections specify groups. Individual hosts can be listed in multiple groups, and Ansible can refer either to individual hosts or to groups. This is also the configuration file where things like plain-text passwords would be stored, if that's the sort of setup you've planned. Each line in the configuration file configures a single host, and you can add multiple declarations after the ansible_host statement. Some useful options are:

```
ansible_ssh_pass
ansible_become
ansible_become_method
ansible_become_user
ansible_become_pass
```

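For instance, a single host entry combining a few of these declarations might look like the following sketch; the values are illustrative, not recommendations:

```
# wikiserver logs in as usual, then escalates to root via sudo
wikiserver ansible_host=192.168.1.10 ansible_become=true ansible_become_method=sudo ansible_become_user=root
```
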
### The Ansible Vault

I also should note that although the setup is more complex, and not something you'll likely do during your first foray into the world of Ansible, the program does offer a way to encrypt passwords in a vault. Once you're familiar with Ansible and you want to put it into production, storing those passwords in an encrypted Ansible vault is ideal. But in the spirit of learning to crawl before you walk, I recommend starting in a non-production environment and using passwordless methods at first.

### Testing Your System

Finally, you should test your system to make sure your clients are connecting. The ping test will make sure the Ansible computer can ping each host:

```
ansible -m ping all
```

After running, you should see a message for each defined host showing a ping: pong if the ping was successful. This doesn't actually test authentication, just the network connectivity. Try this to test your authentication:

```
ansible -m shell -a 'uptime' webservers
```

You should see the results of the uptime command for each host in the webservers group.

In a future article, I plan to start digging into Ansible's ability to manage the remote computers. I'll look at various modules and how you can use the ad-hoc mode to accomplish in a few keystrokes what would take a long time to handle individually on the command line. If you didn't get the results you expected from the sample Ansible commands above, take this time to make sure authentication is working. Check out [the Ansible docs][1] for more help if you get stuck.

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin

作者:[Shawn Powers][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/shawn-powers
[1]:http://docs.ansible.com

43 sources/tech/20180111 What is the deal with GraphQL.md Normal file
@ -0,0 +1,43 @@
translating---geekpi

What is the deal with GraphQL?
======



There has been a lot of talk lately about this thing called [GraphQL][1]. It is a relatively new technology coming out of Facebook and is starting to be widely adopted by large companies like [GitHub][2], Facebook, Twitter, Yelp, and many others. Basically, GraphQL is an alternative to REST: it replaces many dumb endpoints, like `/user/1` and `/user/1/comments`, with a single `/graphql` endpoint, and you use the post body or query string to request the data you need, like `/graphql?query={user(id:1){id,username,comments{text}}}`. You pick the pieces of data you need and can nest down into relations to avoid multiple calls. This is a different way of thinking about a backend, but in some situations, it makes practical sense.

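As a concrete illustration, that same nested request can be sent as a POST with a JSON body. This is a minimal sketch against a made-up endpoint; the host and schema are hypothetical:

```
# Fetch a user plus their comment texts in one round trip.
curl -X POST https://api.example.com/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ user(id: 1) { id username comments { text } } }"}'
```
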
### My Experience with GraphQL

Originally, when I heard about it, I was very skeptical, and after dabbling in [Apollo Server][3] I was not convinced. Why would you use some silly new technology when you can simply build REST endpoints! But after digging deeper and learning more about its use cases, I came around. I still think REST has a place and will be important for the foreseeable future, but with how bad many APIs and their documentation are, this can be a breath of fresh air...

### Why Use GraphQL Over REST?

Although I have used GraphQL, and think it is a compelling and exciting technology, I believe it does not replace REST. That being said, there are compelling reasons to pick GraphQL over REST in some situations. GraphQL really shines when you are building mobile apps or web apps designed with high mobile traffic in mind. The reason for this is mobile data. REST uses many calls and often returns unused data, whereas with GraphQL you can define precisely what you want returned, for minimal data usage.

You can do all of the above with REST by making multiple endpoints available, but that also adds complexity to the project. It also means there will be back and forth between the frontend and backend teams.

### What Should You Use?

GraphQL is a new technology which is now mainstream. But many developers are not aware of it or choose not to learn it because they think it's a fad. I feel like for most projects you can get away with using either REST or GraphQL. Developing with GraphQL has great benefits, like enforced documentation, which helps teams work better together and provides clear expectations for each query. This will likely speed up development after the initial hurdle of wrapping your head around GraphQL.

Although I have been comparing GraphQL and REST, I think in most cases a mixture of the two will produce the best results. Combine the strengths of both instead of seeing it strictly as using just GraphQL or just REST.

### Final Thoughts

Both technologies are here to stay. And, done right, both technologies can make fast and efficient backends. GraphQL has an edge because it allows the client to query only the data they need by default, but at a potential sacrifice of endpoint speed. Ultimately, if I were starting a new project, I would go with a mix of both GraphQL and REST.

--------------------------------------------------------------------------------

via: https://ryanmccue.ca/what-is-the-deal-with-graphql/

作者:[Ryan McCue][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://ryanmccue.ca/author/ryan/
[1]:http://graphql.org/
[2]:https://developer.github.com/v4/
[3]:https://github.com/apollographql/apollo-server

@ -0,0 +1,181 @@
in which the cost of structured data is reduced
======

Last year I got the wonderful opportunity to attend [RacketCon][1] as it was hosted only 30 minutes away from my home. The two-day conference had a number of great talks on the first day, but what really impressed me was the fact that the entire second day was spent focusing on contribution. The day started out with a few 15- to 20-minute talks about how to contribute to a specific codebase (including that of Racket itself), and after that people just split off into groups focused around specific codebases. Each table had maintainers helping guide other folks towards how to work with the codebase and construct effective patch submissions.

![lensmen chronicles][2]

I came away from the conference with a great sense of appreciation for how friendly and welcoming the Racket community is, and how great Racket is as a swiss-army-knife type tool for quick tasks. (Not that it's unsuitable for large projects, but I don't have the opportunity to start any new large projects very frequently.)

The other day I wanted to generate colored maps of the world by categorizing countries interactively, and Racket seemed like it would fit the bill nicely. The job is simple: show an image of the world with one country selected; when a key is pressed, categorize that country, then show the map again with all categorized countries colored, and continue with the next country selected.

### GUIs and XML

I have yet to see a language/framework more accessible and straightforward out of the box for drawing¹. Here's the entry point which sets up state and then constructs a canvas that handles key input and display:

```
(define (main path)
  (let ([frame (new frame% [label "World color"])]
        [categorizations (box '())]
        [doc (call-with-input-file path read-xml/document)])
    (new (class canvas%
           (define/override (on-char event)
             (handle-key this categorizations (send event get-key-code)))
           (super-new))
         [parent frame]
         [paint-callback (draw doc categorizations)])
    (send frame show #t)))
```

While the class system is not one of my favorite things about Racket (most newer code seems to avoid it in favor of [generic interfaces][3] in the rare case that polymorphism is truly called for), the fact that classes can be constructed in a light-weight, anonymous way makes it much less onerous than it could be. This code sets up all mutable state in a [`box`][4] which you use in the way you'd use a `ref` in ML or Clojure: a mutable wrapper around an immutable data structure.

The world map I'm using is [an SVG of the Robinson projection][5] from Wikipedia. If you look closely there's a call to bind `doc` that calls [`call-with-input-file`][6] with [`read-xml/document`][7] which loads up the whole map file's SVG; just about as easily as you could ask for.

The data you get back from `read-xml/document` is in fact a [document][8] struct, which contains an `element` struct containing `attribute` structs and lists of more `element` structs. All very sensible, but maybe not what you would expect in other dynamic languages like Clojure or Lua where free-form maps reign supreme. Racket really wants structure to be known up-front when possible, which is one of the things that help it produce helpful error messages when things go wrong.

Here's how we handle keyboard input; we're displaying a map with one country highlighted, and `key` here tells us what the user pressed to categorize the highlighted country. If that key is in the `categories` hash then we put it into `categorizations`.

```
(define categories #hash((select . "eeeeff")
                         (#\1 . "993322")
                         (#\2 . "229911")
                         (#\3 . "ABCD31")
                         (#\4 . "91FF55")
                         (#\5 . "2439DF")))

(define (handle-key canvas categorizations key)
  (cond [(equal? #\backspace key) (swap! categorizations cdr)]
        [(member key (dict-keys categories)) (swap! categorizations (curry cons key))]
        [(equal? #\space key) (display (unbox categorizations))])
  (send canvas refresh))
```

### Nested updates: the bad parts

Finally, once we have a list of categorizations, we need to apply it to the map document and display it. We apply a [`fold`][9] reduction over the XML document struct and the list of country categorizations (plus `'select` for the country that's selected to be categorized next) to get back a "modified" document struct where the proper elements have the style attributes applied for the given categorization, then we turn it into an image and hand it to [`draw-pict`][10]:

```
(define (update original-doc categorizations)
  (for/fold ([doc original-doc])
            ([category (cons 'select (unbox categorizations))]
             [n (in-range (length (unbox categorizations)) 0 -1)])
    (set-style doc n (style-for category))))

(define ((draw doc categorizations) _ context)
  (let* ([newdoc (update doc categorizations)]
         [xml (call-with-output-string (curry write-xml newdoc))])
    (draw-pict (call-with-input-string xml svg-port->pict) context 0 0)))
```

The problem is in that pesky `set-style` function. All it has to do is reach deep down into the `document` struct to find the `n`th `path` element (the one associated with a given country), and change its `'style` attribute. It ought to be a simple task. Unfortunately this function ends up being anything but simple:

```
(define (set-style doc n new-style)
  (let* ([root (document-element doc)]
         [g (list-ref (element-content root) 8)]
         [paths (element-content g)]
         [path (first (drop (filter element? paths) n))]
         [path-num (list-index (curry eq? path) paths)]
         [style-index (list-index (lambda (x) (eq? 'style (attribute-name x)))
                                  (element-attributes path))]
         [attr (list-ref (element-attributes path) style-index)]
         [new-attr (make-attribute (source-start attr)
                                   (source-stop attr)
                                   (attribute-name attr)
                                   new-style)]
         [new-path (make-element (source-start path)
                                 (source-stop path)
                                 (element-name path)
                                 (list-set (element-attributes path)
                                           style-index new-attr)
                                 (element-content path))]
         [new-g (make-element (source-start g)
                              (source-stop g)
                              (element-name g)
                              (element-attributes g)
                              (list-set paths path-num new-path))]
         [root-contents (list-set (element-content root) 8 new-g)])
    (make-document (document-prolog doc)
                   (make-element (source-start root)
                                 (source-stop root)
                                 (element-name root)
                                 (element-attributes root)
                                 root-contents)
                   (document-misc doc))))
```

The reason for this is that while structs are immutable, they don't support functional updates. Whenever you're working with immutable data structures, you want to be able to say "give me a new version of this data, but with field `x` replaced by the value of `(f (lookup x))`". Racket can [do this with dictionaries][11] but not with structs². If you want a modified version you have to create a fresh one³.

### Lenses to the rescue?

![first lensman][12]

When I brought this up in the `#racket` channel on Freenode, I was helpfully pointed to the 3rd-party [Lens][13] library. Lenses are a general-purpose way of composing arbitrarily nested lookups and updates. Unfortunately at this time there's [a flaw][14] preventing them from working with `xml` structs, so it seemed I was out of luck.

But then I was pointed to [X-expressions][15] as an alternative to structs. The [`xml->xexpr`][16] function turns the structs into a deeply-nested list tree with symbols and strings in it. The tag is the first item in the list, followed by an associative list of attributes, then the element's children. While this gives you fewer up-front guarantees about the structure of the data, it does work around the lens issue.

For this to work, we need to compose a new lens based on the "path" we want to use to drill down into the `n`th country and its `style` attribute. The [`lens-compose`][17] function lets us do that. Note that the order here might be backwards from what you'd expect; it works deepest-first (the way [`compose`][18] works for functions). Also note that defining one lens gives us the ability to both get nested values (with [`lens-view`][19]) and update them.

```
(define (style-lens n)
  (lens-compose (dict-ref-lens 'style)
                second-lens
                (list-ref-lens (add1 (* n 2)))
                (list-ref-lens 10)))
```

Our `<path>` XML elements are under the 10th item of the root xexpr, (hence the [`list-ref-lens`][20] with 10) and they are interspersed with whitespace, so we have to double `n` to find the `<path>` we want. The [`second-lens`][21] call gets us to that element's attribute alist, and [`dict-ref-lens`][22] lets us zoom in on the `'style` key out of that alist.
|
||||
|
||||
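As a throwaway illustration (mine, not from the original post), the same composition works on a hand-built xexpr. Note that an xexpr attribute alist stores each value as a one-element list, which is exactly why the `update` function below wraps its new style in `(list ...)`:

```
(require lens)

;; A stand-in for the SVG document: two <path> elements interleaved
;; with whitespace, so path n sits at index (+ 3 (* n 2)).
(define toy
  '(svg () "\n"
        (path ((d "M0,0") (style "fill:#eee")))
        "\n"
        (path ((d "M1,1") (style "fill:#eee")))))

(define (toy-style-lens n)
  (lens-compose (dict-ref-lens 'style)
                second-lens
                (list-ref-lens (+ 3 (* n 2)))))

(lens-view (toy-style-lens 0) toy)               ; => '("fill:#eee")
(lens-set (toy-style-lens 1) toy '("fill:#f00")) ; restyles the second path
```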
Once we have our lens, it's just a matter of replacing `set-style` with a call to [`lens-set`][23] in the `update` function we had above, and then we're off:
|
||||
```
|
||||
(define (update doc categorizations)
|
||||
(for/fold ([d doc])
|
||||
([category (cons 'select (unbox categorizations))]
|
||||
[n (in-range (length (unbox categorizations)) 0 -1)])
|
||||
(lens-set (style-lens n) d (list (style-for category)))))
|
||||
```
|
||||
|
||||
![second stage lensman][24]
|
||||
|
||||
Oftentimes the trade-off between freeform maps/hashes and structured data feels like one of convenience vs. long-term maintainability. While it's unfortunate that they can't be used with the `xml` structs, lenses provide a way to get the best of both worlds, at least in some situations.
|
||||
|
||||
The final version of the code clocks in at 51 lines and is available [on GitLab][25].
|
||||
|
||||
๛
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://technomancy.us/185
|
||||
|
||||
作者:[Phil Hagelberg][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://technomancy.us/
|
||||
[1]:https://con.racket-lang.org/
|
||||
[2]:https://technomancy.us/i/chronicles-of-lensmen.jpg
|
||||
[3]:https://docs.racket-lang.org/reference/struct-generics.html
|
||||
[4]:https://docs.racket-lang.org/reference/boxes.html?q=box#%28def._%28%28quote._~23~25kernel%29._box%29%29
|
||||
[5]:https://commons.wikimedia.org/wiki/File:BlankMap-World_gray.svg
|
||||
[6]:https://docs.racket-lang.org/reference/port-lib.html#(def._((lib._racket%2Fport..rkt)._call-with-input-string))
|
||||
[7]:https://docs.racket-lang.org/xml/index.html?q=read-xml#%28def._%28%28lib._xml%2Fmain..rkt%29._read-xml%2Fdocument%29%29
|
||||
[8]:https://docs.racket-lang.org/xml/#%28def._%28%28lib._xml%2Fmain..rkt%29._document%29%29
|
||||
[9]:https://docs.racket-lang.org/reference/for.html?q=for%2Ffold#%28form._%28%28lib._racket%2Fprivate%2Fbase..rkt%29._for%2Ffold%29%29
|
||||
[10]:https://docs.racket-lang.org/pict/Rendering.html?q=draw-pict#%28def._%28%28lib._pict%2Fmain..rkt%29._draw-pict%29%29
|
||||
[11]:https://docs.racket-lang.org/reference/dicts.html?q=dict-update#%28def._%28%28lib._racket%2Fdict..rkt%29._dict-update%29%29
|
||||
[12]:https://technomancy.us/i/first-lensman.jpg
|
||||
[13]:https://docs.racket-lang.org/lens/lens-guide.html
|
||||
[14]:https://github.com/jackfirth/lens/issues/290
|
||||
[15]:https://docs.racket-lang.org/pollen/second-tutorial.html?q=xexpr#%28part._.X-expressions%29
|
||||
[16]:https://docs.racket-lang.org/xml/index.html?q=xexpr#%28def._%28%28lib._xml%2Fmain..rkt%29._xml-~3exexpr%29%29
|
||||
[17]:https://docs.racket-lang.org/lens/lens-reference.html#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-compose%29%29
|
||||
[18]:https://docs.racket-lang.org/reference/procedures.html#%28def._%28%28lib._racket%2Fprivate%2Flist..rkt%29._compose%29%29
|
||||
[19]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-view%29%29
|
||||
[20]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._list-ref-lens%29%29
|
||||
[21]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Flist..rkt%29._second-lens%29%29
|
||||
[22]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fdata%2Fdict..rkt%29._dict-ref-lens%29%29
|
||||
[23]:https://docs.racket-lang.org/lens/lens-reference.html?q=lens-view#%28def._%28%28lib._lens%2Fcommon..rkt%29._lens-set%29%29
|
||||
[24]:https://technomancy.us/i/second-stage-lensman.jpg
|
||||
[25]:https://gitlab.com/technomancy/world-color/blob/master/world-color.rkt
|
@ -1,84 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Configuring MSMTP On Ubuntu 16.04 (Again)
|
||||
======
|
||||
This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I'm posting it as-is for posterity, and have no idea if it'll work on later versions. As I'm not hosting my own Ubuntu/MSMTP server anymore I can't see any updates being made to this, but if I ever do have to set this up again I'll create an updated post! Anyway, here's what I had…
|
||||
|
||||
I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you're using Apache as the web server, but I'm sure it shouldn't be too different if your web server of choice is something else.
|
||||
|
||||
I use [msmtp][1] for sending emails from this blog to notify me of comments and upgrades etc. Here I'm going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.
|
||||
|
||||
To begin, we need to install 3 packages:
|
||||
`sudo apt-get install msmtp msmtp-mta ca-certificates`
|
||||
Once these are installed, a default config is required. By default msmtp will look at `/etc/msmtprc`, so I created that using vim, though any text editor will do the trick. This file looked something like this:
|
||||
```
|
||||
# Set defaults.
|
||||
defaults
|
||||
# Enable or disable TLS/SSL encryption.
|
||||
tls on
|
||||
tls_starttls on
|
||||
tls_trust_file /etc/ssl/certs/ca-certificates.crt
|
||||
# Setup WP account's settings.
|
||||
account
|
||||
host smtp.gmail.com
|
||||
port 587
|
||||
auth login
|
||||
user
|
||||
password
|
||||
from
|
||||
logfile /var/log/msmtp/msmtp.log
|
||||
|
||||
account default :
|
||||
|
||||
```
|
||||
|
||||
Any of the uppercase items (i.e. ``) are things that need replacing with values specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.
|
||||
|
||||
Once that file is saved, we'll update the permissions on the above configuration file -- msmtp won't run if the permissions on that file are too open -- and create the directory for the log file.
|
||||
```
|
||||
sudo mkdir /var/log/msmtp
|
||||
sudo chown -R www-data:adm /var/log/msmtp
|
||||
sudo chmod 0600 /etc/msmtprc
|
||||
|
||||
```
|
||||
|
||||
Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don't get too large as well as keeping the log directory a little tidier. To do this, we create `/etc/logrotate.d/msmtp` and configure it with the following file. Note that this is optional; you may choose not to do this, or you may choose to configure the logs differently.
|
||||
```
|
||||
/var/log/msmtp/*.log {
|
||||
rotate 12
|
||||
monthly
|
||||
compress
|
||||
missingok
|
||||
notifempty
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Now that the logging is configured, we need to tell PHP to use msmtp by editing `/etc/php/7.0/apache2/php.ini` and updating the sendmail path from
|
||||
`sendmail_path =`
|
||||
to
|
||||
`sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a -t"`
|
||||
Here I did run into an issue where, even though I specified the account name, it wasn't sending emails correctly when I tested it. This is why the line `account default : ` was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved, run `sudo service apache2 restart`, then run `php -a` and execute the following:
|
||||
```
|
||||
mail ('personal@email.com', 'Test Subject', 'Test body text');
|
||||
exit();
|
||||
|
||||
```
|
||||
|
||||
Any errors that occur at this point will be displayed in the output, which should make diagnosing any problems after the test relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
|
||||
|
||||
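You can also take PHP out of the loop entirely and exercise the same msmtp configuration straight from a shell; something like the following should work (swap in your own recipient address):

```
printf "To: personal@email.com\nSubject: msmtp test\n\nTest body text\n" \
  | msmtp -a default personal@email.com
```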
I make no claims that this is the most secure configuration, so if you come across this and realise it's grossly insecure or something is drastically wrong please let me know and I'll update it accordingly.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://codingproductivity.wordpress.com/2018/01/18/configuring-msmtp-on-ubuntu-16-04-again/
|
||||
|
||||
作者:[JOE][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://codingproductivity.wordpress.com/author/joeb454/
|
||||
[1]:http://msmtp.sourceforge.net/
|
@ -1,213 +0,0 @@
|
||||
Getting Started with ncurses
|
||||
======
|
||||
How to use curses to draw to the terminal screen.
|
||||
|
||||
While graphical user interfaces are very cool, not every program needs to run with a point-and-click interface. For example, the venerable vi editor ran in plain-text terminals long before the first GUI.
|
||||
|
||||
The vi editor is one example of a screen-oriented program that draws in "text" mode, using a library called curses, which provides a set of programming interfaces to manipulate the terminal screen. The curses library originated in BSD UNIX, but Linux systems provide this functionality through the ncurses library.
|
||||
|
||||
[For a "blast from the past" on ncurses, see ["ncurses: Portable Screen-Handling for Linux"][1], September 1, 1995, by Eric S. Raymond.]
|
||||
|
||||
Creating programs that use curses is actually quite simple. In this article, I show an example program that leverages curses to draw to the terminal screen.
|
||||
|
||||
### Sierpinski's Triangle
|
||||
|
||||
One simple way to demonstrate a few curses functions is by generating Sierpinski's Triangle. If you aren't familiar with this method to generate Sierpinski's Triangle, here are the rules:
|
||||
|
||||
1. Set three points that define a triangle.
|
||||
|
||||
2. Randomly select a point anywhere (x,y).
|
||||
|
||||
Then:
|
||||
|
||||
1. Randomly select one of the triangle's points.
|
||||
|
||||
2. Set the new x,y to be the midpoint between the previous x,y and the triangle point.
|
||||
|
||||
3. Repeat.
|
||||
|
||||
So with those instructions, I wrote this program to draw Sierpinski's Triangle to the terminal screen using the curses functions:
|
||||
|
||||
```
|
||||
|
||||
1 /* triangle.c */
|
||||
2
|
||||
3 #include <curses.h>
|
||||
4 #include <stdlib.h>
|
||||
5
|
||||
6 #include "getrandom_int.h"
|
||||
7
|
||||
8 #define ITERMAX 10000
|
||||
9
|
||||
10 int main(void)
|
||||
11 {
|
||||
12 long iter;
|
||||
13 int yi, xi;
|
||||
14 int y[3], x[3];
|
||||
15 int index;
|
||||
16 int maxlines, maxcols;
|
||||
17
|
||||
18 /* initialize curses */
|
||||
19
|
||||
20 initscr();
|
||||
21 cbreak();
|
||||
22 noecho();
|
||||
23
|
||||
24 clear();
|
||||
25
|
||||
26 /* initialize triangle */
|
||||
27
|
||||
28 maxlines = LINES - 1;
|
||||
29 maxcols = COLS - 1;
|
||||
30
|
||||
31 y[0] = 0;
|
||||
32 x[0] = 0;
|
||||
33
|
||||
34 y[1] = maxlines;
|
||||
35 x[1] = maxcols / 2;
|
||||
36
|
||||
37 y[2] = 0;
|
||||
38 x[2] = maxcols;
|
||||
39
|
||||
40 mvaddch(y[0], x[0], '0');
|
||||
41 mvaddch(y[1], x[1], '1');
|
||||
42 mvaddch(y[2], x[2], '2');
|
||||
43
|
||||
44 /* initialize yi,xi with random values */
|
||||
45
|
||||
46 yi = getrandom_int() % maxlines;
|
||||
47 xi = getrandom_int() % maxcols;
|
||||
48
|
||||
49 mvaddch(yi, xi, '.');
|
||||
50
|
||||
51 /* iterate the triangle */
|
||||
52
|
||||
53 for (iter = 0; iter < ITERMAX; iter++) {
|
||||
54 index = getrandom_int() % 3;
|
||||
55
|
||||
56 yi = (yi + y[index]) / 2;
|
||||
57 xi = (xi + x[index]) / 2;
|
||||
58
|
||||
59 mvaddch(yi, xi, '*');
|
||||
60 refresh();
|
||||
61 }
|
||||
62
|
||||
63 /* done */
|
||||
64
|
||||
65 mvaddstr(maxlines, 0, "Press any key to quit");
|
||||
66
|
||||
67 refresh();
|
||||
68
|
||||
69 getch();
|
||||
70 endwin();
|
||||
71
|
||||
72 exit(0);
|
||||
73 }
|
||||
|
||||
```
|
||||
|
||||
Let me walk through that program by way of explanation. First, getrandom_int() is my own wrapper around the Linux getrandom() system call, guaranteed to return a positive integer value. Otherwise, you should be able to identify the code lines that initialize and then iterate Sierpinski's Triangle, based on the above rules. Aside from that, let's look at the curses functions I used to draw the triangle on a terminal.
|
||||
|
||||
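The wrapper itself isn't shown in the article; purely as an illustration of what such a wrapper might look like (my sketch, not the author's code), here is one built on glibc's getrandom(2):

```
/* getrandom_int.c -- hypothetical sketch of the wrapper */
#include <sys/random.h>
#include <limits.h>

int getrandom_int(void)
{
    int r = 0;

    /* fill r with random bytes from the kernel */
    if (getrandom(&r, sizeof(r), 0) != sizeof(r))
        return 0;

    return r & INT_MAX; /* mask the sign bit so the result is never negative */
}
```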
Most curses programs will start with these four instructions. The initscr() function determines the terminal type, including its size and features, and sets up the curses environment based on what the terminal can support. The cbreak() function disables line buffering and sets curses to take one character at a time. The noecho() function tells curses not to echo the input back to the screen, and the clear() function clears the screen:
|
||||
|
||||
```
|
||||
|
||||
20 initscr();
|
||||
21 cbreak();
|
||||
22 noecho();
|
||||
23
|
||||
24 clear();
|
||||
|
||||
```
|
||||
|
||||
The program then sets a few variables to define the three points that define a triangle. Note the use of LINES and COLS here, which were set by initscr(). These values tell the program how many lines and columns exist on the terminal. Screen coordinates start at zero, so the top-left of the screen is row 0, column 0. The bottom-right of the screen is row LINES - 1, column COLS - 1. To make this easy to remember, my program sets these values in the variables maxlines and maxcols, respectively.
|
||||
|
||||
Two simple methods to draw text on the screen are the addch() and addstr() functions. To put text at a specific screen location, use the related mvaddch() and mvaddstr() functions. My program uses these functions in several places. First, the program draws the three points that define the triangle, labeled "0", "1" and "2":
|
||||
|
||||
```
|
||||
|
||||
40 mvaddch(y[0], x[0], '0');
|
||||
41 mvaddch(y[1], x[1], '1');
|
||||
42 mvaddch(y[2], x[2], '2');
|
||||
|
||||
```
|
||||
|
||||
To draw the random starting point, the program makes a similar call:
|
||||
|
||||
```
|
||||
|
||||
49 mvaddch(yi, xi, '.');
|
||||
|
||||
```
|
||||
|
||||
And to draw each successive point in Sierpinski's Triangle iteration:
|
||||
|
||||
```
|
||||
|
||||
59 mvaddch(yi, xi, '*');
|
||||
|
||||
```
|
||||
|
||||
When the program is done, it displays a helpful message at the lower-left corner of the screen (at row maxlines, column 0):
|
||||
|
||||
```
|
||||
|
||||
65 mvaddstr(maxlines, 0, "Press any key to quit");
|
||||
|
||||
```
|
||||
|
||||
It's important to note that curses maintains a version of the screen in memory and updates the screen only when you ask it to. This provides greater performance, especially if you want to display a lot of text to the screen. This is because curses can update only those parts of the screen that changed since the last update. To cause curses to update the terminal screen, use the refresh() function.
|
||||
|
||||
In my example program, I've chosen to update the screen after "drawing" each successive point in Sierpinski's Triangle. By doing so, users should be able to observe each iteration in the triangle.
|
||||
|
||||
Before exiting, I use the getch() function to wait for the user to press a key. Then I call endwin() to exit the curses environment and return the terminal screen to normal control:
|
||||
|
||||
```
|
||||
|
||||
69 getch();
|
||||
70 endwin();
|
||||
|
||||
```
|
||||
|
||||
### Compiling and Sample Output
|
||||
|
||||
Now that you have your first sample curses program, it's time to compile and run it. Remember that Linux systems implement the curses functionality via the ncurses library, so you need to link with -lncurses when you compile—for example:
|
||||
|
||||
```
|
||||
|
||||
$ ls
|
||||
getrandom_int.c getrandom_int.h triangle.c
|
||||
|
||||
$ gcc -Wall -lncurses -o triangle triangle.c getrandom_int.c
|
||||
|
||||
```
|
||||
|
||||
Running the triangle program on a standard 80x24 terminal is not very interesting. You just can't see much detail in Sierpinski's Triangle at that resolution. If you run a terminal window and set a very small font size, you can see the fractal nature of Sierpinski's Triangle more easily. On my system, the output looks like Figure 1.
|
||||
|
||||

|
||||
|
||||
Figure 1. Output of the triangle Program
|
||||
|
||||
Despite the random nature of the iteration, every run of Sierpinski's Triangle will look pretty much the same. The only difference will be where the first few points are drawn to the screen. In this example, you can see the single dot that starts the triangle, near point 1. It looks like the program picked point 2 next, and you can see the asterisk halfway between the dot and the "2". And it looks like the program randomly picked point 2 again for the next random number, because you can see the asterisk halfway between the first asterisk and the "2". From there, it's impossible to tell how the triangle was drawn, because all of the successive dots fall within the triangle area.
|
||||
|
||||
### Starting to Learn ncurses
|
||||
|
||||
This program is a simple example of how to use the curses functions to draw characters to the screen. You can do so much more with curses, depending on what you need your program to do. In a follow up article, I will show how to use curses to allow the user to interact with the screen. If you are interested in getting a head start with curses, I encourage you to read Pradeep Padala's ["NCURSES Programming HOWTO"][2], at the Linux Documentation Project.
|
||||
|
||||
### About the author
|
||||
|
||||
Jim Hall is an advocate for free and open-source software, best known for his work on the FreeDOS Project, and he also focuses on the usability of open-source software. Jim is the Chief Information Officer at Ramsey County, Minn.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxjournal.com/content/getting-started-ncurses
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxjournal.com/users/jim-hall
|
||||
[1]:http://www.linuxjournal.com/article/1124
|
||||
[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO
|
@ -0,0 +1,102 @@
|
||||
How to Install Tripwire IDS (Intrusion Detection System) on Linux
|
||||
============================================================
|
||||
|
||||
|
||||
Tripwire is a popular Linux Intrusion Detection System (IDS) that runs on a system to detect whether unauthorized filesystem changes have occurred over time.
|
||||
|
||||
In CentOS and RHEL distributions, tripwire is not a part of official repositories. However, the tripwire package can be installed via [Epel repositories][1].
|
||||
|
||||
To begin, first install the Epel repository on your CentOS or RHEL system by issuing the command below.
|
||||
|
||||
```
|
||||
# yum install epel-release
|
||||
```
|
||||
|
||||
After you’ve installed Epel repositories, make sure you update the system with the following command.
|
||||
|
||||
```
|
||||
# yum update
|
||||
```
|
||||
|
||||
After the update process finishes, install Tripwire IDS software by executing the below command.
|
||||
|
||||
```
|
||||
# yum install tripwire
|
||||
```
|
||||
|
||||
Fortunately, tripwire is a part of the default Ubuntu and Debian repositories and can be installed with the following commands.
|
||||
|
||||
```
|
||||
$ sudo apt update
|
||||
$ sudo apt install tripwire
|
||||
```
|
||||
|
||||
On Ubuntu and Debian, the tripwire installer will ask you to choose and confirm a site key and a local key passphrase. These keys are used by tripwire to secure its configuration files.
|
||||
|
||||
[][2]
|
||||
|
||||
Create Tripwire Site and Local Key
|
||||
|
||||
On CentOS and RHEL, you need to create the tripwire keys with the below command and supply a passphrase for the site key and the local key.
|
||||
|
||||
```
|
||||
# tripwire-setup-keyfiles
|
||||
```
|
||||
[][3]
|
||||
|
||||
Create Tripwire Keys
|
||||
|
||||
In order to validate your system, you need to initialize the Tripwire database with the following command. Because the database hasn't been initialized yet, tripwire will display a lot of false-positive warnings.
|
||||
|
||||
```
|
||||
# tripwire --init
|
||||
```
|
||||
[][4]
|
||||
|
||||
Initialize Tripwire Database
|
||||
|
||||
Finally, generate a tripwire system report in order to check the configuration by issuing the command below. Use the `--help` switch to list all tripwire check command options.
|
||||
|
||||
```
|
||||
# tripwire --check --help
|
||||
# tripwire --check
|
||||
```
|
||||
|
||||
After the tripwire check command completes, review the report by opening the file with the `.twr` extension from the /var/lib/tripwire/report/ directory in your favorite text editor, but first you need to convert it to a plain-text file.
|
||||
|
||||
```
|
||||
# twprint --print-report --twrfile /var/lib/tripwire/report/tecmint-20170727-235255.twr > report.txt
|
||||
# vi report.txt
|
||||
```
|
||||
[][5]
|
||||
|
||||
Tripwire System Report
|
||||
|
||||
That's it! You have successfully installed Tripwire on your Linux server. I hope you can now easily configure your [Tripwire IDS][6].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
作者简介:
|
||||
|
||||
I'm a computer-addicted guy, a fan of open source and Linux-based system software, with about 4 years of experience with Linux desktops, servers, and bash scripting.
|
||||
|
||||
-------
|
||||
|
||||
via: https://www.tecmint.com/install-tripwire-ids-intrusion-detection-system-on-linux/
|
||||
|
||||
作者:[ Matei Cezar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.tecmint.com/author/cezarmatei/
|
||||
[1]:https://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
|
||||
[2]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Site-and-Local-key.png
|
||||
[3]:https://www.tecmint.com/wp-content/uploads/2018/01/Create-Tripwire-Keys.png
|
||||
[4]:https://www.tecmint.com/wp-content/uploads/2018/01/Initialize-Tripwire-Database.png
|
||||
[5]:https://www.tecmint.com/wp-content/uploads/2018/01/Tripwire-System-Report.png
|
||||
[6]:https://www.tripwire.com/
|
||||
[7]:https://www.tecmint.com/author/cezarmatei/
|
||||
[8]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
|
||||
[9]:https://www.tecmint.com/free-linux-shell-scripting-books/
|
@ -1,174 +0,0 @@
|
||||
Translating by yizhuoyan
|
||||
|
||||
Linux rm Command Explained for Beginners (8 Examples)
|
||||
======
|
||||
|
||||
Deleting files is a fundamental operation, just like copying files or renaming/moving them. In Linux, there's a dedicated command, dubbed **rm**, that lets you perform all deletion-related operations. In this tutorial, we will discuss the basics of this tool along with some easy-to-understand examples.
|
||||
|
||||
But before we do that, it's worth mentioning that all examples mentioned in the article have been tested on Ubuntu 16.04 LTS.
|
||||
|
||||
#### Linux rm command
|
||||
|
||||
So in layman's terms, we can simply say the rm command is used for removing/deleting files and directories. Following is the syntax of the command:
|
||||
|
||||
```
|
||||
rm [OPTION]... [FILE]...
|
||||
```
|
||||
|
||||
And here's how the tool's man page describes it:
|
||||
```
|
||||
This manual page documents the GNU version of rm. rm removes each specified file. By default, it
|
||||
does not remove directories.
|
||||
|
||||
If the -I or --interactive=once option is given, and there are more than three files or the -r,
|
||||
-R, or --recursive are given, then rm prompts the user for whether to proceed with the entire
|
||||
operation. If the response is not affirmative, the entire command is aborted.
|
||||
|
||||
Otherwise, if a file is unwritable, standard input is a terminal, and the -f or --force option is
|
||||
not given, or the -i or --interactive=always option is given, rm prompts the user for whether to
|
||||
remove the file. If the response is not affirmative, the file is skipped.
|
||||
```
|
||||
|
||||
The following Q&A-styled examples will give you a better idea of how the tool works.
|
||||
|
||||
#### Q1. How to remove files using rm command?
|
||||
|
||||
That's pretty easy and straightforward. All you have to do is to pass the name of the files (along with paths if they are not in the current working directory) as input to the rm command.
|
||||
|
||||
```
|
||||
rm [filename]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
rm testfile.txt
|
||||
```
|
||||
|
||||
[![How to remove files using rm command][1]][2]
|
||||
|
||||
#### Q2. How to remove directories using rm command?
|
||||
|
||||
If you are trying to remove a directory, then you need to use the **-r** command line option. Otherwise, rm will throw an error saying what you are trying to delete is a directory.
|
||||
|
||||
```
|
||||
rm -r [dir name]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
```
|
||||
rm -r testdir
|
||||
```
|
||||
|
||||
[![How to remove directories using rm command][3]][4]
|
||||
|
||||
#### Q3. How to make rm prompt before every removal?
|
||||
|
||||
If you want rm to prompt before each delete action it performs, then use the **-i** command line option.
|
||||
|
||||
```
|
||||
rm -i [file or dir]
|
||||
```
|
||||
|
||||
For example, suppose you want to delete a directory 'testdir' and all its contents, but want rm to prompt before every deletion, then here's how you can do that:
|
||||
|
||||
```
|
||||
rm -r -i testdir
|
||||
```
|
||||
|
||||
[![How to make rm prompt before every removal][5]][6]
|
||||
|
||||
#### Q4. How to force rm to ignore nonexistent files?
|
||||
|
||||
The rm command lets you know through an error message if you try deleting a non-existent file or directory.
|
||||
|
||||
[![Linux rm command example][7]][8]
|
||||
|
||||
However, if you want, you can make rm suppress such errors/notifications - all you have to do is use the **-f** command line option.
|
||||
|
||||
```
|
||||
rm -f [filename]
|
||||
```
|
||||
|
||||
[![How to force rm to ignore nonexistent files][9]][10]
|
||||
|
||||
#### Q5. How to make rm prompt only in some scenarios?
|
||||
|
||||
There exists a command line option, **-I**, which, when used, makes sure the command prompts only once before removing more than three files, or when removing recursively.
|
||||
|
||||
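In other words, invocations along these lines (file and directory names are illustrative):

```
rm -I file1 file2 file3 file4   # more than three files: a single prompt for the batch
rm -I -r testdir                # recursive removal: a single prompt up front
```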
For example, the following screenshot shows this option in action - there was no prompt when two files were deleted, but the command prompted when more than three files were deleted.
|
||||
|
||||
[![How to make rm prompt only in some scenarios][11]][12]
|
||||
|
||||
#### Q6. How does rm deal with the root directory?
|
||||
|
||||
Of course, deleting the root directory is the last thing a Linux user would want. That's why the rm command doesn't let you perform a recursive delete operation on this directory by default.
|
||||
|
||||
[![How rm works when dealing with root directory][13]][14]
|
||||
|
||||
However, if you want to go ahead with this operation for whatever reason, then you need to tell this to rm by using the **--no-preserve-root** option. When this option is enabled, rm doesn't treat the root directory (/) specially.
|
||||
|
||||
In case you want to know the scenarios in which a user might want to delete the root directory of their system, head [here][15].
|
||||
|
||||
#### Q7. How to make rm only remove empty directories?
|
||||
|
||||
In case you want to restrict rm's directory deletion ability to empty directories only, you can use the **-d** command line option.
|
||||
|
||||
```
|
||||
rm -d [dir]
|
||||
```
|
||||
|
||||
The following screenshot shows the -d command line option in action - only the empty directory got deleted.
|
||||
|
||||
[![How to make rm only remove empty directories][16]][17]
|
||||
|
||||
#### Q8. How to force rm to emit details of the operation it is performing?
|
||||
|
||||
If you want rm to display detailed information of the operation being performed, then this can be done by using the **-v** command line option.
|
||||
|
||||
```
|
||||
rm -v [file or directory name]
|
||||
```
|
||||
|
||||
For example:
|
||||
|
||||
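With GNU coreutils rm, a run looks like this (testfile.txt is an illustrative name):

```
$ rm -v testfile.txt
removed 'testfile.txt'
```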
[![How to force rm to emit details of operation it is performing][18]][19]
|
||||
|
||||
#### Conclusion
|
||||
|
||||
Given the kind of functionality it offers, rm is one of the most frequently used commands in Linux (like [cp][20] and mv). Here, in this tutorial, we have covered almost all major command line options this tool provides. rm has a bit of a learning curve associated with it, so you'll have to spend some time practicing its options before you start using the tool in your day-to-day work. For more information, head to the command's [man page][21].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.howtoforge.com/linux-rm-command/
|
||||
|
||||
作者:[Himanshu Arora][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.howtoforge.com
|
||||
[1]:https://www.howtoforge.com/images/command-tutorial/rm-basic-usage.png
|
||||
[2]:https://www.howtoforge.com/images/command-tutorial/big/rm-basic-usage.png
|
||||
[3]:https://www.howtoforge.com/images/command-tutorial/rm-r.png
|
||||
[4]:https://www.howtoforge.com/images/command-tutorial/big/rm-r.png
|
||||
[5]:https://www.howtoforge.com/images/command-tutorial/rm-i-option.png
|
||||
[6]:https://www.howtoforge.com/images/command-tutorial/big/rm-i-option.png
|
||||
[7]:https://www.howtoforge.com/images/command-tutorial/rm-non-ext-error.png
|
||||
[8]:https://www.howtoforge.com/images/command-tutorial/big/rm-non-ext-error.png
|
||||
[9]:https://www.howtoforge.com/images/command-tutorial/rm-f-option.png
|
||||
[10]:https://www.howtoforge.com/images/command-tutorial/big/rm-f-option.png
|
||||
[11]:https://www.howtoforge.com/images/command-tutorial/rm-I-option.png
|
||||
[12]:https://www.howtoforge.com/images/command-tutorial/big/rm-I-option.png
|
||||
[13]:https://www.howtoforge.com/images/command-tutorial/rm-root-default.png
|
||||
[14]:https://www.howtoforge.com/images/command-tutorial/big/rm-root-default.png
|
||||
[15]:https://superuser.com/questions/742334/is-there-a-scenario-where-rm-rf-no-preserve-root-is-needed
|
||||
[16]:https://www.howtoforge.com/images/command-tutorial/rm-d-option.png
|
||||
[17]:https://www.howtoforge.com/images/command-tutorial/big/rm-d-option.png
|
||||
[18]:https://www.howtoforge.com/images/command-tutorial/rm-v-option.png
|
||||
[19]:https://www.howtoforge.com/images/command-tutorial/big/rm-v-option.png
|
||||
[20]:https://www.howtoforge.com/linux-cp-command/
|
||||
[21]:https://linux.die.net/man/1/rm
|
193
sources/tech/20180123 Migrating to Linux- The Command Line.md
Normal file
@ -0,0 +1,193 @@
|
||||
Migrating to Linux: The Command Line
|
||||
======
|
||||
|
||||

|
||||
|
||||
This is the fourth article in our series on migrating to Linux. If you missed the previous installments, we've covered [Linux for new users][1], [files and filesystems][2], and [graphical environments][3]. Linux is everywhere. It's used to run most Internet services like web servers, email servers, and others. It's also used in your cell phone, your car console, and a whole lot more. So, you might be curious to try out Linux and learn more about how it works.
|
||||
|
||||
Under Linux, the command line is very useful. On desktop Linux systems, although the command line is optional, you will often see people have a command line window open alongside other application windows. On Internet servers, and when Linux is running in a device, the command line is often the only way to interact directly with the system. So, it's good to know at least some command line basics.
|
||||
|
||||
In the command line (often called a shell in Linux), everything is done by entering commands. You can list files, move files, display the contents of files, edit files, and more, even display web pages, all from the command line.
|
||||
|
||||
If you are already familiar with using the command line in Windows (either CMD.EXE or PowerShell), you may want to jump down to the section titled Familiar with Windows Command Line? and read that first.
|
||||
|
||||
### Navigating
|
||||
|
||||
In the command line, there is the concept of the current working directory (Note: a folder and a directory are synonymous, and in Linux they're usually called directories). Many commands will look in this directory by default if no other directory path is specified. For example, typing ls to list files will list the files in this working directory:
|
||||
```
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
```
|
||||
|
||||
The command ls Documents will instead list files in the Documents directory:
|
||||
```
|
||||
$ ls Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
You can display the current working directory by typing pwd. For example:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
```
|
||||
|
||||
You can change the current directory by typing cd and then the directory you want to change to. For example:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
```
|
||||
|
||||
A directory path is a list of directories separated by a / (slash) character. The directories in a path have an implied hierarchy; for example, the path /home/student expects there to be a directory named home in the top directory, and a directory named student inside that home directory.
|
||||
|
||||
Directory paths are either absolute or relative. Absolute directory paths start with the / character.
|
||||
|
||||
Relative paths start with either . (dot) or .. (dot dot). In a path, a . (dot) means the current directory, and .. (dot dot) means one directory up from the current one. For example, ls ../Documents means look in the directory up one from the current one and show the contents of the directory named Documents in there:
|
||||
```
|
||||
$ pwd
|
||||
/home/student
|
||||
$ ls
|
||||
Desktop Documents Downloads Music Pictures README.txt Videos
|
||||
$ cd Downloads
|
||||
$ pwd
|
||||
/home/student/Downloads
|
||||
$ ls ../Documents
|
||||
report.txt todo.txt EmailHowTo.pdf
|
||||
```
|
||||
|
||||
When you first open a command line window on a Linux system, your current working directory is set to your home directory, usually: /home/<your login name here>. Your home directory is dedicated to your login where you can store your own files.
|
||||
|
||||
The environment variable $HOME expands to the directory path to your home directory. For example:
|
||||
```
|
||||
$ echo $HOME
|
||||
/home/student
|
||||
```
|
||||
|
||||
The following table shows a summary of some of the common commands used to navigate directories and manage simple text files.
|
||||
|
||||
### Searching
|
||||
|
||||
Sometimes I forget where a file resides, or I forget the name of the file I am looking for. There are a couple of commands in the Linux command line that you can use to help you find files and search the contents of files.
|
||||
|
||||
The first command is find. You can use find to search for files and directories by name or other attribute. For example, if I forgot where I kept my todo.txt file, I can run the following:
|
||||
```
|
||||
$ find $HOME -name todo.txt
|
||||
/home/student/Documents/todo.txt
|
||||
```
|
||||
|
||||
The find program has a lot of features and options. A simple form of the command is:
|
||||
find <directory to search> -name <filename>
|
||||
|
||||
If there is more than one file named todo.txt from the example above, it will show me all the places where it found a file by that name. The find command has many options to search by type (file, directory, or other), by date, newer than date, by size, and more. You can type:
|
||||
```
|
||||
man find
|
||||
```
|
||||
|
||||
to get help on how to use the find command.
|
||||
|
||||
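For instance, here are a few of those options in action (the patterns and sizes are illustrative):

```
find $HOME -type d -name 'Doc*'      # directories whose names start with Doc
find $HOME -type f -size +10M        # regular files larger than 10 megabytes
find $HOME -type f -newer todo.txt   # files modified more recently than todo.txt
```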
You can also use a command called grep to search inside files for specific contents. For example:
|
||||
```
|
||||
grep "01/02/2018" todo.txt
|
||||
```
|
||||
|
||||
will show me all the lines that have the January 2, 2018 date in them.
|
||||
|
||||
### Getting Help
|
||||
|
||||
There are a lot of commands in Linux, and it would be too much to describe all of them here. So the next best step is to show how to get help on commands.
|
||||
|
||||
The command apropos helps you find commands that do certain things. Maybe you want to find out all the commands that operate on directories or get a list of open files, but you don't know what command to run. So, you can try:
|
||||
```
|
||||
apropos directory
|
||||
```
|
||||
|
||||
which will give a list of commands that have the word "directory" in their help text. Or, you can do:
|
||||
```
|
||||
apropos "list open files"
|
||||
```
|
||||
|
||||
which will show one command, lsof, that you can use to list open files.
|
||||
|
||||
If you know the command you need to use but aren't sure which options to use to get it to behave the way you want, you can use the command called man, which is short for manual. You would use man <command>, for example:
|
||||
```
|
||||
man ls
|
||||
```
|
||||
|
||||
You can try man ls on your own. It will give several pages of information.
|
||||
|
||||
The man command explains all the options and parameters you can give to a command, and often will even give an example.
|
||||
|
||||
Many commands often also have a help option (e.g., ls --help), which will give information on how to use a command. The man pages are usually more detailed, while the --help option is useful for a quick lookup.
|
||||
|
||||
### Scripts
|
||||
|
||||
One of the best things about the Linux command line is that the commands that are typed in can be scripted, and run over and over again. Commands can be placed as separate lines in a file. You can put #!/bin/sh as the first line in the file, followed by the commands. Then, once the file is marked as executable, you can run the script as if it were its own command. For example,
|
||||
```
|
||||
--- contents of get_todays_todos.sh ---
|
||||
#!/bin/sh
|
||||
todays_date=`date +"%m/%d/%y"`
|
||||
grep $todays_date $HOME/todos.txt
|
||||
```
|
||||
|
||||
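Marking the script executable and then running it looks like this:

```
$ chmod +x get_todays_todos.sh
$ ./get_todays_todos.sh
```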
Scripts help automate certain tasks in a set of repeatable steps. Scripts can also get very sophisticated if needed, with loops, conditional statements, routines, and more. There's not space here to go into detail, but you can find more information about Linux bash scripting online.
|
||||
|
||||
### Familiar with Windows Command Line?
|
||||
|
||||
If you are familiar with the Windows CMD or PowerShell program, typing commands at a command prompt should feel familiar. However, several things work differently in Linux, and if you don't understand those differences, it may be confusing.
|
||||
|
||||
First, under Linux, the PATH environment variable works differently than it does under Windows. In Windows, the current directory is assumed to be the first directory on the path, even though it's not listed in the list of directories in PATH. Under Linux, the current directory is not assumed to be on the path, and it is not explicitly put on the path either. Putting . in the PATH environment variable is considered to be a security risk under Linux. In Linux, to run a program in the current directory, you need to prefix it with ./ (which is the file's relative path from the current directory). This trips up a lot of CMD users. For example:
|
||||
```
|
||||
./my_program
|
||||
```
|
||||
|
||||
rather than
|
||||
```
|
||||
my_program
|
||||
```
|
||||
|
||||
In addition, in Windows paths are separated by a ; (semicolon) character in the PATH environment variable. On Linux, in PATH, directories are separated by a : (colon) character. Also in Linux, directories in a single path are separated by a / (slash) character while under Windows directories in a single path are separated by a \ (backslash) character. So a typical PATH environment variable in Windows might look like:
|
||||
```
PATH="C:\Program Files;C:\Program Files\Firefox;"
```

while on Linux it might look like:

```
PATH="/usr/bin:/opt/mozilla/firefox"
```
|
||||
|
||||
Also note that environment variables are expanded with a $ on Linux, so $PATH expands to the contents of the PATH environment variable whereas in Windows you need to enclose the variable in percent symbols (e.g., %PATH%).
|
||||
|
||||
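For example (the output will vary by system):

```
$ echo $PATH
/usr/bin:/usr/local/bin:/home/student/bin
```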
In Linux, options are commonly passed to programs using a - (dash) character in front of the option, while under Windows options are passed by preceding options with a / (slash) character. So, under Linux, you would do:
|
||||
```
|
||||
a_prog -h
|
||||
```
|
||||
|
||||
rather than
|
||||
```
|
||||
a_prog /h
|
||||
```
|
||||
|
||||
Under Linux, file extensions generally don't signify anything. For example, renaming myscript to myscript.bat doesn't make it executable. Instead, to make a file executable, the file's executable permission flag needs to be set. File permissions are covered in more detail next time.
|
||||
|
||||
Under Linux when file and directory names start with a . (dot) character they are hidden. So, for example, if you're told to edit the file, .bashrc, and you don't see it in your home directory, it probably really is there. It's just hidden. In the command line, you can use option -a on the command ls to see hidden files. For example:
|
||||
```
|
||||
ls -a
|
||||
```
|
||||
|
||||
Under Linux, common commands are also different from those in the Windows command line. The following table shows a mapping from common items used under CMD to the alternatives used under Linux.
|
||||
|
||||

|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line
|
||||
|
||||
作者:[John Bonesio][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/johnbonesio
|
||||
[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction
|
||||
[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems
|
||||
[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments
|
@ -1,61 +0,0 @@
|
||||
Containers, the GPL, and copyleft: No reason for concern
|
||||
============================================================
|
||||
|
||||
### Wondering how open source licensing affects Linux containers? Here's what you need to know.
|
||||
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPL’s derivative work issue.
|
||||
|
||||
One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we haven’t yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow. But it’s fairly straightforward to show that containers do _not_ raise new or concerning GPL scope issues.
|
||||
|
||||
Statutes and case law provide little help in interpreting a license like the GPL. On the other hand, many of us give significant weight to the interpretive views of the Free Software Foundation (FSF), the drafter and steward of the GPL, even in the typical case where the FSF is not a copyright holder of the software at issue. In addition to being the author of the license text, the FSF has been engaged for many years in providing commentary and guidance on its licenses to the community. Its views have special credibility and influence based on its public interest mission and leadership in free software policy.
|
||||
|
||||
The FSF’s existing guidance on GPL interpretation has relevance for understanding the effects of including GPL and non-GPL code in containers. The FSF has placed emphasis on the process boundary when considering copyleft scope, and on the mechanism and semantics of the communication between multiple software components to determine whether they are closely integrated enough to be considered a single program for GPL purposes. For example, the [GNU Licenses FAQ][4] takes the view that pipes, sockets, and command-line arguments are mechanisms that are normally suggestive of separateness (in the absence of sufficiently "intimate" communications).
|
||||
|
||||
Consider the case of a container in which both GPL code and proprietary code might coexist and execute. A container is, in essence, an isolated userspace stack. In the [OCI container image format][5], code is packaged as a set of filesystem changeset layers, with the base layer normally being a stripped-down conventional Linux distribution without a kernel. As with the userspace of non-containerized Linux distributions, these base layers invariably contain many GPL-licensed packages (both GPLv2 and GPLv3), as well as packages under licenses considered GPL-incompatible, and commonly function as a runtime for proprietary as well as open source applications. The ["mere aggregation" clause][6] in GPLv2 (as well as its counterpart GPLv3 provision on ["aggregates"][7]) shows that this type of combination is generally acceptable, is specifically contemplated under the GPL, and has no effect on the licensing of the two programs, assuming incompatibly licensed components are separate and independent.
|
||||
|
||||
Of course, in a given situation, the relationship between two components may not be "mere aggregation," but the same is true of software running in non-containerized userspace on a Linux system. There is nothing in the technical makeup of containers or container images that suggests a need to apply a special form of copyleft scope analysis.
|
||||
|
||||
It follows that when looking at the relationship between code running in a container and code running outside a container, the "separate and independent" criterion is almost certainly met. The code will run as separate processes, and the whole technical point of using containers is isolation from other software running on the system.
|
||||
|
||||
Now consider the case where two components, one GPL-licensed and one proprietary, are running in separate but potentially interacting containers, perhaps as part of an application designed with a [microservices][8] architecture. In the absence of very unusual facts, we should not expect to see copyleft scope extending across multiple containers. Separate containers involve separate processes. Communication between containers by way of network interfaces is analogous to such mechanisms as pipes and sockets, and a multi-container microservices scenario would seem to preclude what the FSF calls "[intimate][9]" communication by definition. The composition of an application using multiple containers may not be dispositive of the GPL scope issue, but it makes the technical boundaries between the components more apparent and provides a strong basis for arguing separateness. Here, too, there is no technical feature of containers that suggests application of a different and stricter approach to copyleft scope analysis.
|
||||
|
||||
A company that is overly concerned with the potential effects of distributing GPL-licensed code might attempt to prohibit its developers from adding any such code to a container image that it plans to distribute. Insofar as the aim is to avoid distributing code under the GPL, this is a dubious strategy. As noted above, the base layers of conventional container images will contain multiple GPL-licensed components. If the company pushes a container image to a registry, there is normally no way it can guarantee that this will not include the base layer, even if it is widely shared.
|
||||
|
||||
On the other hand, the company might decide to embrace containerization as a means of limiting copyleft scope issues by isolating GPL and proprietary code—though one would hope that technical benefits would drive the decision, rather than legal concerns likely based on unfounded anxiety about the GPL. While in a non-containerized setting the relationship between two interacting software components will often be mere aggregation, the evidence of separateness that containers provide may be comforting to those who worry about GPL scope.
|
||||
|
||||
Open source license compliance obligations may arise when sharing container images. But there’s nothing technically different or unique about containers that changes the nature of these obligations or makes them harder to satisfy. With respect to copyleft scope, containerization should, if anything, ease the concerns of the extra-cautious.
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
[][10] Richard Fontana - Richard is Senior Commercial Counsel on the Products and Technologies team in Red Hat's legal department. Most of his work focuses on open source-related legal issues.[More about me][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/containers-gpl-and-copyleft
|
||||
|
||||
作者:[Richard Fontana ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/fontana
|
||||
[1]:https://opensource.com/article/18/1/containers-gpl-and-copyleft?rate=qTlANxnuA2tf0hcGE6Po06RGUzcbB-cBxbU3dCuCt9w
|
||||
[2]:https://opensource.com/users/fontana
|
||||
[3]:https://opensource.com/user/10544/feed
|
||||
[4]:https://www.gnu.org/licenses/gpl-faq.en.html#MereAggregation
|
||||
[5]:https://github.com/opencontainers/image-spec/blob/master/spec.md
|
||||
[6]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section2
|
||||
[7]:https://www.gnu.org/licenses/gpl.html#section5
|
||||
[8]:https://www.redhat.com/en/topics/microservices
|
||||
[9]:https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins
|
||||
[10]:https://opensource.com/users/fontana
|
||||
[11]:https://opensource.com/users/fontana
|
||||
[12]:https://opensource.com/users/fontana
|
||||
[13]:https://opensource.com/tags/licensing
|
||||
[14]:https://opensource.com/tags/containers
|
File diff suppressed because it is too large
@ -0,0 +1,153 @@
|
||||
Building a Linux-based HPC system on the Raspberry Pi with Ansible
|
||||
============================================================
|
||||
|
||||
### Create a high-performance computing cluster with low-cost hardware and open source software.
|
||||
|
||||

|
||||
Image by : opensource.com
|
||||
|
||||
In my [previous article for Opensource.com][14], I introduced the [OpenHPC][15] project, which aims to accelerate innovation in high-performance computing (HPC). This article goes a step further by using OpenHPC's capabilities to build a small HPC system. To call it an _HPC system_ might sound bigger than it is, so maybe it is better to say this is a system based on the [Cluster Building Recipes][16] published by the OpenHPC project.
|
||||
|
||||
The resulting cluster consists of two Raspberry Pi 3 systems acting as compute nodes and one virtual machine acting as the master node:
|
||||
|
||||
|
||||

|
||||
|
||||
My master node is running CentOS on x86_64 and my compute nodes are running a slightly modified CentOS on aarch64.
|
||||
|
||||
This is what the setup looks like in real life:
|
||||
|
||||
|
||||

|
||||
|
||||
To set up my system like an HPC system, I followed some of the steps from OpenHPC's Cluster Building Recipes [install guide for CentOS 7.4/aarch64 + Warewulf + Slurm][17] (PDF). This recipe includes provisioning instructions using [Warewulf][18]; because I manually installed my three systems, I skipped the Warewulf parts and created an [Ansible playbook][19] for the steps I took.
|
||||
|
||||
|
||||
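The playbooks themselves are linked above; purely as a sketch of their flavor (the package and service names here are my assumptions based on the OpenHPC recipe, not the author's actual tasks), a compute-node play might look like:

```
- hosts: compute
  become: yes
  tasks:
    - name: Install the OpenHPC Slurm client meta-package (assumed name)
      package:
        name: ohpc-slurm-client
        state: present

    - name: Ensure munge and slurmd are running and enabled at boot
      service:
        name: "{{ item }}"
        state: started
        enabled: yes
      loop:
        - munge
        - slurmd
```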
Once my cluster was set up by the [Ansible][26] playbooks, I could start to submit jobs to my resource manager. The resource manager, [Slurm][27] in my case, is the instance in the cluster that decides where and when my jobs are executed. One possibility to start a simple job on the cluster is:
|
||||
```
|
||||
[ohpc@centos01 ~]$ srun hostname
|
||||
calvin
|
||||
```
|
||||
|
||||
If I need more resources, I can tell Slurm that I want to run my command on eight CPUs:
|
||||
|
||||
```
|
||||
[ohpc@centos01 ~]$ srun -n 8 hostname
|
||||
hobbes
|
||||
hobbes
|
||||
hobbes
|
||||
hobbes
|
||||
calvin
|
||||
calvin
|
||||
calvin
|
||||
calvin
|
||||
```
|
||||
|
||||
In the first example, Slurm ran the specified command (`hostname`) on a single CPU, and in the second example Slurm ran the command on eight CPUs. One of my compute nodes is named `calvin` and the other is named `hobbes`; that can be seen in the output of the above commands. Each of the compute nodes is a Raspberry Pi 3 with four CPU cores.
|
||||
|
||||
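A quick way to confirm that both nodes are visible from the master is Slurm's sinfo command; on this two-node cluster, the output looks roughly like the following (illustrative; the partition name comes from the OpenHPC recipe):

```
[ohpc@centos01 ~]$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
normal*      up 1-00:00:00      2   idle calvin,hobbes
```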
Another way to submit jobs to my cluster is the command `sbatch`, which can be used to execute scripts with the output written to a file instead of my terminal.
|
||||
|
||||
```
|
||||
[ohpc@centos01 ~]$ cat script1.sh
|
||||
#!/bin/sh
|
||||
date
|
||||
hostname
|
||||
sleep 10
|
||||
date
|
||||
[ohpc@centos01 ~]$ sbatch script1.sh
|
||||
Submitted batch job 101
|
||||
```
|
||||
|
||||
This will create an output file called `slurm-101.out` with the following content:
|
||||
|
||||
```
|
||||
Mon 11 Dec 16:42:31 UTC 2017
|
||||
calvin
|
||||
Mon 11 Dec 16:42:41 UTC 2017
|
||||
```
|
||||
|
||||
To demonstrate the basic functionality of the resource manager, simple and serial command line tools are suitable—but a bit boring after doing all the work to set up an HPC-like system.
|
||||
|
||||
A more interesting application is running an [Open MPI][20] parallelized job on all available CPUs on the cluster. I'm using an application based on [Game of Life][21], which was used in a [video][22] called "Running Game of Life across multiple architectures with Red Hat Enterprise Linux." In addition to the previously used MPI-based Game of Life implementation, the version now running on my cluster colors the cells for each involved host differently. The following script starts the application interactively with a graphical output:
|
||||
|
||||
```
|
||||
$ cat life.mpi
|
||||
#!/bin/bash
|
||||
|
||||
module load gnu6 openmpi3
|
||||
|
||||
if [[ "$SLURM_PROCID" != "0" ]]; then
|
||||
exit
|
||||
fi
|
||||
|
||||
mpirun ./mpi_life -a -p -b
|
||||
```

I start the job with the following command, which tells Slurm to allocate eight CPUs for the job:

```
$ srun -n 8 --x11 life.mpi
```

For demonstration purposes, the job has a graphical interface that shows the current result of the calculation:

![](https://opensource.com/sites/default/files/u128651/game_of_life_1.png)

The position of the red cells is calculated on one of the compute nodes, and the green cells are calculated on the other compute node. I can also tell the Game of Life program to color the cells differently for each CPU used (there are four per compute node), which leads to the following output:

![](https://opensource.com/sites/default/files/u128651/game_of_life_2.png)

Thanks to the installation recipes and the software packages provided by OpenHPC, I was able to set up two compute nodes and a master node in an HPC-type configuration. I can submit jobs to my resource manager, and I can use the software provided by OpenHPC to start MPI applications utilizing all my Raspberry Pis' CPUs.

* * *

_To learn more about using OpenHPC to build a Raspberry Pi cluster, please attend Adrian Reber's talks at [DevConf.cz 2018][10], January 26-28, in Brno, Czech Republic, and at the [CentOS Dojo 2018][11], on February 2, in Brussels._

### About the author

[][23] Adrian Reber - Adrian is a Senior Software Engineer at Red Hat and has been migrating processes since at least 2010. He started migrating processes in a high-performance computing environment, and at some point he had migrated so many processes that he earned a PhD for it; since joining Red Hat, he has been migrating containers. Occasionally he still migrates single processes and is still interested in high-performance computing topics. [More about me][12]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/how-build-hpc-system-raspberry-pi-and-openhpc

作者:[Adrian Reber][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/adrianreber
[1]:https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
[2]:https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
[3]:https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
[4]:https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
[5]:https://opensource.com/file/384031
[6]:https://opensource.com/file/384016
[7]:https://opensource.com/file/384021
[8]:https://opensource.com/file/384026
[9]:https://opensource.com/article/18/1/how-build-hpc-system-raspberry-pi-and-openhpc?rate=l9n6B6qRcR20LJyXEoUoWEZ4mb2nDc9sFZ1YSPc60vE
[10]:https://devconfcz2018.sched.com/event/DJYi/openhpc-introduction
[11]:https://wiki.centos.org/Events/Dojo/Brussels2018
[12]:https://opensource.com/users/adrianreber
[13]:https://opensource.com/user/188446/feed
[14]:https://opensource.com/article/17/11/openhpc
[15]:https://openhpc.community/
[16]:https://openhpc.community/downloads/
[17]:https://github.com/openhpc/ohpc/releases/download/v1.3.3.GA/Install_guide-CentOS7-Warewulf-SLURM-1.3.3-aarch64.pdf
[18]:https://en.wikipedia.org/wiki/Warewulf
[19]:http://people.redhat.com/areber/openhpc/ansible/
[20]:https://www.open-mpi.org/
[21]:https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
[22]:https://www.youtube.com/watch?v=n8DvxMcOMXk
[23]:https://opensource.com/users/adrianreber
[24]:https://opensource.com/users/adrianreber
[25]:https://opensource.com/users/adrianreber
[26]:https://www.ansible.com/
[27]:https://slurm.schedmd.com/
[28]:https://opensource.com/tags/raspberry-pi
[29]:https://opensource.com/tags/programming
[30]:https://opensource.com/tags/linux
[31]:https://opensource.com/tags/ansible
@ -1,109 +0,0 @@
translating by wenwensnow

Linux whereis Command Explained for Beginners (5 Examples)
======

Sometimes, while working on the command line, we just need to quickly find out the location of the binary file for a command. Yes, the [find][1] command is an option in this case, but it's a bit time-consuming and will likely produce some non-desired results as well. There's a specific command designed for this purpose: **whereis**.

In this article, we will discuss the basics of this command using some easy-to-understand examples. But before we do that, it's worth mentioning that all examples in this tutorial have been tested on Ubuntu 16.04 LTS.

### Linux whereis command

The whereis command lets users locate the binary, source, and manual page files for a command. Following is its syntax:

```
whereis [options] [-BMS directory... -f] name...
```

And here's how the tool's man page explains it:

```
whereis locates the binary, source and manual files for the specified command names. The supplied
names are first stripped of leading pathname components and any (single) trailing extension of the
form .ext (for example: .c) Prefixes of s. resulting from use of source code control are also dealt
with. whereis then attempts to locate the desired program in the standard Linux places, and in the
places specified by $PATH and $MANPATH.
```

The following Q&A-styled examples should give you a good idea of how the whereis command works.

### Q1. How to find location of binary file using whereis?

Suppose you want to find the location of, let's say, the whereis command itself. Here's how you can do that:

```
whereis whereis
```

[![How to find location of binary file using whereis][2]][3]

Note that the first path in the output is what you are looking for. The whereis command also produces paths for manual pages and source code (if available, which isn't the case here). So the second path you see in the output above is the path to the whereis manual file(s).
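
For reference, the output follows the pattern `name: path path ...`; on a typical Ubuntu system it looks roughly like the following sketch (exact paths vary by distribution):

```
whereis: /usr/bin/whereis /usr/share/man/man1/whereis.1.gz
```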

### Q2. How to specifically search for binaries, manuals, or source code?

If you want to search specifically for, say, binaries, you can use the **-b** command line option. For example:

```
whereis -b cp
```

[![How to specifically search for binaries, manuals, or source code][4]][5]

Similarly, the **-m** and **-s** options are used if you want to find manuals and sources.
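
For instance, restricting the same lookup to manual pages or source files looks like this (the source search typically comes back empty unless the sources are installed):

```
# Only manual page locations for cp
whereis -m cp

# Only source locations for cp
whereis -s cp
```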

### Q3. How to limit whereis search as per requirement?

By default, whereis tries to find files in hard-coded paths, which are defined with glob patterns. However, if you want, you can limit the search using specific command line options. For example, if you want whereis to only search for binary files in /usr/bin, you can do this with the **-B** command line option:

```
whereis -B /usr/bin/ -f cp
```

**Note**: Since you can pass multiple paths this way, the **-f** command line option terminates the directory list and signals the start of file names.

Similarly, if you want to limit manual or source searches, you can use the **-M** and **-S** command line options.
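
As a sketch, the same pattern applies to manual pages; here the man page search for cp is limited to the man1 directory (the exact man path varies by distribution):

```
# Restrict the manual search to /usr/share/man/man1; -f ends the directory list
whereis -m -M /usr/share/man/man1/ -f cp
```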

### Q4. How to see paths that whereis uses for search?

There's an option for this as well. Just run the command with **-l**:

```
whereis -l
```

Here is the (partial) list it produced for us:

[![How to see paths that whereis uses for search][6]][7]

### Q5. How to find command names with unusual entries?

For whereis, a command is considered unusual if it does not have exactly one entry of each explicitly requested type. For example, commands with no documentation available, or with documentation in multiple places, are considered unusual. The **-u** command line option makes whereis show the command names that have unusual entries.

For example, the following command should display files in the current directory that have no documentation file, or more than one:

```
whereis -m -u *
```
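
A concrete way to use this is to change into a directory full of commands first, so that the shell glob expands to command names; this sketch checks /usr/bin:

```
# List the commands in /usr/bin that have zero or multiple man page entries
cd /usr/bin
whereis -u -m *
```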
### Conclusion

Agreed, whereis is not the kind of command line tool that you'll need very frequently. But when the situation arises, it definitely makes your life easier. We've covered some of the important command line options the tool offers, so do practice them. For more info, head to its [man page][8].

--------------------------------------------------------------------------------

via: https://www.howtoforge.com/linux-whereis-command/

作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.howtoforge.com
[1]:https://www.howtoforge.com/tutorial/linux-find-command/
[2]:https://www.howtoforge.com/images/command-tutorial/whereis-basic-usage.png
[3]:https://www.howtoforge.com/images/command-tutorial/big/whereis-basic-usage.png
[4]:https://www.howtoforge.com/images/command-tutorial/whereis-b-option.png
[5]:https://www.howtoforge.com/images/command-tutorial/big/whereis-b-option.png
[6]:https://www.howtoforge.com/images/command-tutorial/whereis-l.png
[7]:https://www.howtoforge.com/images/command-tutorial/big/whereis-l.png
[8]:https://linux.die.net/man/1/whereis
@ -0,0 +1,106 @@
An introduction to the Web::Simple Perl module, a minimalist web framework
============================================================

### Perl module Web::Simple is easy to learn and packs a big enough punch for a variety of one-offs and smaller services.



Image credits: [You as a Machine][10]. Modified by Rikki Endsley. [CC BY-SA 2.0][11].

One of the more prominent members of the Perl community is [Matt Trout][12], technical director at [Shadowcat Systems][13]. He's been building core tools for Perl applications for years, including co-maintaining the [Catalyst][14] MVC (Model, View, Controller) web framework, creating the [DBIx::Class][15] object-management system, and much more. In person, he's energetic, interesting, brilliant, and sometimes hard to keep up with. When Matt writes code…well, think of a runaway chainsaw, with the trigger taped down and the safety features disabled. He's off and running, and you never quite know what will come out. Two things are almost certain: the module will precisely fit the purpose Matt has in mind, and it will show up on CPAN for others to use.

One of Matt's special-purpose modules is [Web::Simple][23]. Touted as "a quick and easy way to build simple web applications," it is a stripped-down, minimalist web framework with an easy-to-learn interface. Web::Simple is not at all designed for a large-scale application; however, it may be ideal for a small tool that does one or two things in a lower-traffic environment. I can also envision it being used for rapid prototyping if you wanted to create quick wireframes of a new application for demonstrations.

### Installation, and a quick "Howdy!"

You can install the module using `cpan` or `cpanm`. Once you've got it installed, you're ready to write simple web apps without having to hassle with managing the connections or any of that—just your functionality. Here's a quick example:

```
#!/usr/bin/perl
package HelloReader;
use Web::Simple;

sub dispatch_request {
  # Any GET request gets a friendly greeting
  GET => sub {
    [ 200, [ 'Content-type', 'text/plain' ], [ 'Howdy, Opensource.com reader!' ] ]
  },
  # Anything else is refused with a 405 Method Not Allowed
  '' => sub {
    [ 405, [ 'Content-type', 'text/plain' ], [ 'You cannot do that, friend. Sorry.' ] ]
  }
}

HelloReader->run_if_script;
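```

A note on running it: `run_if_script` lets the same file work in several contexts. Here is a sketch, assuming the example above is saved as `hello.pl` (Plack is needed for the first option):

```
# Serve it as a PSGI app with Plack's development server
plackup hello.pl

# Or issue a request straight from the command line for a quick check
perl hello.pl GET /
```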

There are a couple of things to notice right off. For one, I didn't `use strict` and `use warnings` like I usually would. Web::Simple imports those for you, so you don't have to. It also imports [Moo][16], a minimalist OO framework, so if you know Moo and want to use it here, you can! The heart of the system lies in the `dispatch_request` method, which you must define in your application. Each entry in the method is a match string, followed by a subroutine to respond if that string matches. The subroutine must return an array reference containing the status, headers, and content of the reply to the request.
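
To make the shape of that return value explicit, here is the response arrayref from the example above, annotated:

```
[
  200,                                  # HTTP status code
  [ 'Content-type', 'text/plain' ],     # header name/value pairs
  [ 'Howdy, Opensource.com reader!' ],  # body, as an arrayref of strings
]
```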

### Matching

The matching system in Web::Simple is powerful, allowing for complicated matches, passing parameters in a URL, query parameters, and extension matches, in pretty much any combination you want. As you can see in the example above, starting with a capital letter will match on the request method, and you can combine that with a path match easily:

```
'GET + /person/*' => sub {
  my ($self, $person) = @_;
  # write some code to retrieve and display a person
},
'POST + /person/* + %*' => sub {
  my ($self, $person, $params) = @_;
  # write some code to modify a person, perhaps
}
```

In the latter case, the third part of the match indicates that we should pick up all the POST parameters and put them in a hashref called `$params` for use by the subroutine. Using `?` instead of `%` in that part of the match would pick up query parameters, as normally used in a GET request.
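
For example, a query-parameter match might look like this sketch (the path and parameter name are illustrative, not from the original examples):

```
# Matches e.g. GET /search?q=perl; the query parameters land in $params
'GET + /search + ?*' => sub {
  my ($self, $params) = @_;
  # $params->{q} would hold the search term here
}
```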

There's also a useful exported subroutine called `redispatch_to`. This tool lets you redirect without using a 3xx redirect; it's handled internally, invisibly to the user. So:

```
'GET + /some/url' => sub {
  redispatch_to '/some/other/url';
}
```

A GET request to `/some/url` would get handled as if it were sent to `/some/other/url`, without a redirect, and the user won't see a redirect in their browser.

I've just scratched the surface with this module. If you're looking for something production-ready for larger projects, you'll be better off with [Dancer][17] or [Catalyst][18]. But with its light weight and built-in Moo integration, Web::Simple packs a big enough punch for a variety of one-offs and smaller services.

### About the author

[][19] Ruth Holloway - Ruth Holloway has been a system administrator and software developer for a long, long time, getting her professional start on a VAX 11/780, way back when. She spent a lot of her career (so far) serving the technology needs of libraries, and has been a contributor since 2008 to the Koha open source library automation suite. Ruth is currently a Perl developer at cPanel in Houston, and also serves as chief of staff for an obnoxious cat. In her copious free time, she occasionally reviews old romance... [more about Ruth Holloway][7] [More about me][8]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework

作者:[Ruth Holloway][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/druthb
[1]:https://opensource.com/tags/python?src=programming_resource_menu1
[2]:https://opensource.com/tags/javascript?src=programming_resource_menu2
[3]:https://opensource.com/tags/perl?src=programming_resource_menu3
[4]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ&src=programming_resource_menu4
[5]:http://perldoc.perl.org/functions/package.html
[6]:https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework?rate=ICN35y076ElpInDKoMqp-sN6f4UVF-n2Qt6dL6lb3kM
[7]:https://opensource.com/users/druthb
[8]:https://opensource.com/users/druthb
[9]:https://opensource.com/user/36051/feed
[10]:https://www.flickr.com/photos/youasamachine/8025582590/in/photolist-decd6C-7pkccp-aBfN9m-8NEffu-3JDbWb-aqf5Tx-7Z9MTZ-rnYTRu-3MeuPx-3yYwA9-6bSLvd-irmvxW-5Asr4h-hdkfCA-gkjaSQ-azcgct-gdV5i4-8yWxCA-9G1qDn-5tousu-71V8U2-73D4PA-iWcrTB-dDrya8-7GPuxe-5pNb1C-qmnLwy-oTxwDW-3bFhjL-f5Zn5u-8Fjrua-bxcdE4-ddug5N-d78G4W-gsYrFA-ocrBbw-pbJJ5d-682rVJ-7q8CbF-7n7gDU-pdfgkJ-92QMx2-aAmM2y-9bAGK1-dcakkn-8rfyTz-aKuYvX-hqWSNP-9FKMkg-dyRPkY
[11]:https://creativecommons.org/licenses/by/2.0/
[12]:https://shadow.cat/resources/bios/matt_short/
[13]:https://shadow.cat/
[14]:https://metacpan.org/pod/Catalyst
[15]:https://metacpan.org/pod/DBIx::Class
[16]:https://metacpan.org/pod/Moo
[17]:http://perldancer.org/
[18]:http://www.catalystframework.org/
[19]:https://opensource.com/users/druthb
[20]:https://opensource.com/users/druthb
[21]:https://opensource.com/users/druthb
[22]:https://opensource.com/article/18/1/introduction-websimple-perl-module-minimalist-web-framework#comments
[23]:https://metacpan.org/pod/Web::Simple
[24]:https://opensource.com/tags/perl
[25]:https://opensource.com/tags/programming
[26]:https://opensource.com/tags/perl-column
[27]:https://opensource.com/tags/web-development