Mirror of https://github.com/LCTT/TranslateProject.git (synced 2024-12-26 21:30:55 +08:00, commit df3c357f61)
@ -0,0 +1,348 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10716-1.html)
[#]: subject: (How To Understand And Identify File types in Linux)
[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to Understand and Identify File Types in Linux
======

As we all know, everything in Linux is a file, including hard disks, graphics cards, and so on. When you navigate a Linux system, most of what you see are regular files and directories. But there are other types as well, covering five further roles, so understanding the file types in Linux is important in many respects.

If you don't believe that, just read through this article and you will see how important it is. If you don't understand the file types, you cannot make changes to them with confidence.

A wrong change can damage your file system, so be careful when you work with these files. Files matter a great deal in a Linux system, because all devices and daemons are represented as files.

### How many types of file are there in Linux?

As far as I know, there are seven types of file in Linux, in three broad categories:

* Regular files
* Directory files
* Special files (this category contains five file types)
  * Link files
  * Character device files
  * Socket files
  * Named pipe files
  * Block device files

Refer to the table below for a better understanding of the file types in Linux.
| Symbol | Meaning |
| ------ | --------------------------------------------------------- |
| `-` | Regular file. Starts with a dash `-` in the long listing. |
| `d` | Directory file. Starts with the letter `d` in the long listing. |
| `l` | Link file. Starts with the letter `l` in the long listing. |
| `c` | Character device file. Starts with the letter `c` in the long listing. |
| `s` | Socket file. Starts with the letter `s` in the long listing. |
| `p` | Named pipe file. Starts with the letter `p` in the long listing. |
| `b` | Block device file. Starts with the letter `b` in the long listing. |
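The table above maps directly onto `find`'s `-type` predicate. As a quick sketch (the directory is illustrative, counts will vary by system), you can tally how many entries of each type sit directly under a directory such as `/dev`:

```shell
#!/bin/sh
# count_types DIR: for each file-type letter understood by find's -type
# predicate, print how many entries of that type sit directly under DIR.
# f=regular, d=directory, l=symlink, c=char device, b=block device,
# s=socket, p=named pipe (FIFO).
count_types() {
    for t in f d l c b s p; do
        printf '%s %s\n' "$t" "$(find "$1" -maxdepth 1 -type "$t" | wc -l)"
    done
}

count_types /dev
```

On most systems `/dev` reports nonzero counts for `c` and `b`, matching the character device and block device rows of the table.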
### Method 1: Identify file types in Linux manually

If you know Linux well, you can identify file types easily with the help of the table above.

#### How to view regular files in Linux?

Use the command below to view regular files. Regular files can appear anywhere in the Linux file system. They are displayed in white.
```
# ls -la | grep ^-
-rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history
-rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout
-rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile
-rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc
-rw-r--r--. 1 root root 26 Dec 27 17:55 liks
-rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat
-rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip
-rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip
-rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt
-rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt
-rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt
```
#### How to view directory files in Linux?

Use the command below to view directory files. Directories can appear anywhere in the Linux file system. They are displayed in blue.
```
# ls -la | grep ^d
drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/
drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/
drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/
drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/
```
#### How to view link files in Linux?

Use the command below to view link files. Link files can appear anywhere in the Linux file system. There are two kinds of link file: soft links and hard links. Link files are displayed in light turquoise.
```
# ls -la | grep ^l
lrwxrwxrwx. 1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link
lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder
```
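As a minimal sketch of how the two kinds of link differ (paths are illustrative and created in a throwaway directory): only the soft link shows up with a leading `l`, while a hard link is just another directory entry for the same inode and lists as a regular file.

```shell
#!/bin/sh
cd "$(mktemp -d)"
touch target.txt
ln -s target.txt soft-link    # symbolic link: separate inode pointing by name
ln target.txt hard-link       # hard link: same inode, link count becomes 2
ls -la | grep ^l              # only soft-link appears here
stat --format='%h' target.txt # prints 2: target.txt and hard-link share an inode
```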
#### How to view character device files in Linux?

Use the command below to view character device files. They appear only in a specific location, the `/dev` directory, and are displayed in yellow.
```
# ls -la | grep ^c
crw-------. 1 root root 5, 1 Jan 28 14:05 console
crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency
crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash
crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0
crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full
crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse
```
#### How to view block device files in Linux?

Use the command below to view block device files. They appear only in a specific location, the `/dev` directory, and are displayed in yellow.
```
# ls -la | grep ^b
brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0
brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1
brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2
brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3
brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4
```
#### How to view socket files in Linux?

Use the command below to view socket files. Socket files can appear anywhere in the Linux file system. They are displayed in pink. (LCTT translator's note: the original article's description of where socket and named pipe files can appear was wrong here and below; it has been corrected.)
```
# ls -la | grep ^s
srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket
```
#### How to view named pipe files in Linux?

Use the command below to view named pipe files. Named pipe files can appear anywhere in the Linux file system. They are displayed in yellow.
```
# ls -la | grep ^p
prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo|
prw-------. 1 root root 0 Jan 28 14:06 stats-mail|
```
### Method 2: Identify file types in Linux with the file command

The `file` command lets us determine the type of any file. It applies three sets of tests, in this order: file system tests, magic tests, and language tests; the first test that succeeds reports the file type.

#### How to view a regular file in Linux with the file command?

Simply run the `file` command followed by the name of a regular file in your terminal. The `file` command reads the contents of the given file and reports exactly what type of file it is.

That is why we see different results for different regular files. See the varied results for regular files below.
```
# file 2daygeek_access.log
2daygeek_access.log: ASCII text, with very long lines

# file powertop.html
powertop.html: HTML document, ASCII text, with very long lines

# file 2g-test
2g-test: JSON data

# file powertop.txt
powertop.txt: HTML document, UTF-8 Unicode text, with very long lines

# file 2g-test-05-01-2019.tar.gz
2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560
```
#### How to view a directory in Linux with the file command?

Simply run the `file` command followed by a directory name in your terminal. See the results below.
```
# file Pictures/
Pictures/: directory
```
#### How to view a link file in Linux with the file command?

Simply run the `file` command followed by a link file name in your terminal. See the results below.
```
# file log
log: symbolic link to /run/systemd/journal/dev-log
```
#### How to view a character device file in Linux with the file command?

Simply run the `file` command followed by a character device file name in your terminal. See the results below.
```
# file vcsu
vcsu: character special (7/64)
```
#### How to view a block device file in Linux with the file command?

Simply run the `file` command followed by a block device file name in your terminal. See the results below.
```
# file sda1
sda1: block special (8/1)
```
#### How to view a socket file in Linux with the file command?

Simply run the `file` command followed by a socket file name in your terminal. See the results below.
```
# file system_bus_socket
system_bus_socket: socket
```
#### How to view a named pipe file in Linux with the file command?

Simply run the `file` command followed by a named pipe file name in your terminal. See the results below.
```
# file pipe-test
pipe-test: fifo (named pipe)
```
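Two `file` options are worth knowing alongside the examples above (the sample file here is created on the spot, not taken from the article): `-b` omits the leading file name, and `-i` reports a MIME type instead of a description:

```shell
#!/bin/sh
printf 'hello world\n' > /tmp/sample.txt

file /tmp/sample.txt      # /tmp/sample.txt: ASCII text
file -b /tmp/sample.txt   # same, without the name prefix
file -i /tmp/sample.txt   # reports a MIME type such as text/plain
```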
### Method 3: Identify file types in Linux with the stat command

The `stat` command shows the file type and file system status. It provides more information than the `file` command: it displays a file's size, block count, IO block size, inode number, link count, permissions, UID, GID, access/update/modification times, and more.

#### How to view a regular file in Linux with the stat command?

Simply run the `stat` command followed by the name of a regular file in your terminal. See the results below.
```
# stat 2daygeek_access.log
File: 2daygeek_access.log
Size: 14406929 Blocks: 28144 IO Block: 4096 regular file
Device: 10301h/66305d Inode: 1727555 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-03 14:05:26.430328867 +0530
Modify: 2019-01-03 14:05:26.460328868 +0530
Change: 2019-01-03 14:05:26.460328868 +0530
Birth: -
```
#### How to view a directory in Linux with the stat command?

Simply run the `stat` command followed by a directory name in your terminal. See the results below.
```
# stat Pictures/
File: Pictures/
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 10301h/66305d Inode: 1703982 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2018-11-24 03:22:11.090000828 +0530
Modify: 2019-01-05 18:27:01.546958817 +0530
Change: 2019-01-05 18:27:01.546958817 +0530
Birth: -
```
#### How to view a link file in Linux with the stat command?

Simply run the `stat` command followed by a link file name in your terminal. See the results below.
```
# stat /dev/log
File: /dev/log -> /run/systemd/journal/dev-log
Size: 28 Blocks: 0 IO Block: 4096 symbolic link
Device: 6h/6d Inode: 278 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.033333447 +0530
Modify: 2019-01-05 16:36:30.766666768 +0530
Change: 2019-01-05 16:36:30.766666768 +0530
Birth: -
```
#### How to view a character device file in Linux with the stat command?

Simply run the `stat` command followed by a character device file name in your terminal. See the results below.
```
# stat /dev/vcsu
File: /dev/vcsu
Size: 0 Blocks: 0 IO Block: 4096 character special file
Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40
Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2019-01-05 16:36:31.056666781 +0530
Modify: 2019-01-05 16:36:31.056666781 +0530
Change: 2019-01-05 16:36:31.056666781 +0530
Birth: -
```
#### How to view a block device file in Linux with the stat command?

Simply run the `stat` command followed by a block device file name in your terminal. See the results below.
```
# stat /dev/sda1
File: /dev/sda1
Size: 0 Blocks: 0 IO Block: 4096 block special file
Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1
Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk)
Access: 2019-01-05 16:36:31.596666806 +0530
Modify: 2019-01-05 16:36:31.596666806 +0530
Change: 2019-01-05 16:36:31.596666806 +0530
Birth: -
```
#### How to view a socket file in Linux with the stat command?

Simply run the `stat` command followed by a socket file name in your terminal. See the results below.
```
# stat /var/run/dbus/system_bus_socket
File: /var/run/dbus/system_bus_socket
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 15h/21d Inode: 576 Links: 1
Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.823333482 +0530
Modify: 2019-01-05 16:36:31.810000149 +0530
Change: 2019-01-05 16:36:31.810000149 +0530
Birth: -
```
#### How to view a named pipe file in Linux with the stat command?

Simply run the `stat` command followed by a named pipe file name in your terminal. See the results below.
```
# stat pipe-test
File: pipe-test
Size: 0 Blocks: 0 IO Block: 4096 fifo
Device: 10301h/66305d Inode: 1705583 Links: 1
Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-06 02:00:03.040394731 +0530
Modify: 2019-01-06 02:00:03.040394731 +0530
Change: 2019-01-06 02:00:03.040394731 +0530
Birth: -
```
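When you only need one field from all of that output, GNU `stat` accepts a `--format` string: `%F` is the file type, `%h` the hard-link count, `%s` the size in bytes, `%n` the file name. A small sketch (the FIFO is created just for the demonstration):

```shell
#!/bin/sh
dir=$(mktemp -d)
mkfifo "$dir/demo-fifo"

# Print only name and type for a directory and a named pipe.
stat --format='%n: %F' "$dir" "$dir/demo-fifo"
# prints "<dir>: directory" and "<dir>/demo-fifo: fifo"
```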
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/

Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translated by: [liujing97](https://github.com/liujing97)
Proofread by: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (Moelf)
[#]: reviewer: (acyanbird, wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10714-1.html)
[#]: subject: (A Look Back at the History of Firefox)
[#]: via: (https://itsfoss.com/history-of-firefox)
[#]: author: (John Paul https://itsfoss.com/author/john/)

@ -10,56 +10,57 @@
A Look Back at the History of Firefox
======

Firefox has long been a pillar of the open-source community. For years it was the default browser on almost every Linux distribution, and it was the last bastion holding back Microsoft's total domination of the browser market. Its roots reach all the way back to the dawn of the Web. This week (LCTT translator's note: this article was published on 2019.3.14) marks the 30th anniversary of the Web, so there is no better time to look back at how the browser we know and love came to be.

### Origins

In the early 1990s, a young man named [Marc Andreessen][1] was working on his bachelor's degree in computer science at the University of Illinois. While there, he started working for the [National Center for Supercomputing Applications (NCSA)][2]. During that time Sir [Tim Berners-Lee][3] released the early standards of the Web that we know today. Marc [was introduced][4] to a very primitive browser called [ViolaWWW][5]. Seeing that the technology had potential, Marc and Eric Bina created an easy-to-install browser for Unix named [NCSA Mosaic][6]. The first alpha was released in June 1993. By September, there were ports to Windows and Macintosh. Mosaic quickly became very popular because it was easier to use than any other browser of the time.

In 1994, Marc graduated and moved to California. There he met Jim Clark, who had made money selling computer hardware and software. Clark had used Mosaic and saw the commercial potential of the Internet. Clark founded a company and hired Marc and Eric to build Internet software. The company was initially named "Mosaic Communications", but the University of Illinois was unhappy with [their use of the name Mosaic][7], so the company was renamed "Netscape Communications".

The company's first project was an online gaming network for the Nintendo 64, but that didn't pan out. The first product released under the company name was a browser called Mosaic Netscape 0.9, soon renamed Netscape Navigator. Internally, the browser's development codename was mozilla, meaning "Mosaic killer". An employee even created a [Godzilla-style][8] cartoon of the mascot. They wanted to crush the competition completely.

![Early Firefox Mascot][9]

*Early Mozilla mascot at Netscape*

And crush it they did. Netscape's biggest advantage at the time was that its browser looked and worked the same on every operating system. Netscape marketed this as giving everyone an equal Internet experience.

As more and more people used Netscape Navigator, NCSA Mosaic's market share dwindled. In 1995, Netscape went public. [On the first day of trading][10], the stock opened at $28, leapt to $78, and closed at $58. Netscape was unstoppable.

But the good times did not last. In the summer of 1995, Microsoft released Internet Explorer 1.0, a browser based on Spyglass Mosaic, which was itself based directly on NCSA Mosaic. The [browser wars][11] had begun.

Over the next few years, Netscape and Microsoft fought for browser dominance, each adding features to outdo the other. Unfortunately, IE had the huge advantage of being bundled with Windows. On top of that, Microsoft had more programmers and more money to throw at the fight. By the end of 1997, Netscape was running into financial trouble.

### Going open source

![Mozilla Firefox][12]

In January 1998, Netscape open-sourced the code of the Netscape Communicator 4.0 suite. The [intent][13] was to "harness the creative power of thousands of programmers on the Internet by incorporating their best enhancements into Netscape's software. This strategy is designed to accelerate development, and to let Netscape provide high-quality versions of Netscape Communicator free of charge to individuals and business customers in the future."

The project was managed by the newly created Mozilla Organization. However, the Netscape Communicator 4.0 code proved very difficult to work with, owing to its size and complexity. Making matters worse, several components could not be open-sourced because of third-party licensing issues. In the end, it was decided to rewrite the browser around the new [Gecko][14] rendering engine.

In November 1998, Netscape was acquired by AOL in a [stock swap valued at $4.2 billion][15].

Starting over from scratch was a monumental task. Mozilla Firefox (originally named Phoenix) did not appear until June 2002, and it likewise ran on multiple operating systems: Linux, Mac OS, Windows, and Solaris.

In 1999, AOL announced that it would stop browser development. The Mozilla Foundation was then created to manage the Mozilla trademarks and handle the financing of the project. Initially, the Mozilla Foundation received a total of $2 million in donations from AOL, IBM, Sun Microsystems, and Red Hat.

In March 2003, Mozilla [announced][16] plans to split the increasingly bloated suite into separate applications. The standalone browser was first called Phoenix. The name was changed to Firebird after a trademark dispute with the BIOS manufacturer Phoenix Technologies, which in turn set off a clash with the developers of the Firebird database. The browser had to be renamed once more, and so we got the Firefox we know today.

At the time, [Mozilla said][17], "We've learned a lot about choosing names in the past year (more than we would have liked). We have been very careful in researching the name this time to make sure there will be no surprises down the road. We have already started the process of registering our new trademark with the US Patent and Trademark Office."

![Mozilla Firefox 1.0][18]

*Firefox 1.0 : [Image credit][19]*

The first official Firefox release, [0.8][20], arrived on February 8, 2004. Version 1.0 followed on November 9 of that year. Versions 2.0 and 3.0 came in October 2006 and June 2008, respectively. Each major release brought many new features and improvements. In many respects Firefox was well ahead of IE in both features and technology, yet IE still held more users.

Everything changed when Google released Chrome. In the months before Chrome launched (September 2008), Firefox held 30% of the [browser market][21] and IE more than 60%. In StatCounter's report for [January 2019][22], Firefox held less than 10%, and Chrome more than 70%.

> Fun fact
>
> Contrary to what many people believe, the Firefox logo does not actually show a fox. It is a [red panda][23]. In Chinese, "fire fox" (火狐狸) is another name for the red panda.

### Looking forward

@ -67,12 +68,11 @@

That may seem like a stark contrast to Netscape's glory days. But let's not forget what Firefox has already achieved. A group of developers from around the world built the second most used browser on the planet. They clawed 30% of the market away from Microsoft's monopoly at its height, and they can do it again. Whatever happens, they have us. The open-source community stands firmly behind them.

Fighting monopoly is [one of the many reasons][26] I use Firefox. As Mozilla wins back market share with the redesigned [Firefox Quantum][27], I believe it will keep climbing.

What other events in Linux and open-source history would you like to read about? Let us know in the comments.

If you found this article interesting, please share it on social media, Hacker News, or [Reddit][28].

@ -80,8 +80,8 @@ via: https://itsfoss.com/history-of-firefox

Author: [John Paul][a]
Selected by: [lujun9972][b]
Translated by: [Moelf](https://github.com/Moelf)
Proofread by: [acyanbird](https://github.com/acyanbird), [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
@ -1,24 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10717-1.html)
[#]: subject: (Using Square Brackets in Bash: Part 1)
[#]: via: (https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)

Using [Square Brackets] in Bash: Part 1
======

![square brackets][1]

> This article introduces square brackets and their different uses on the command line.

Having looked at [curly braces on the command line][3], we now turn to how square brackets (`[]`) behave in different contexts.

### Globbing

The simplest use of square brackets is globbing. You were probably matching things with globs before you even knew the term: listing files that share a trait is a common case, for example listing all JPEG files:

```
ls *.jpg
```

@ -70,7 +70,7 @@
```
cp file0[12]? archive0[12]?
```

because globbing only expands to files that already exist, and no files beginning with `archive` exist yet.

But this command

@ -82,7 +82,6 @@ cp file0[12]? archive0[1..2][0..9]
```
mkdir archive

cp file0[12]? archive
```

@ -94,7 +93,6 @@ cp file0[12]? archive
```
myvar="Hello World"

echo Goodbye Cruel ${myvar#Hello}
```

@ -104,11 +102,8 @@ echo Goodbye Cruel ${myvar#Hello}
```
for i in file0[12]?;\
do\
cp $i archive${i#file};\
done
```
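A self-contained version of that loop (run in a throwaway directory with made-up file names) shows the `${i#file}` prefix stripping at work:

```shell
#!/bin/sh
cd "$(mktemp -d)"
touch file010 file025           # sample names matching the glob file0[12]?

for i in file0[12]?; do
    cp "$i" "archive${i#file}"  # ${i#file} strips the leading "file"
done

ls archive0*                    # lists archive010 and archive025
```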
@ -120,7 +115,7 @@
```
"archive" + "file019" - "file" = "archive019"
```

The whole `cp` command thus expands to:

```
cp file019 archive019
```

@ -137,7 +132,7 @@ via: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1

Author: [Paul Brown][a]
Selected by: [lujun9972][b]
Translated by: [HankChow](https://github.com/HankChow)
Proofread by: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

@ -145,5 +140,5 @@
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-gabriele-diwald-475007-unsplash.jpg?itok=cKmysLfd "square brackets"
[2]: https://www.linux.com/LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://linux.cn/article-10624-1.html
@ -1,18 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10715-1.html)
[#]: subject: (Setting kernel command line arguments with Fedora 30)
[#]: via: (https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/)
[#]: author: (Laura Abbott https://fedoramagazine.org/makes-fedora-kernel/)

How to set kernel command line arguments with Fedora 30
======

![][1]

Adding options to the kernel command line is a common task when debugging or experimenting with the kernel. The upcoming Fedora 30 release switched to the Bootloader Spec ([BLS][2]). Depending on how you modify kernel command line options, your workflow may change. Read on for more information.

To determine whether your system is using BLS or the older layout, look at the file:

@ -28,24 +28,19 @@ GRUB_ENABLE_BLSCFG=true

If you see this, you are running BLS, and you may need to change how you set kernel command line arguments.

If you only want to modify a single kernel entry (for example, to temporarily work around a display problem), you can use the `grubby` command:

```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --args="amdgpu.dc=0"
```

To remove a kernel argument, pass `--remove-args` to `grubby`:

```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --remove-args="amdgpu.dc=0"
```

If an option should be added to the command line of every kernel (for example, to disable use of the `rdrand` instruction for random number generation), you can run `grubby` like this:

```
$ grubby --update-kernel=ALL --args="nordrand"
```

@ -53,16 +48,7 @@ $ grubby --update-kernel=ALL --args="nordrand"

This updates the command line of all kernel entries and saves the option for future entries as well.

If you later want to remove the option from all kernels, you can again use `--remove-args` with `--update-kernel=ALL`:

```
$ grubby --update-kernel=ALL --remove-args="nordrand"
```
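Independently of how the options were set, you can always check which command line the running kernel actually booted with by reading `/proc/cmdline` (the `nordrand` check below is just an example option):

```shell
#!/bin/sh
# The kernel exposes the command line it booted with at /proc/cmdline.
cat /proc/cmdline

if grep -qw nordrand /proc/cmdline; then
    echo "nordrand is active in the running kernel"
else
    echo "nordrand is not set"
fi
```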
@ -75,7 +61,7 @@ via: https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedor

Author: [Laura Abbott][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
@ -0,0 +1,69 @@
|
|||||||
|
[#]: collector: (lujun9972)
|
||||||
|
[#]: translator: ( )
|
||||||
|
[#]: reviewer: ( )
|
||||||
|
[#]: publisher: ( )
|
||||||
|
[#]: url: ( )
|
||||||
|
[#]: subject: (Anti-lasers could give us perfect antennas, greater data capacity)
|
||||||
|
[#]: via: (https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all)
|
||||||
|
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Anti-lasers could give us perfect antennas, greater data capacity
======

Anti-lasers get close to providing a 100% efficient signal channel for data, say engineers.

![Guirong Hao / Valery Brozhinsky / Getty Images][1]

Playing laser light backwards could adjust data transmission signals so that they perfectly match receiving antennas. Fine-tuning signals to this level of detail has not been achieved before, and it could create more capacity for ever-increasing data demand.

"Imagine, for example, that you could adjust a cell phone signal exactly the right way, so that it is perfectly absorbed by the antenna in your phone," says Stefan Rotter of the Institute for Theoretical Physics of Technische Universität Wien (TU Wien) in a [press release][2].

Rotter is talking about "Random Anti-Laser," a project he has been a part of. The idea behind it is that if one could time-reverse a laser, then the laser (right now considered the best light source ever built) becomes the best available light absorber. Perfect absorption of a signal wave would mean that all of the data-carrying energy is absorbed by the receiving device, making it 100% efficient.

**[ Related: [What is 5G wireless? How it will change networking as we know it?][3] ]**

"The easiest way to think about this process is in terms of a movie showing a conventional laser sending out laser light, which is played backwards," the TU Wien article says. The anti-laser is the exact opposite of the laser — instead of sending specific colors perfectly when energy is applied, it receives specific colors perfectly.

Counter-intuitively, it's the random scattering of light in all directions that's behind the engineering. The Vienna, Austria, university group performs precise calculations on those scattered, split signals, and that lets the researchers harness the light.

### How the anti-laser technology works

The microwave-based, experimental device the researchers have built in the lab to prove the idea doesn't just potentially apply to cell phones; wireless internet of things (IoT) devices would also get more data throughput. How it works: the device consists of an antenna-containing chamber encompassed by cylinders, all arranged haphazardly, the researchers explain. The cylinders distribute an elaborate, arbitrary wave pattern "similar to [throwing] stones in a puddle of water, at which water waves are deflected."

Measurements then take place to identify exactly how the signals return. The team involved, which also includes collaborators from the University of Nice, France, then "characterize the random structure and calculate the wave front that is completely swallowed by the central antenna at the right absorption strength." Ninety-nine point eight percent of the signal is absorbed, making it virtually perfect. Data throughput, range, and other variables thus improve.

**[ [Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]**

Achieving perfect antennas has been pretty much only theoretically possible for engineers to date. Reflected energy (RF bounced back into the transmitter by antenna inefficiencies) has always been an issue, and reflections from surfaces have always been a problem too.

"Think about a mobile phone signal that is reflected several times before it reaches your cell phone," Rotter says. It's not easy to get the tuning right — as the antennas' physical locations move, the reflecting surfaces change.

### Scattering lasers

Scattering, similar to that used in this project, is becoming more important in communications overall. "Waves that are being scattered in a complex way are really all around us," the group says.

An example is random lasers (on which the group's anti-laser is based), which unlike traditional lasers do not use reflective surfaces but trap scattered light and then "emit a very complicated, system-specific laser field when supplied with energy." The anti-random-laser developed by Rotter and his group simply reverses that in time:

"Instead of a light source that emits a specific wave depending on its random inner structure, it is also possible to build the perfect absorber." The anti-random-laser.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/data_cubes_transformation_conversion_by_guirong_hao_gettyimages-1062387214_plus_abstract_binary_by_valerybrozhinsky_gettyimages-865457032_3x2_2400x1600-100790211-large.jpg
[2]: https://www.tuwien.ac.at/en/news/news_detail/article/126574/
[3]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,60 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google partners with Intel, HPE and Lenovo for hybrid cloud)
[#]: via: (https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Google partners with Intel, HPE and Lenovo for hybrid cloud
======

Google boosted its on-premises and cloud connections with Kubernetes and serverless computing.

![Ilze Lucero \(CC0\)][1]

Still struggling to get its Google Cloud business out of single-digit market share, Google this week introduced new partnerships with Lenovo and Intel to help bolster its hybrid cloud offerings, both built on Google's Kubernetes container technology.

At Google's Next '19 show this week, Intel and Google said they will collaborate on Google's Anthos, a new reference design based on the second-generation Xeon Scalable processor introduced last week and an optimized Kubernetes software stack designed to deliver increased workload portability between public and private cloud environments.

**[ Read also: [What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

As part of the Anthos announcement, Hewlett Packard Enterprise (HPE) said it has validated Anthos on its ProLiant servers, while Lenovo has done the same for its ThinkAgile platform. This solution will enable customers to get a consistent Kubernetes experience between Google Cloud and their on-premises HPE or Lenovo servers. No official word from Dell yet, but it can't be far behind.

Users will be able to manage their Kubernetes clusters and enforce policy consistently across environments – either in the public cloud or on-premises. In addition, Anthos delivers a fully integrated stack of hardened components, including OS and container runtimes that are tested and validated by Google, so customers can upgrade their clusters with confidence and minimize downtime.

### What is Google Anthos?

Google formally introduced [Anthos][4] at this year's show. Anthos, formerly Cloud Services Platform, is meant to allow users to run their containerized applications without spending time on building, managing, and operating Kubernetes clusters. It runs both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in your data center with GKE On-Prem. Anthos will also let you manage workloads running on third-party clouds such as Amazon Web Services (AWS) and Microsoft Azure.

Google also announced the beta release of Anthos Migrate, which auto-migrates virtual machines (VMs) from on-premises or other clouds directly into containers in GKE with minimal effort. This allows enterprises to migrate their infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications.

Intel said it will publish the production design as an Intel Select Solution, as well as a developer platform, making it available to anyone who wants it.

### Serverless environments

Google isn't stopping with Kubernetes containers; it's also pushing ahead with serverless environments. [Cloud Run][5] is Google's implementation of serverless computing, which is something of a misnomer. You still run your apps on servers; you just aren't using a dedicated physical server. It is stateless, so resources are not allocated until you actually run or use the application.

Cloud Run is a fully serverless offering that takes care of all infrastructure management, including the provisioning, configuring, scaling, and managing of servers. It automatically scales up or down within seconds, even down to zero depending on traffic, ensuring you pay only for the resources you actually use. Cloud Run can be used on GKE, offering the option to run side by side with other workloads deployed in the same cluster.

Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/03/cubes_blocks_squares_containers_ilze_lucero_cc0_via_unsplash_1200x800-100752172-large.jpg
[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://cloud.google.com/blog/topics/hybrid-cloud/new-platform-for-managing-applications-in-todays-multi-cloud-world
[5]: https://cloud.google.com/blog/products/serverless/announcing-cloud-run-the-newest-member-of-our-serverless-compute-stack
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world
@ -0,0 +1,60 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE and Nutanix partner for hyperconverged private cloud systems)
[#]: via: (https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

HPE and Nutanix partner for hyperconverged private cloud systems
======

Both companies will sell HPE ProLiant appliances with Nutanix software, but to different markets.

![Hewlett Packard Enterprise][1]

Hewlett Packard Enterprise (HPE) has partnered with Nutanix to make Nutanix's hyperconverged infrastructure (HCI) software available as a managed private cloud service and on HPE-branded appliances.

As part of the deal, the two companies will be competing against each other in hardware sales, sort of. If you want the consumption model you get through HPE's GreenLake, where your usage is metered and you pay for only the time you use it (similar to the cloud), then you would get the ProLiant hardware from HPE.

If you want an appliance model where you buy the hardware outright, as in the traditional sense of server sales, you would get the same ProLiant through Nutanix.

**[ Read also: [What is hybrid cloud computing?][2] and [Multicloud mania: what to know][3] ]**

As it is, HPE GreenLake offers multiple cloud offerings to customers, including virtualization courtesy of VMware and Microsoft. With the Nutanix partnership, HPE is adding Nutanix's free Acropolis hypervisor to its offerings.

"Customers get to choose an alternative to VMware with this," said Pradeep Kumar, senior vice president and general manager of HPE's Pointnext consultancy. "They like the Acropolis license model, since it's license-free. Then they have choice points so pricing is competitive. Some like VMware, and I think it's our job to offer them both and they can pick and choose."

Kumar added that the whole Nutanix stack costs 15 to 18 percent less with Acropolis than a VMware-powered system, since customers save on the hypervisor.

The HPE-Nutanix partnership offers a fully managed hybrid cloud infrastructure delivered as a service and deployed in customers' data centers or co-location facilities. The managed private cloud service gives enterprises a hyperconverged environment in-house without having to manage the infrastructure themselves and, more importantly, without the burden of ownership. GreenLake operates more like a lease than ownership.

### HPE GreenLake's private cloud services promise to significantly reduce costs

HPE is pushing hard on GreenLake, which basically mimics cloud platform pricing models of paying for what you use rather than outright ownership. Kumar said HPE projects the consumption model will account for 30% of HPE's business in the next few years.

GreenLake makes some hefty promises. According to Nutanix-commissioned IDC research, customers will achieve a 60% reduction in the five-year cost of operations, while an HPE-commissioned Forrester report found customers benefit from a 30% capex savings due to the eliminated need for overprovisioning, and a 90% reduction in support and professional services costs.

By shifting to an IT-as-a-Service model, HPE claims to provide a 40% increase in productivity by reducing the support load on IT operations staff and to shorten the time to deploy IT projects by 65%.

The two new offerings from the partnership – HPE GreenLake's private cloud service running Nutanix software and the HPE-branded appliances integrated with Nutanix software – are expected to be available during the third quarter of 2019, the companies said.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -1,3 +1,5 @@
translating by MjSeven

Getting started with Sensu monitoring
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)
@ -1,159 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

7 Methods To Identify Disk Partition/FileSystem UUID On Linux
======

As a Linux administrator, you should know how to check a partition UUID or filesystem UUID.

Most Linux systems mount partitions by UUID, as you can verify in the `/etc/fstab` file.

Many utilities are available to check a UUID. In this article we will show you several ways to do it, and you can choose the one that suits you.

### What Is UUID?

UUID stands for Universally Unique Identifier, which helps a Linux system identify a hard drive's partition by something more stable than its block device file name.

libuuid has been part of the util-linux-ng package since version 2.15.1 and is installed by default on Linux systems.

The UUIDs generated by this library can be reasonably expected to be unique within a system, and unique across all systems.

A UUID is a 128-bit number used to identify information in computer systems. UUIDs were originally used in the Apollo Network Computing System (NCS), and were later standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE).

UUIDs are represented as 32 hexadecimal (base 16) digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12, for a total of 36 characters (32 hexadecimal characters and four hyphens).

For example: d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
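
The 8-4-4-4-12 layout described above can be checked programmatically. Below is a small illustrative sketch using Python's standard `uuid` and `re` modules (the helper name `looks_like_uuid` is ours, not something from the tools discussed in this article):

```python
import re
import uuid

# Canonical form: 32 lowercase hex digits in five hyphen-separated
# groups of 8, 4, 4, 4 and 12 characters (36 characters total).
UUID_PATTERN = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def looks_like_uuid(value):
    """Return True if value matches the canonical 36-character form."""
    return bool(UUID_PATTERN.match(value.lower()))

# uuid4() generates a random UUID in the same canonical form.
generated = str(uuid.uuid4())
```

Any UUID printed by the commands covered in this article should pass this check.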

Here is a sample of my `/etc/fstab` file:

```
# cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1
UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
```

We can check the UUID using the following seven methods.

* **`blkid` command:** locate/print block device attributes.
* **`lsblk` command:** list information about all available or the specified block devices.
* **`hwinfo` command:** the hardware information tool, another great utility used to probe the hardware present in the system.
* **`udevadm` command:** udev management tool.
* **`tune2fs` command:** adjust tunable filesystem parameters on ext2/ext3/ext4 filesystems.
* **`dumpe2fs` command:** dump ext2/ext3/ext4 filesystem information.
* **Using the `by-uuid` path:** the `/dev/disk/by-uuid` directory contains the UUIDs as symlinks to the real block device files.

### How To Check Disk Partition/FileSystem UUID In Linux Using the blkid Command?

blkid is a command-line utility to locate/print block device attributes. It uses the libblkid library to get the disk partition UUID on a Linux system.

```
# blkid
/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
/dev/sdc5: PARTUUID="8cc8f9e5-05"
```
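
If you want these attributes in a script rather than on screen, the `KEY="value"` style of output shown above is straightforward to parse. The following is an illustrative Python sketch (the `parse_blkid` helper and the embedded sample text are ours, not part of blkid itself):

```python
import re

def parse_blkid(output):
    """Map each device path to a dict of its KEY="value" attributes."""
    devices = {}
    for line in output.strip().splitlines():
        # Each line looks like: /dev/sda1: UUID="..." TYPE="..."
        device, _, attrs = line.partition(":")
        devices[device.strip()] = dict(re.findall(r'(\w+)="([^"]*)"', attrs))
    return devices

sample = (
    '/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" '
    'TYPE="ext4" PARTUUID="eab59449-01"\n'
    '/dev/sdc5: PARTUUID="8cc8f9e5-05"'
)
parsed = parse_blkid(sample)
```

Note how the second sample line mirrors the output above: a partition without a filesystem may carry only a PARTUUID and no UUID at all.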

### How To Check Disk Partition/FileSystem UUID In Linux Using the lsblk Command?

lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and the udev db to gather information.

If the udev db is not available, or lsblk is compiled without udev support, then it tries to read LABELs, UUIDs and filesystem types from the block device. In this case root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default.

```
# lsblk -o name,mountpoint,size,uuid
NAME MOUNTPOINT SIZE UUID
sda 30G
└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
sdb 10G
sdc 10G
├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
├─sdc4 1K
└─sdc5 1G
sdd 10G
sde 10G
sr0 1024M
```

### How To Check Disk Partition/FileSystem UUID In Linux Using the by-uuid Path?

The `/dev/disk/by-uuid` directory contains the UUIDs as symlinks to the real block device files.

```
# ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
```
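
Each symlink target such as `../../sdc3` is relative to the `by-uuid` directory, so recovering the plain device path just means taking the base name. A small illustrative sketch (the `symlink_to_device` helper is ours, introduced only for this example):

```python
import os

def symlink_to_device(target):
    """Convert a by-uuid symlink target such as '../../sdc3' to '/dev/sdc3'."""
    return "/dev/" + os.path.basename(target)

# On a live system you could build a UUID-to-device map like this:
# uuid_map = {name: symlink_to_device(os.readlink(f"/dev/disk/by-uuid/{name}"))
#             for name in os.listdir("/dev/disk/by-uuid")}
```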

### How To Check Disk Partition/FileSystem UUID In Linux Using the hwinfo Command?

**[hwinfo][1]**, the hardware information tool, is another great utility that probes the hardware present in the system and displays detailed information about the various hardware components in human-readable format.

```
# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63
/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
```

### How To Check Disk Partition/FileSystem UUID In Linux Using the udevadm Command?

udevadm expects a command and command-specific options. It controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms.

```
# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
```

### How To Check Disk Partition/FileSystem UUID In Linux Using the tune2fs Command?

tune2fs allows the system administrator to adjust various tunable filesystem parameters on Linux ext2, ext3, or ext4 filesystems. The current values of these options can be displayed by using the -l option.

```
# tune2fs -l /dev/sdc1 | grep UUID
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```

### How To Check Disk Partition/FileSystem UUID In Linux Using the dumpe2fs Command?

dumpe2fs prints the superblock and block group information for the filesystem present on the device.

```
# dumpe2fs /dev/sdc1 | grep UUID
dumpe2fs 1.43.5 (04-Aug-2017)
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
@ -1,74 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 14.04 is Reaching the End of Life. Here are Your Options)
[#]: via: (https://itsfoss.com/ubuntu-14-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Ubuntu 14.04 is Reaching the End of Life. Here are Your Options
======

Ubuntu 14.04 is reaching its end of life on April 30, 2019. This means there will be no security and maintenance updates for Ubuntu 14.04 users beyond this date.

You won't even get updates for installed applications, and you won't be able to install a new application using the apt command or the Software Center without manually modifying the sources.list.

Ubuntu 14.04 was released almost five years ago. That's the life of a long term support release of Ubuntu.

[Check your Ubuntu version][1] and see if you are still using Ubuntu 14.04. If that's the case, either on desktops or on servers, you might be wondering what you should do in such a situation.

Let me help you out there and tell you what options you have in this case.

![][2]

### Upgrade to Ubuntu 16.04 LTS (easiest of them all)

If you have a good internet connection, you can upgrade to Ubuntu 16.04 LTS from within Ubuntu 14.04.

Ubuntu 16.04 is also a long term support release, and it will be supported until April 2021, which means you'll have two years before another upgrade.

I recommend reading this tutorial about [upgrading your Ubuntu version][3]. It was originally written for upgrading Ubuntu 16.04 to Ubuntu 18.04, but the steps are applicable in your case as well.

### Make a backup, do a fresh install of Ubuntu 18.04 LTS (ideal for desktop users)

The other option is to make a backup of your Documents, Music, Pictures, Downloads and any other folder where you have kept essential data that you cannot afford to lose.

When I say backup, it simply means copying these folders to an external USB disk. In other words, you should have a way to copy the data back to your computer, because you'll be formatting your system.

I would recommend this option for desktop users. Ubuntu 18.04 is the current long term support release, and it will be supported until at least April 2023. You have four long years before you are forced into another upgrade.

### Pay for extended security maintenance and continue using Ubuntu 14.04

This is suited for enterprise/corporate clients. Canonical, the parent company of Ubuntu, provides the Ubuntu Advantage program, where customers can pay for phone/email based support among other benefits.

Ubuntu Advantage program users also have the [Extended Security Maintenance][4] (ESM) feature. This program provides security updates even after a given version reaches its end of life.

This comes at a cost: $225 per year per physical node for server users. For desktop users, the price is $150 per year. You can read the detailed pricing of the Ubuntu Advantage program [here][5].

### Still using Ubuntu 14.04?

If you are still using Ubuntu 14.04, you should start exploring your options, as you have less than two months to go.

In any case, you must not use Ubuntu 14.04 after 30 April 2019, because your system will be vulnerable due to the lack of security updates. Not being able to install new applications will be an additional major pain.

So, which option do you choose here? Upgrading to Ubuntu 16.04 or 18.04, or paying for ESM?

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-14-04-end-of-life/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/ubuntu-14-04-end-of-life-featured.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/upgrade-ubuntu-version/
[4]: https://www.ubuntu.com/esm
[5]: https://www.ubuntu.com/support/plans-and-pricing
@ -0,0 +1,54 @@
|
|||||||
|
[#]: collector: (lujun9972)
|
||||||
|
[#]: translator: (geekpi)
|
||||||
|
[#]: reviewer: ( )
|
||||||
|
[#]: publisher: ( )
|
||||||
|
[#]: url: ( )
|
||||||
|
[#]: subject: (How to contribute to the Raspberry Pi community)
|
||||||
|
[#]: via: (https://opensource.com/article/19/3/contribute-raspberry-pi-community)
|
||||||
|
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva/users/kepler22b/users/ansilva)
|
||||||
|
|
||||||
|
How to contribute to the Raspberry Pi community
|
||||||
|
======
|
||||||
|
Find ways to get involved in the Raspberry Pi community in the 13th
|
||||||
|
article in our getting-started series.
|
||||||
|
![][1]
|
||||||
|
|
||||||
|
Things are starting to wind down in this series, and as much fun as I've had writing it, mostly I hope it has helped someone out there use start using a Raspberry Pi for education or entertainment. Maybe the articles convinced you to buy your first Raspberry Pi or perhaps helped you rediscover the device that was collecting dust in a drawer. If any of that is true, I'll consider the series a success.
|
||||||
|
|
||||||
|
If you now want to pay it forward and help spread the word on how versatile this little green digital board is, here are a few ways you can get connected to the Raspberry Pi community:
|
||||||
|
|
||||||
|
* Contribute to improving the [official documentation][2]
|
||||||
|
* Contribute code to [projects][3] the Raspberry Pi depends on
|
||||||
|
* File [bugs][4] with Raspbian
|
||||||
|
* File bugs with the different ARM architecture platform distributions
|
||||||
|
* Help kids learn to code by taking a look at the Raspberry Pi Foundation's [Code Club][5] in the UK or [Code Club International][6] outside the UK
|
||||||
|
* Help with [translation][7]
|
||||||
|
* Volunteer on a [Raspberry Jam][8]
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
These are just a few of the ways you can contribute to the Raspberry Pi community. Last but not least, you can join me and [contribute articles][9] to your favorite open source website, [Opensource.com][10]. :-)

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/contribute-raspberry-pi-community

作者:[Anderson Silva (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilva/users/kepler22b/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_community.jpg?itok=dcKwb5et
[2]: https://www.raspberrypi.org/documentation/CONTRIBUTING.md
[3]: https://www.raspberrypi.org/github/
[4]: https://www.raspbian.org/RaspbianBugs
[5]: https://www.codeclub.org.uk/
[6]: https://www.codeclubworld.org/
[7]: https://www.raspberrypi.org/translate/
[8]: https://www.raspberrypi.org/jam/
[9]: https://opensource.com/participate
[10]: http://Opensource.com

@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (HankChow)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )

@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (tomjlw)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )

@ -53,7 +53,7 @@ via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multic
 
 作者:[Michael Cooney][a]
 选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[tomjlw](https://github.com/tomjlw)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,136 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to enable serverless computing in Kubernetes)
[#]: via: (https://opensource.com/article/19/4/enabling-serverless-kubernetes)
[#]: author: (Daniel Oh (Red Hat, Community Moderator) https://opensource.com/users/daniel-oh/users/daniel-oh)

How to enable serverless computing in Kubernetes
======

Knative is a faster, easier way to develop serverless applications on Kubernetes platforms.

![Kubernetes][1]

In the first two articles in this series about using serverless on an open source platform, I described [how to get started with serverless platforms][2] and [how to write functions][3] in popular languages and build components using containers on Apache OpenWhisk.

Here in the third article, I'll walk you through enabling serverless in your [Kubernetes][4] environment. Kubernetes is the most popular platform to manage serverless workloads and microservice application containers and uses a finely grained deployment model to process workloads more quickly and easily.

Keep in mind that serverless not only helps you reduce infrastructure management while utilizing a consumption model for actual service use but also provides many of the capabilities that the cloud platform offers. There are many serverless or FaaS (Function as a Service) platforms, but Kubernetes is the first-class citizen for building a serverless platform because there are more than [13 serverless or FaaS open source projects][5] based on Kubernetes.

However, Kubernetes won't allow you to build, serve, and manage app containers for your serverless workloads in a native way. For example, if you want to build a [CI/CD pipeline][6] on Kubernetes to build, test, and deploy cloud-native apps from source code, you need to use your own release management tool and integrate it with Kubernetes.

Likewise, it's difficult to use Kubernetes in combination with serverless computing unless you use an independent serverless or FaaS platform built on Kubernetes, such as [Apache OpenWhisk][7], [Riff][8], or [Kubeless][9]. More importantly, it is still difficult for developers to learn how the Kubernetes environment deals with serverless workloads from cloud-native apps.

### Knative

[Knative][10] was born for developers to create serverless experiences natively without depending on extra serverless or FaaS frameworks and many custom tools. Knative has three primary components—[Build][11], [Serving][12], and [Eventing][13]—for addressing common patterns and best practices for developing serverless applications on Kubernetes platforms.

To learn more, let's go through the usual development process for using Knative to increase productivity and solve Kubernetes' difficulties from the developer's point of view.

**Step 1:** Generate your cloud-native application from scratch using [Spring Initializr][14] or [Thorntail Project Generator][15]. Begin implementing your business logic using the [12-factor app methodology][16], and run some assembly tests to check that the function works correctly in your local testing tools.

![Spring Initializr screenshot][17] | ![Thorntail Project Generator screenshot][18]
---|---

**Step 2:** Build container images from your source code repositories via the Knative Build component. You can define multiple steps, such as installing dependencies, running integration testing, and pushing container images to your secured image registry for using existing Kubernetes primitives. More importantly, Knative Build makes developers' daily work easier and simpler—"boring but difficult." Here's an example of the Build YAML:

```
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: docker-build
spec:
  serviceAccountName: build-bot
  source:
    git:
      revision: master
      url: https://github.com/redhat-developer-demos/knative-tutorial-event-greeter.git
  steps:
  - args:
    - --context=/workspace/java/springboot
    - --dockerfile=/workspace/java/springboot/Dockerfile
    - --destination=docker.io/demo/event-greeter:0.0.1
    env:
    - name: DOCKER_CONFIG
      value: /builder/home/.docker
    image: gcr.io/kaniko-project/executor
    name: docker-push
```


**Step 3:** Deploy and serve your container applications as serverless workloads via the Knative Serving component. This step shows the beauty of Knative in terms of automatically scaling up your serverless containers on Kubernetes then scaling them down to zero if there is no request to the containers for a specific period (e.g., two minutes). More importantly, [Istio][19] will automatically address ingress and egress networking traffic of serverless workloads in multiple, secure ways. Here's an example of the Serving YAML:

```
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: dev.local/rhdevelopers/greeter:0.0.1
```


**Step 4:** Bind running serverless containers to a variety of eventing platforms, such as SaaS, FaaS, and Kubernetes, via Knative's Eventing component. In this step, you can define event channels and subscriptions, which are delivered to your services via a messaging platform such as [Apache Kafka][20] or [NATS streaming][21]. Here's an example of the Event sourcing YAML:

```
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob-source
spec:
  schedule: "* * * * *"
  data: '{"message": "Event sourcing!!!!"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: ch-event-greeter
```


### Conclusion

Developing with Knative will save a lot of time in building serverless applications in the Kubernetes environment. It can also make developers' jobs easier by focusing on developing serverless applications, functions, or cloud-native containers.

* * *

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/enabling-serverless-kubernetes

作者:[Daniel Oh (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Kubernetes)
[2]: https://opensource.com/article/18/11/open-source-serverless-platforms
[3]: https://opensource.com/article/18/11/developing-functions-service-apache-openwhisk
[4]: https://kubernetes.io/
[5]: https://landscape.cncf.io/format=serverless
[6]: https://opensource.com/article/18/8/what-cicd
[7]: https://openwhisk.apache.org/
[8]: https://projectriff.io/
[9]: https://kubeless.io/
[10]: https://cloud.google.com/knative/
[11]: https://github.com/knative/build
[12]: https://github.com/knative/serving
[13]: https://github.com/knative/eventing
[14]: https://start.spring.io/
[15]: https://thorntail.io/generator/
[16]: https://12factor.net/
[17]: https://opensource.com/sites/default/files/uploads/spring_300.png (Spring Initializr screenshot)
[18]: https://opensource.com/sites/default/files/uploads/springboot_300.png (Thorntail Project Generator screenshot)
[19]: https://istio.io/
[20]: https://kafka.apache.org/
[21]: https://nats.io/

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How we built a Linux desktop app with Electron)
[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron)
[#]: author: (Nils Ganther https://opensource.com/users/nils-ganther)

How we built a Linux desktop app with Electron
======

A story of building an open source email service that runs natively on Linux desktops, thanks to the Electron framework.

![document sending][1]

[Tutanota][2] is a secure, open source email service that's been available as an app for the browser, iOS, and Android. The client code is published under GPLv3 and the Android app is available on [F-Droid][3] to enable everyone to use a completely Google-free version.

Because Tutanota focuses on open source and develops on Linux clients, we wanted to release a desktop app for Linux and other platforms. Being a small team, we quickly ruled out building native apps for Linux, Windows, and MacOS and decided to adapt our app using [Electron][4].

Electron is the go-to choice for anyone who wants to ship visually consistent, cross-platform applications, fast—especially if there's already a web app that needs to be freed from the shackles of the browser API. Tutanota is exactly such a case.

Tutanota is based on [SystemJS][5] and [Mithril][6] and aims to offer simple, secure email communications for everybody. As such, it has to provide a lot of the standard features users expect from any email client.

Some of these features, like basic push notifications, search for text and contacts, and support for two-factor authentication are easy to offer in the browser thanks to modern APIs and standards. Other features (such as automatic backups or IMAP support without involving our servers) need less-restricted access to system resources, which is exactly what the Electron framework provides.

While some criticize Electron as "just a basic wrapper," it has obvious benefits:

  * Electron enables you to adapt a web app quickly for Linux, Windows, and MacOS desktops. In fact, most Linux desktop apps are built with Electron.
  * Electron enables you to easily bring the desktop client to feature parity with the web app.
  * Once you've published the desktop app, you can use free development capacity to add desktop-specific features that enhance usability and security.
  * And last but certainly not least, it's a great way to make the app feel native and integrated into the user's system while maintaining its identity.

### Meeting users' needs

At Tutanota, we do not rely on big investor money, rather we are a community-driven project. We grow our team organically based on the increasing number of users upgrading to our freemium service's paid plans. Listening to what users want is not only important to us, it is essential to our success.

Offering a desktop client was users' [most-wanted feature][7] in Tutanota, and we are proud that we can now offer free beta desktop clients to all of our users. (We also implemented another highly requested feature—[search on encrypted data][8]—but that's a topic for another time.)

We liked the idea of providing users with signed versions of Tutanota and enabling functions that are impossible in the browser, such as push notifications via a background process. Now we plan to add more desktop-specific features, such as IMAP support without depending on our servers to act as a proxy, automatic backups, and offline availability.

We chose Electron because its combination of Chromium and Node.js promised to be the best fit for our small development team, as it required only minimal changes to our web app. It was particularly helpful to use the browser APIs for everything as we got started, slowly replacing those components with more native versions as we progressed. This approach was especially handy with attachment downloads and notifications.

### Tuning security

We were aware that some people cite security problems with Electron, but we found Electron's options for fine-tuning access in the web app quite satisfactory. You can use resources like Electron's [security documentation][9] and Luca Carettoni's [Electron Security Checklist][10] to help prevent catastrophic mishaps with untrusted content in your web app.

### Achieving feature parity

The Tutanota web client was built from the start with a solid protocol for interprocess communication. We utilize web workers to keep user interface (UI) rendering responsive while encrypting and requesting data. This came in handy when we started implementing our mobile apps, which use the same protocol to communicate between the native part and the web view.
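To sketch the idea behind such a protocol, here is a hypothetical Python model (Tutanota's actual implementation is JavaScript; the names below are illustrative, not theirs): each request carries an id, and the matching response is routed back to whichever caller is waiting on that id.

```python
import itertools

class RequestBus:
    """Toy model of an id-matched request/response protocol."""

    def __init__(self):
        self._ids = itertools.count()
        self._pending = {}  # request id -> callback waiting for the response

    def request(self, method, args, on_response):
        # Build a request message and remember who is waiting for its answer.
        msg_id = next(self._ids)
        self._pending[msg_id] = on_response
        return {"id": msg_id, "type": "request", "method": method, "args": args}

    def handle_response(self, msg):
        # Route the response back to the caller that issued the matching request.
        self._pending.pop(msg["id"])(msg["value"])

results = []
bus = RequestBus()
msg = bus.request("openMailbox", ["inbox"], results.append)
bus.handle_response({"id": msg["id"], "type": "response", "value": "ok"})
print(results)  # -> ['ok']
```

Because both sides only need to agree on this message shape, the same dispatch logic can sit behind a web worker, a mobile web view, or a desktop (node) process.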

That's why when we started building the desktop clients, a lot of bindings for things like native push notifications, opening mailboxes, and working with the filesystem were already there, so only the native (node) side had to be implemented.

Another convenience was our build process using the [Babel transpiler][11], which allows us to write the entire codebase in modern ES6 JavaScript and mix-and-match utility modules between the different environments. This enabled us to speedily adapt the code for the Electron-based desktop apps. However, we encountered some challenges.

### Overcoming challenges

While Electron allows us to integrate with the different platforms' desktop environments pretty easily, you can't underestimate the time investment to get things just right! In the end, it was these little things that took up much more time than we expected but were also crucial to finish the desktop client project.

The places where platform-specific code was necessary caused most of the friction:

  * Window management and the tray, for example, are still handled in subtly different ways on the three platforms.
  * Registering Tutanota as the default mail program and setting up autostart required diving into the Windows Registry while making sure to prompt the user for admin access in a [UAC][12]-compatible way.
  * We needed to use Electron's API for shortcuts and menus to offer even standard features like copy, paste, undo, and redo.

This process was complicated a bit by users' expectations of certain, sometimes not directly compatible behavior of the apps on different platforms. Making the three versions feel native required some iteration and even some modest additions to the web app to offer a text search similar to the one in the browser.

### Wrapping up

Our experience with Electron was largely positive, and we completed the project in less than four months. Despite some rather time-consuming features, we were surprised about the ease with which we could ship a beta version of the [Tutanota desktop client for Linux][13]. If you're interested, you can dive into the source code on [GitHub][14].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/linux-desktop-electron

作者:[Nils Ganther][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/nils-ganther
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending)
[2]: https://tutanota.com/
[3]: https://f-droid.org/en/packages/de.tutao.tutanota/
[4]: https://electronjs.org/
[5]: https://github.com/systemjs/systemjs
[6]: https://mithril.js.org/
[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482
[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/
[9]: https://electronjs.org/docs/tutorial/security
[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf
[11]: https://babeljs.io/
[12]: https://en.wikipedia.org/wiki/User_Account_Control
[13]: https://tutanota.com/blog/posts/desktop-clients/
[14]: https://www.github.com/tutao/tutanota

94
sources/tech/20190410 Managing Partitions with sgdisk.md
Normal file
@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing Partitions with sgdisk)
[#]: via: (https://fedoramagazine.org/managing-partitions-with-sgdisk/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)

Managing Partitions with sgdisk
======

![][1]

[Roderick W. Smith][2]'s _sgdisk_ command can be used to manage the partitioning of your hard disk drive from the command line. The basics that you need to get started with it are demonstrated below.

The following six parameters are all that you need to know to make use of sgdisk's most basic features:

 1. **-p**
_Print_ the partition table:
```
# sgdisk -p /dev/sda
```
 2. **-d x**
_Delete_ partition x:
```
# sgdisk -d 1 /dev/sda
```
 3. **-n x:y:z**
Create a _new_ partition numbered x, starting at y and ending at z:
```
# sgdisk -n 1:1MiB:2MiB /dev/sda
```
 4. **-c x:y**
_Change_ the name of partition x to y:
```
# sgdisk -c 1:grub /dev/sda
```
 5. **-t x:y**
Change the _type_ of partition x to y:
```
# sgdisk -t 1:ef02 /dev/sda
```
 6. **--list-types**
List the partition type codes:
```
# sgdisk --list-types
```

![The SGDisk Command][3]

As you can see in the above examples, most of the commands require that the [device file name][4] of the hard disk drive to operate on be specified as the last parameter.

The parameters shown above can be combined so that you can completely define a partition with a single run of the sgdisk command:

```
# sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda
```

Relative values can be specified for some fields by prefixing the value with a **+** or **-** symbol. If you use a relative value, sgdisk will do the math for you. For example, the above example could be written as:

```
# sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda
```

The value **0** has a special-case meaning for several of the fields:

  * In the _partition number_ field, 0 indicates that the next available number should be used (numbering starts at 1).
  * In the _starting address_ field, 0 indicates that the start of the largest available block of free space should be used. Some space at the start of the hard drive is always reserved for the partition table itself.
  * In the _ending address_ field, 0 indicates that the end of the largest available block of free space should be used.

By using **0** and relative values in the appropriate fields, you can create a series of partitions without having to pre-calculate any absolute values. For example, the following sequence of sgdisk commands would create all the basic partitions that are needed for a typical Linux installation if run in sequence against a blank hard drive:

```
# sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
# sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
# sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda
```

The above example shows how to partition a hard disk for a BIOS-based computer. The [grub partition][5] is not needed on a UEFI-based computer. Because sgdisk is calculating all the absolute values for you in the above example, you can just skip running the first command on a UEFI-based computer and the remaining commands can be run without modification. Likewise, you could skip creating the swap partition and the remaining commands would not need to be modified.
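To make the relative and zero-valued fields concrete, here is a small Python sketch (illustrative only, not part of sgdisk) that resolves the four-command sequence above into absolute sector ranges, under the simplifying assumptions of 512-byte sectors, a 2048-sector gap reserved for the partition table at the start of the disk, and no extra alignment:

```python
SECTOR = 512          # bytes per sector (assumed)
FIRST_USABLE = 2048   # sectors reserved at the start for the partition table

def mib(n):
    """Convert a size in MiB to 512-byte sectors."""
    return n * 1024 * 1024 // SECTOR

def resolve(sizes, first_usable, last_usable):
    """Resolve partition sizes (in sectors; 0 means "rest of the disk")
    into absolute (start, end) sector ranges, doing the math that the
    0 and +N fields delegate to sgdisk."""
    parts, cursor = [], first_usable
    for size in sizes:
        start = cursor
        end = last_usable if size == 0 else start + size - 1
        parts.append((start, end))
        cursor = end + 1
    return parts

# grub (+1MiB), boot (+1GiB), swap (+4GiB), root (rest) on a hypothetical 20 GiB disk
layout = resolve([mib(1), mib(1024), mib(4096), 0],
                 FIRST_USABLE, mib(20 * 1024) - 1)
print(layout[0])  # -> (2048, 4095): the 1 MiB grub partition
```

A real disk also reserves space at its end for the backup GPT, and sgdisk aligns partition starts, so the numbers reported by `sgdisk -p` will differ slightly; the point is only how relative and zero values resolve.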

There is also a short-cut for deleting all the partitions from a hard disk with a single command:

```
# sgdisk --zap-all /dev/sda
```

For the most up-to-date and detailed information, check the man page:

```
$ man sgdisk
```

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/managing-partitions-with-sgdisk/

作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/managing-partitions-816x345.png
[2]: https://www.rodsbooks.com/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/sgdisk.jpg
[4]: https://en.wikipedia.org/wiki/Device_file
[5]: https://en.wikipedia.org/wiki/BIOS_boot_partition

135
sources/tech/20190411 Be your own certificate authority.md
Normal file
@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Be your own certificate authority)
[#]: via: (https://opensource.com/article/19/4/certificate-authority)
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez/users/elenajon123)

Be your own certificate authority
======

Create a simple, internal CA for your microservice architecture or integration testing.

![][1]

The Transport Layer Security ([TLS][2]) model, which is sometimes referred to by the older name SSL, is based on the concept of [certificate authorities][3] (CAs). These authorities are trusted by browsers and operating systems and, in turn, _sign_ servers' certificates to validate their ownership.

However, for an intranet, a microservice architecture, or integration testing, it is sometimes useful to have a _local CA_: one that is trusted only internally and, in turn, signs local servers' certificates.

This especially makes sense for integration tests. Getting certificates can be a burden because the servers will be up for minutes. But having an "ignore certificate" option in the code could allow it to be activated in production, leading to a security catastrophe.

A CA certificate is not much different from a regular server certificate; what matters is that it is trusted by local code. For example, in the **requests** library, this can be done by setting the **REQUESTS_CA_BUNDLE** variable to a directory containing this certificate.
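For instance, a minimal sketch of that setup (assuming the CA certificate has been written out to a file named `ca.crt`, as the code below does):

```python
import os

# Point the requests library at the local CA certificate so that every
# HTTPS request made in this process trusts certificates it has signed.
os.environ["REQUESTS_CA_BUNDLE"] = os.path.abspath("ca.crt")

# From here on, requests.get("https://service.test.local/") would succeed
# against a server presenting a certificate signed by the local CA.
print(os.environ["REQUESTS_CA_BUNDLE"].endswith("ca.crt"))  # -> True
```

The variable may also name a directory of certificates, provided the directory is laid out the way OpenSSL expects (hashed file names).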

In the example of creating a certificate for integration tests, there is no need for a _long-lived_ certificate: if your integration tests take more than a day, you have already failed.

So, calculate **yesterday** and **tomorrow** as the validity interval:

```
>>> import datetime
>>> one_day = datetime.timedelta(days=1)
>>> today = datetime.datetime.today()
>>> yesterday = today - one_day
>>> tomorrow = today + one_day
```

Now you are ready to create a simple CA certificate. You need to generate a private key, create a public key, set up the "parameters" of the CA, and then self-sign the certificate: a CA certificate is _always_ self-signed. Finally, write out both the certificate file as well as the private key file.

```
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import hashes, serialization
from cryptography import x509
from cryptography.x509.oid import NameOID

private_key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=2048,
    backend=default_backend()
)
public_key = private_key.public_key()
builder = x509.CertificateBuilder()
builder = builder.subject_name(x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.issuer_name(x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
]))
builder = builder.not_valid_before(yesterday)
builder = builder.not_valid_after(tomorrow)
builder = builder.serial_number(x509.random_serial_number())
builder = builder.public_key(public_key)
builder = builder.add_extension(
    x509.BasicConstraints(ca=True, path_length=None),
    critical=True)
certificate = builder.sign(
    private_key=private_key, algorithm=hashes.SHA256(),
    backend=default_backend()
)
private_bytes = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption())
public_bytes = certificate.public_bytes(
    encoding=serialization.Encoding.PEM)
with open("ca.pem", "wb") as fout:
    fout.write(private_bytes + public_bytes)
with open("ca.crt", "wb") as fout:
    fout.write(public_bytes)
```

In general, a real CA will expect a [certificate signing request][4] (CSR) to sign a certificate. However, when you are your own CA, you can make your own rules! Just go ahead and sign what you want.

Continuing with the integration test example, you can create the private keys and sign the corresponding public keys right then. Notice **COMMON_NAME** needs to be the "server name" in the **https** URL. If you've configured name lookup, the needed server will respond on **service.test.local**.
```
service_private_key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=2048,
    backend=default_backend()
)
service_public_key = service_private_key.public_key()
builder = x509.CertificateBuilder()
builder = builder.subject_name(x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local')
]))
# The issuer is the CA certificate created above; a serial number is
# required before the certificate can be signed.
builder = builder.issuer_name(certificate.subject)
builder = builder.serial_number(x509.random_serial_number())
builder = builder.not_valid_before(yesterday)
builder = builder.not_valid_after(tomorrow)
builder = builder.public_key(service_public_key)
certificate = builder.sign(
    # Sign with the CA's private key created earlier.
    private_key=private_key, algorithm=hashes.SHA256(),
    backend=default_backend()
)
private_bytes = service_private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption())
public_bytes = certificate.public_bytes(
    encoding=serialization.Encoding.PEM)
with open("service.pem", "wb") as fout:
    fout.write(private_bytes + public_bytes)
```

Now the **service.pem** file has a private key and a certificate that is "valid": it has been signed by your local CA. The file is in a format that can be given to, say, Nginx, HAProxy, or most other HTTPS servers.
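
For example, a minimal Nginx server block using that file might look like the following. This is a sketch of my own, not from the article; the paths and listening port are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name service.test.local;

    # service.pem contains both the private key and the certificate,
    # so the same file can back both directives.
    ssl_certificate     /etc/nginx/certs/service.pem;
    ssl_certificate_key /etc/nginx/certs/service.pem;
}
```

A test client then only needs to trust `ca.crt` to accept this server's certificate.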

By applying this logic to testing scripts, it's easy to create servers that look like authentic HTTPS servers, as long as the client is configured to trust the right CA.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/certificate-authority

作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez/users/elenajon123
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[3]: https://en.wikipedia.org/wiki/Certificate_authority
[4]: https://en.wikipedia.org/wiki/Certificate_signing_request
@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How do you contribute to open source without code?)
[#]: via: (https://opensource.com/article/19/4/contribute-without-code)
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer)

How do you contribute to open source without code?
======

![Dandelion held out over water][1]

My earliest open source contributions date back to the mid-1980s, when our organization first connected to [UseNet][2] and we discovered the contributed code and the opportunities to share in its development and support.

Today there are endless contribution opportunities, from contributing code to making how-to videos.

I'm going to step right over the whole issue of contributing code, other than pointing out that many of us who write code but don't consider ourselves developers can still [contribute code][3]. Instead, I'd like to remind everyone that there are lots of [non-code ways to contribute to open source][4] and talk about three alternatives.

### Filing bug reports

One important and concrete kind of contribution could best be described as "not being afraid to file a decent bug report" and [all the consequences related to that][5]. Sometimes it's quite challenging to [file a decent bug report][6]. For example:

  * A bug may be difficult to record or describe. A long and complicated message with all sorts of unrecognizable codes may flash by as the computer is booting, or there may just be some "odd behavior" on the screen with no error messages produced.
  * A bug may be difficult to reproduce. It may occur only on certain hardware/software configurations, it may be rarely triggered, or the precise problem area may not be apparent.
  * A bug may be linked to a very specific development environment configuration that is too big, messy, and complicated to share, requiring the laborious creation of a stripped-down example.
  * When reporting a bug to a distro, the maintainers may suggest filing the bug upstream instead, which can sometimes lead to a lot of work when the version supported by the distro is not the primary version of interest to the upstream community. (This can happen when the version provided in the distro lags the officially supported release and development version.)

Nevertheless, I exhort would-be bug reporters (including me) to press on and try to get bugs fully recorded and acknowledged.

One way to get started is to use your favorite search tool to look for similar bug reports, see how they are described, where they are filed, and so on. Another important thing to know is the formal mechanism defined for bug reporting by your distro (for example, [Fedora's is here][7]; [openSUSE's is here][8]; [Ubuntu's is here][9]) or software package ([LibreOffice's is here][10]; [Mozilla's seems to be here][11]).

### Answering users' questions

I lurk and occasionally participate in various mailing lists and forums, such as the [Ubuntu quality control team][12] and [forums][13], [LinuxQuestions.org][14], and the [ALSA users' mailing list][15]. Here, the contributions may relate less to bugs and more to documenting complex use cases. It's a great feeling for everyone to see someone jumping in to help a person sort out their trouble with a particular issue.

### Writing about open source

Finally, another area where I really enjoy contributing is [_writing_][16] about using open source software, whether it's a how-to guide, a comparative evaluation of different solutions to a particular problem, or just a general exploration of an area of interest (in my case, using open source music-playing software to enjoy music). A similar option is making an instructional video; it's easy to [record the desktop][17] while demonstrating some fiendishly difficult desktop maneuver, such as creating a splashy logo with GIMP. And those of you who are bi- or multi-lingual can also consider translating existing how-to articles or videos into another language.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/contribute-without-code

作者:[Chris Hermansen (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water)
[2]: https://en.wikipedia.org/wiki/Usenet
[3]: https://opensource.com/article/19/2/open-science-git
[4]: https://opensource.com/life/16/1/8-ways-contribute-open-source-without-writing-code
[5]: https://producingoss.com/en/bug-tracker.html
[6]: https://opensource.com/article/19/3/bug-reporting
[7]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/
[8]: https://en.opensuse.org/openSUSE:Submitting_bug_reports
[9]: https://help.ubuntu.com/stable/ubuntu-help/report-ubuntu-bug.html.en
[10]: https://wiki.documentfoundation.org/QA/BugReport
[11]: https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines
[12]: https://wiki.ubuntu.com/QATeam
[13]: https://ubuntuforums.org/
[14]: https://www.linuxquestions.org/
[15]: https://www.alsa-project.org/wiki/Mailing-lists
[16]: https://opensource.com/users/clhermansen
[17]: https://opensource.com/education/16/10/simplescreenrecorder-and-kazam
@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing Ubuntu MATE on a Raspberry Pi)
[#]: via: (https://itsfoss.com/ubuntu-mate-raspberry-pi/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)

Installing Ubuntu MATE on a Raspberry Pi
======

_**Brief: This quick tutorial shows you how to install Ubuntu MATE on Raspberry Pi devices.**_

[Raspberry Pi][1] is by far the most popular SBC (single-board computer) and the go-to board for makers. [Raspbian][2], which is based on Debian, is the official operating system for the Pi. It is lightweight, comes bundled with educational tools, and gets the job done for most scenarios.

[Installing Raspbian][3] is easy as well, but the problem with [Debian][4] is its slow upgrade cycles and older packages.

Running Ubuntu on the Raspberry Pi gives you a richer experience and up-to-date software. You have a few options when it comes to running Ubuntu on your Pi:

  1. [Ubuntu MATE][5]: Ubuntu MATE is the only distribution that natively supports the Raspberry Pi with a complete desktop environment.
  2. [Ubuntu Server 18.04][6] + installing a desktop environment manually.
  3. Using images built by the [Ubuntu Pi Flavor Maker][7] community; _these images only support the Raspberry Pi 2B and 3B variants_ and are **not** updated to the latest LTS release.

The first option is the easiest and the quickest to set up, while the second option gives you the freedom to install the desktop environment of your choice. I recommend going with either of the first two options.

Here are the links to download the disc images. In this article I'll be covering the Ubuntu MATE installation only.

### Installing Ubuntu MATE on Raspberry Pi

Go to the download page of Ubuntu MATE and get the recommended images.

![][8]

The experimental ARM64 version should only be used if you need to run 64-bit only applications like MongoDB on a Raspberry Pi server.

[Download Ubuntu MATE for Raspberry Pi][9]

#### Step 1: Setting Up the SD Card

The image file needs to be decompressed once downloaded. You can simply right-click on it to extract it.

Alternatively, the following command will do the job.

```
xz -d ubuntu-mate***.img.xz
```

Alternatively, you can use [7-zip][10] if you are on Windows.

Install **[Balena Etcher][11]**; we'll use this tool to write the image to the SD card. Make sure that your SD card has at least 8 GB of capacity.

Launch Etcher and select the image file and your SD card.

![][12]

Once the flashing process is complete, the SD card is ready.

#### Step 2: Setting Up the Raspberry Pi

You probably already know that you need a few things to get started with Raspberry Pi, such as a mouse, keyboard, HDMI cable, etc. You can also [install Raspberry Pi headlessly without keyboard and mouse][13], but this tutorial is not about that.

  * Plug in a mouse and a keyboard.
  * Connect the HDMI cable.
  * Insert the SD card into the SD card slot.

Power it on by plugging in the power cable. Make sure you have a good power supply (5V, 3A minimum). A bad power supply can reduce performance.

#### Ubuntu MATE installation

Once you power on the Raspberry Pi, you'll be greeted with a very familiar Ubuntu installation process. The process is pretty much straightforward from here.

![Select your keyboard layout][14]

![Select Your Timezone][15]

Select your WiFi network and enter the password in the network connection screen.

![Add Username and Password][16]

After setting the keyboard layout, timezone, and user credentials, you'll be taken to the login screen after a few minutes. And voilà! You are almost done.

![][17]

Once logged in, the first thing you should do is [update Ubuntu][18]. You can use the command line for that.

```
sudo apt update
sudo apt upgrade
```

You can also use the Software Updater.

![][19]

Once the updates are finished installing, you are good to go. You can also go ahead and install Raspberry Pi-specific packages for GPIO and other I/O, depending on your needs.

What made you think about installing Ubuntu on the Raspberry Pi, and how has your experience been with Raspbian? Let me know in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-mate-raspberry-pi/

作者:[Chinmay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/chinmay/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/
[2]: https://www.raspberrypi.org/downloads/
[3]: https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/
[4]: https://www.debian.org/
[5]: https://ubuntu-mate.org/
[6]: https://wiki.ubuntu.com/ARM/RaspberryPi#Recovering_a_system_using_the_generic_kernel
[7]: https://ubuntu-pi-flavour-maker.org/download/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ubuntu-mate-raspberry-pi-download.jpg?ssl=1
[9]: https://ubuntu-mate.org/download/
[10]: https://www.7-zip.org/download.html
[11]: https://www.balena.io/etcher/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/Screenshot-from-2019-04-08-01-36-16.png?ssl=1
[13]: https://linuxhandbook.com/raspberry-pi-headless-setup/
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Keyboard-layout-ubuntu.jpg?fit=800%2C467&ssl=1
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/select-time-zone-ubuntu.jpg?fit=800%2C468&ssl=1
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Credentials-ubuntu.jpg?fit=800%2C469&ssl=1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Desktop-ubuntu.jpg?fit=800%2C600&ssl=1
[18]: https://itsfoss.com/update-ubuntu/
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/update-software.png?ssl=1
@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managed, enabled, empowered: 3 dimensions of leadership in an open organization)
[#]: via: (https://opensource.com/open-organization/19/4/managed-enabled-empowered)
[#]: author: (Heidi Hess von Ludewig (Red Hat) https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack)

Managed, enabled, empowered: 3 dimensions of leadership in an open organization
======
Different types of work call for different types of engagement. Should open leaders always aim for empowerment?

![][1]

"Empowerment" seems to be the latest people management [buzzword][2]. And it's an important consideration for open organizations, too. After all, we like to think these open organizations thrive when the people inside them are equipped to take initiative to do their best work as they see fit. Shouldn't an open leader's goal be complete and total empowerment of everyone, in all parts of the organization, doing all types of work?

Not necessarily.

Before we jump on the employee [empowerment bandwagon][3], we should explore the important connections between empowerment and innovation. That requires placing empowerment in context.

As Allison Matlack has already demonstrated, employee investment in an organization's mission and activities—and employee _autonomy_ relative to those things—[can take several forms][4], from "managed" to "enabled" to "empowered." Sometimes, complete and total empowerment _isn't_ the most desirable type of investment an open leader would like to activate in a contributor. Projects are always changing. New challenges are always arising. As a result, the _type_ or _degree_ of involvement leaders can expect in different situations is always shifting. "Managed," "enabled," and "empowered" contributors exist simultaneously and dynamically, depending on the work they're performing (and that work's desired outcomes).

So before we head down to the community center to win a game of buzzword bingo, let's examine the different types of work, how they function, and how they contribute to the overall innovation of a company. Let's refine what we mean by "managed," "enabled," and "empowered" work, and discuss why we need all three.

### Managed, enabled, empowered

First, let's consider and define each type of work activity.

"Managed" work involves tasks that are coordinated using guidance, supervision, and direction in order to achieve specific outcomes. When someone works to coordinate _every_ part of _every_ task, we colloquially call that behavior "micro-managing." "Enabled" associates have the ability to direct themselves while working within boundaries (guidance), and they have access to the materials and resources (information, people, technologies, etc.) they require to problem-solve as they see fit. Lastly, "empowered" individuals _direct themselves_ within organizational limits, have access to materials and resources, and also have the authority to represent their team or organization and make decisions about work on its behalf using their best judgment, based on the former elements.

Most important here is the idea that these concepts are _nested_ (see Figure 1). Because each level builds on the one before it, one cannot have the full benefit of "empowered" associates without also having clear guidance and direction ("managed") and transparency of information and resources ("enabled"). What changes from level to level is the amount of managed or enabled activity that comes before it.

Let's dive more deeply into the nature of those activities and discuss the roles leaders should play in each.

#### Managed work

"Managed" work is just that: work activity supervised and directed to some degree. The amount of management occurring in a situation is dynamic and depends on the activity itself. For instance, in the manufacturing economy, managed work is prominent. I'll call this "widget" work, the point of which is producing a widget the same way, every time. People need to perform this work according to consistent processes with consistent, standardized outcomes.

Before we jump on the employee empowerment bandwagon, we should explore the important connections between empowerment and innovation. That requires placing empowerment in context.

Because this work requires consistency, it typically proceeds via explicit guidelines and policies (rules about cost, schedule, quality, quantity, process, and so on—characteristics applicable to all work to a greater or lesser degree). We can find examples of it in a variety of roles across many industries. Quite often, _any_ role in _any_ industry requires _some_ amount of this type of work. Examples include manufacturing precision machine parts, answering a customer support case within a specified timeframe for contractual reasons and with a friendly greeting, etc. In the software industry, a role that's _entirely_ like this would be a rarity, yet even these roles require some work of the "managed" type. For instance, consider the way a support engineer must respond to a case using a set of quality standards (friendliness, perhaps with a professional written tone, a branded signature line, adherence to a particular contractual agreement, usually responding within a particular time frame, etc.).

"Management" is the best strategy when _work requirements include adhering to a consistent schedule, process, and quality._

#### Enabled work

As the amount of creativity a role requires _increases_, the amount of directed and "managed" work we find in that role _decreases_. Guidelines get broader, processes looser, schedules lengthened (I wish!). This is because what's required to "be creative" involves other types of work (and new degrees of transparency and authority along with them). Ron McFarland explains this in [his article on adaptive leadership][5]: Many challenges are ambiguous, as opposed to technical, and therefore require specific kinds of leadership.

To take this idea one step further, we might say open leaders need to be _adaptive_ to how they view and implement the different kinds of work on their teams or in their organizations. "Enabling" associates means growing their skills and knowledge so they can manage themselves. The foundation for this type of activity is information—access to it, sharing it, and opportunities to independently use it to complete work activity. This is the kind of work Peter Drucker was referring to when he coined the term "knowledge work."

Enabled work liberates associates from the constraints of managed work, though it still involves leaders providing considerable direction and guidance. Outcomes of this work might be familiar and normalized, but the _paths to achieving them_ are more open-ended than in managed work. Methods are more flexible and inclusive of individual preference and capability.

"Enablement" is the best strategy when _objectives are well-defined and the outcomes are aligned with past outcomes and results_.

#### Empowered work

In "[Beyond Engagement][4]," Allison describes empowerment as a state in which employees have "access to all the information, training, tools, and connections to people and other teams that they need to do their best work, as well as a safe environment in which to do that work so they feel comfortable making their own decisions." In other words, empowerment is enablement with the opportunity for associates to _act using their own best judgment as it relates to a shared understanding of team and organizational guidelines and objectives._

"Empowerment" is the best strategy when _objectives and methods for achieving them are unclear and creative flexibility is necessary for defining them._ Often this work is focused on activities where problem definition and possible solutions (i.e., investigation, planning, and execution) are not well-defined.

Any role in any organization involves these three types of work occurring at various moments and in various situations. No job requires just one.

### Supporting innovation through managed, enabled, and empowered work

The labels "managed," "enabled," and "empowered" apply to different work at different times, and _all three_ are embedded in work activity at different times and in different tasks. That means leaders should be paying more attention to the work contributors are doing: the kind of work, its purpose, and its desired outcomes. We're now in a position to consider how _innovation_ factors into this equation.

Frequently, people discuss the different modes of work by way of _contrast_. Most language about them connotes negativity: managed work is "the worst," while empowered work is "the best." By this logic, the goal of any leadership practice should be to "move people along the scale"—to create empowered contributors.

However, just as the types of work are located on a continuum, so too should our understanding of them be. Rather than seeing work as, for example, "_always empowered_" or "_always managed_," we should recognize that any role is a function of _all three types of work at the same time_, each to a varying degree. Think of the equation this way:

> _Work = managed (x) + enabled (x) + empowered (x)_

Note here that the more enabled and empowered the work is, the more potential there is for creativity when doing that work. This is because creativity (and the creative individual) requires information—consistently updated and "fresh" sources of information—used in conjunction with individual judgment and capacity for interpreting how to _use_ and _combine_ that information to define problems, ideate, and solve problems. Enabled and empowered work can increase inclusivity—that is, draw more closely on an individual's unique skills, perspectives, and talents—because, by definition, those kinds of work are less managed and more guided. Open leadership clearly supports hiring for diversity exactly for the reason that it makes inclusivity so much richer. The ambiguity that's characteristic of the challenges we face in modern workplaces means that the work we do is ripe with potential for innovation—if we embrace risk and adapt our leadership styles to liberate it.

In other words:

> _Innovation = enabled (x) + empowered (x) / managed (x)_
>
> _The more enabled and empowered the work is, the more potential for innovation._

Focusing on the importance of enabled and empowered work is not to devalue managed work in any way. I would say that managed work creates a stable foundation on which creative (enabled and empowered) work can blossom. Imagine if all the work we did were empowered; our organizations would be completely chaotic, undefined, and ambiguous. Organizations need a degree of managed work in order to ensure some direction, some understanding of priorities, and some definition of "quality."

Any role in any organization involves these three types of work occurring at various moments and in various situations. No job requires just one. As open leaders, we must recognize that work isn't an all-or-nothing, one-type-of-work-alone equation. We have to get better at understanding work in _these three different ways_ and using each one to the organization's advantage, depending on the situation.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/4/managed-enabled-empowered

作者:[Heidi Hess von Ludewig (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_ControlNotDesirable.png?itok=nrXwSkv7
[2]: https://www.entrepreneur.com/article/288340
[3]: https://www.forbes.com/sites/lisaquast/2011/02/28/6-ways-to-empower-others-to-succeed/#5c860b365c62
[4]: https://opensource.com/open-organization/18/10/understanding-engagement-and-empowerment
[5]: https://opensource.com/open-organization/19/3/adaptive-leadership-review
@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Testing Small Scale Scrum in the real world)
[#]: via: (https://opensource.com/article/19/4/next-steps-small-scale-scrum)
[#]: author: (Agnieszka Gancarczyk (Red Hat), Leigh Griffin (Red Hat) https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin)

Testing Small Scale Scrum in the real world
======
We plan to test the Small Scale Scrum framework in real-world projects involving small teams.

![Green graph of measurements][1]

Scrum is built on the three pillars of inspection, adaptation, and transparency. Our empirical research is really the starting point in bringing scrum, one of the most popular agile implementations, to smaller teams. As presented in the diagram below, we are now taking time to inspect this framework and its principles by testing them in real-world projects.

![small-scale-scrum-inspection.png][2]

Progress in empirical process control

We plan to implement Small Scale Scrum in several upcoming projects. Our test candidates are customers with real projects where teams of one to three people will undertake short-lived projects (ranging from a few weeks to three months) with an emphasis on quality and outputs. Individual projects, such as final-year projects (over 24 weeks) that are a capstone project after four years in a degree program, are almost exclusively completed by a single person. In projects of this nature, there is an emphasis on the project plan and structure and on maximizing the outputs that a single person can achieve.

We plan to metricize and publish the results of these projects and hold several retrospectives with the teams involved. We are particularly interested in metrics centered around quality, with a particular emphasis on quality in a software engineering context, and in management: both project management through the lifecycle with a customer and management of the day-to-day team activities and the delivery, release, handover, and signoff process.

Ultimately, we will retrospectively analyze the overall framework and principles and see if the Manifesto we envisioned holds up to the reality of executing a project with small numbers. From this data, we will produce the second version of Small Scale Scrum and begin a cyclic pattern of inspecting the model in new projects and adapting it again.

We want to do all of this transparently. This series of articles is one window into the data, the insights, the experiences, and the reality of running scrum for small teams whose everyday challenges include context switching, communication, and the need for a quality delivery. A follow-up series of articles is planned to examine the outputs and help develop the second edition of Small Scale Scrum entirely in the community.

We also plan to attend conferences and share our knowledge with the Agile community. Our first conference will be Agile 2019, where the evolution of Small Scale Scrum will be further explored as an Experience Report. We are advising colleges and sharing our structure and approach to managing and executing final-year projects. All our outputs will be freely available in the open source way.

Given the changes to recommended team sizes in the Scrum Guide, our long-term goal and vision is to have the Scrum Guide reflect that teams of one or more people occupying one or more roles within a project are capable of following scrum.

* * *

_Leigh Griffin will present Small Scale Scrum at Agile 2019 in Washington, August 5-9, 2019 as an Experience Report. An expanded paper will be published on [Agile Alliance][3] to accompany this._

* * *

--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://opensource.com/article/19/4/next-steps-small-scale-scrum
|
||||||
|
|
||||||
|
作者:[Agnieszka Gancarczyk (Red Hat)Leigh Griffin (Red Hat)][a]
|
||||||
|
选题:[lujun9972][b]
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]: https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin
|
||||||
|
[b]: https://github.com/lujun9972
|
||||||
|
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements)
|
||||||
|
[2]: https://opensource.com/sites/default/files/small-scale-scrum-inspection.png (small-scale-scrum-inspection.png)
|
||||||
|
[3]: https://www.agilealliance.org/
|
@ -1,58 +1,61 @@
|
|||||||
关于 /dev/urandom 的流言终结
======

有很多关于 `/dev/urandom` 和 `/dev/random` 的流言在坊间不断流传。然而流言终究是流言。

> 本篇文章里针对的都是近来的 Linux 操作系统,其它类 Unix 操作系统不在讨论范围内。

**`/dev/urandom` 不安全。加密用途必须使用 `/dev/random`。**

事实:`/dev/urandom` 才是类 Unix 操作系统下推荐的加密种子。

**`/dev/urandom` 是<ruby>伪随机数生成器<rt>pseudo random number generator</rt></ruby>(PRNG),而 `/dev/random` 是“真”随机数生成器。**

事实:它们两者本质上用的是同一种 CSPRNG(密码学安全的伪随机数生成器)。它们之间细微的差别和“真”不“真”随机完全无关。

**`/dev/random` 在任何情况下都是密码学应用更好的选择。即便 `/dev/urandom` 也同样安全,我们还是不应该用它。**

事实:`/dev/random` 有个很恶心人的问题:它是阻塞的。(LCTT 译注:意味着请求都得逐个执行,等待前一个请求完成)

**但阻塞不是好事吗!`/dev/random` 只会给出电脑收集的信息熵足以支持的随机量。`/dev/urandom` 在用完了所有熵的情况下还会不断吐出不安全的随机数给你。**

事实:这是误解。就算我们不去考虑应用层面后续对随机种子的用法,“用完信息熵池”这个概念本身就不存在。仅仅 256 位的熵就足以生成计算上安全的随机数很长、很长的一段时间了。

问题的关键还在后头:`/dev/random` 怎么知道系统还有*多少*可用的信息熵?接着看!

**但密码学家老是讨论重新选种子(re-seeding)。这难道不和上一条冲突吗?**

事实:你说的也没错!某种程度上吧。确实,随机数生成器一直在使用系统信息熵的状态重新选种。但这么做(一部分)是因为别的原因。

这样说吧,我没有说引入新的信息熵是坏的。更多的熵肯定更好。我只是说在熵池低的时候阻塞是没必要的。

**好,就算你说的都对,但是 `/dev/(u)random` 的 man 页面和你说的也不一样啊!到底有没有专家同意你说的这堆啊?**

事实:其实 man 页面和我说的不冲突。它看似在说 `/dev/urandom` 对密码学用途来说不安全,但如果你真的理解这堆密码学术语,你就知道它说的并不是这个意思。

man 页面确实说在一些情况下推荐使用 `/dev/random`(我觉得也没问题,但绝对不是说必要的),但它也推荐在大多数“一般”的密码学应用下使用 `/dev/urandom`。

虽然诉诸权威一般来说不是好事,但在密码学这么严肃的事情上,和专家统一意见是很有必要的。

所以说呢,还确实有一些*专家*和我的意见一致:`/dev/urandom` 就应该是类 UNIX 操作系统下密码学应用的首选。显然,是他们的观点说服了我,而不是反过来。

------

难以相信吗?觉得我肯定错了?读下去看我能不能说服你。

我尝试不讲太高深的东西,但是有两点内容必须先提一下才能让我们接着论证观点。

首当其冲的,*什么是随机性*,或者更准确地:我们在探讨什么样的随机性?

另外一点很重要的是,我*没有尝试以说教的态度*对你们写这段话。我写这篇文章是为了日后可以在讨论时指给别人看,它比 140 字长(LCTT 译注:推特长度),这样我就不用一遍遍重复我的观点了。能把论点磨炼成一篇文章本身就很有助于将来的讨论。

并且我非常乐意听到不一样的观点。但我只是认为单单地说 `/dev/urandom` 坏是不够的。你得能指出到底有什么问题,并且剖析它们。
### 你是在说我笨?!

绝对没有!

事实上我自己也相信了“`/dev/urandom` 是不安全的”好些年。这几乎不是我们的错,因为那么多德高望重的人在 Usenet、论坛、推特上跟我们重复这个观点,甚至*连 man 手册*都似是而非地说着。我们当年怎么可能去质疑诸如“信息熵太低了”这种看上去就很让人信服的观点呢?

整个流言之所以如此广为流传,不是因为人们太蠢,而是因为但凡有点信息熵和密码学概念的人都会觉得这个说法很有道理。直觉似乎都在告诉我们这流言讲得很有道理。很不幸,直觉在密码学里通常不管用,这次也一样。

@ -62,9 +65,9 @@ man 页面确实说在一些情况下推荐使用 /dev/random (我觉得也没

我不想搞得太复杂以至于变成哲学范畴的东西。这种讨论很容易走偏,因为对于随机模型大家见仁见智,讨论很快会变得毫无意义。

在我看来,“真随机”的“试金石”是量子效应。一个光子穿过或不穿过一个半透镜。或者观察一个放射性粒子衰变。这类东西是现实世界最接近真随机的东西。当然,有些人也不相信这类过程是真随机的,或者这个世界根本不存在任何随机性。这个就百家争鸣了,我也不好多说什么。

密码学家一般都会通过不去讨论什么是“真随机”来避免这种争论。他们更关心的是<ruby>不可预测性<rt>unpredictability</rt></ruby>。只要没有*任何*方法能猜出下一个随机数就可以了。所以当你以密码学应用为前提讨论一个随机数好不好的时候,在我看来这才是最重要的。

无论如何,我不怎么关心“哲学上安全”的随机数,这也包括别人嘴里的“真”随机数。

@ -72,25 +75,25 @@ man 页面确实说在一些情况下推荐使用 /dev/random (我觉得也没
但就让我们退一步说,你有了一个“真”随机变量。你下一步做什么呢?

你把它们打印出来然后挂在墙上,来展示量子宇宙的美与和谐?牛逼!我很理解你。

但是等等,你说你要*用*它们?做密码学用途?额,那这就废了,因为这事情就有点复杂了。

事情是这样的,你的真随机、量子力学加护的随机数即将被用进不理想的现实世界算法里去。

因为我们使用的大多数算法并不是<ruby>理论信息学<rt>information-theoretic</rt></ruby>上安全的。它们“只能”提供**计算意义上的安全**。我能想到为数不多的例外就只有 Shamir 密钥分享和一次性密码本(One-time pad)算法。并且就算前者是名副其实的(如果你实际打算用的话),后者则毫无可行性可言。

但所有那些大名鼎鼎的密码学算法,AES、RSA、Diffie-Hellman、椭圆曲线,还有所有那些加密软件包,OpenSSL、GnuTLS、Keyczar、你的操作系统的加密 API,都仅仅是计算意义上安全的。

那区别是什么呢?理论信息学上的安全肯定是安全的,绝对是;其它那些算法都可能在理论上被拥有无限计算力的穷举破解。我们依然愉快地使用它们,因为全世界的计算机加起来都不可能在宇宙年龄的时间里破解,至少现在是这样。而这就是我们文章里说的“不安全”。

除非哪个聪明的家伙破解了算法本身——在只需要极少量计算力的情况下。这也是每个密码学家梦寐以求的圣杯:破解 AES 本身、破解 RSA 本身等等。

所以现在我们来到了更底层的东西:随机数生成器,你坚持要“真随机”而不是“伪随机”。但是没过一会儿,你的真随机数就被喂进了你极为鄙视的伪随机算法里了!

真相是,如果我们最先进的哈希算法被破解了,或者最先进的分组加密算法被破解了,你得到的这些“哲学上安全”的随机数也没有意义了,因为反正你也没有安全的方法来应用它们了。

所以,把计算意义上安全的随机数喂给仅仅是计算意义上安全的算法就可以了。换而言之,用 `/dev/urandom`。

### Linux 随机数生成器的构架

@ -100,19 +103,19 @@ man 页面确实说在一些情况下推荐使用 /dev/random (我觉得也没
![image: mythical structure of the kernel's random number generator][1]

“真随机数”,尽管可能有点瑕疵,进入操作系统,然后它的熵立刻被加进内部熵计数器。接着,经过“矫偏”和“漂白”之后它进入内核的熵池,然后 `/dev/random` 和 `/dev/urandom` 从里面生成随机数。

“真”随机数生成器 `/dev/random` 直接从池里选出随机数,如果熵计数器表示能满足需要的数字大小,那就吐出数字并且减少熵计数。如果不够的话,它会阻塞程序直至有足够的熵进入系统。

这里很重要的一环是,`/dev/random` 几乎是仅经过必要的“漂白”后,就直接把那些进入系统的随机性吐了出来,不加扭曲。

而对 `/dev/urandom` 来说,事情也是一样的。除了当没有足够的熵的时候,它不会阻塞,而会从一直在运行的伪随机数生成器(当然,是密码学安全的,即 CSPRNG)里吐出“低质量”的随机数。这个 CSPRNG 只会用“真随机数”生成种子一次(或者好几次,这不重要),但你不能特别信任它。

在这种对随机数生成的理解下,很多人会觉得在 Linux 下尽量避免 `/dev/urandom` 看上去有那么点道理。

因为要么你有足够多的熵,用它就相当于用了 `/dev/random`;要么没有,那你就会从几乎没有高熵输入的 CSPRNG 那里得到一个低质量的随机数。

看上去很邪恶是吧?很不幸的是这种看法是完全错误的。实际上,随机数生成器的构架更像是下面这样的。

#### 更好的简化

@ -120,66 +123,65 @@ man 页面确实说在一些情况下推荐使用 /dev/random (我觉得也没
![image: actual structure of the kernel's random number generator before Linux 4.8][2]

> 这是个很粗糙的简化。实际上不仅有一个,而是三个熵池:一个主池,另一个给 `/dev/random`,还有一个给 `/dev/urandom`,后两者依靠从主池里获取熵。这三个池都有各自的熵计数器,但二级池(后两个)的计数器基本都在 0 附近,而“新鲜”的熵总在需要的时候从主池流过来。同时还有好多混合和回流在系统里同时进行。整个过程对于这篇文档来说都过于复杂了,我们跳过。

你看到最大的区别了吗?CSPRNG 并不是一个挂在随机数生成器后面、在 `/dev/urandom` 需要输出但熵不够时用来填充的东西。CSPRNG 是整个随机数生成过程的内部组件之一。从来就没有什么 `/dev/random` 直接从池里输出纯纯的随机性。每个随机源的输入都在 CSPRNG 里被充分混合和散列过了,这一切都发生在实际变成一个随机数、被 `/dev/urandom` 或者 `/dev/random` 吐出去之前。

另外一个重要的区别是,这里根本没有什么熵计数器,只有预估。一个源给你的熵的量并不是什么很明确、能直接得到的数字,你得预估它。注意,如果你太乐观地预估了它,那 `/dev/random` 最重要的特性——只给出熵允许的随机量——就荡然无存了。很不幸的,预估熵的量是很困难的。

Linux 内核只使用事件的到达时间来预估熵的量。它通过多项式插值(一种模型)来预估实际的到达时间有多“出乎意料”。这种多项式插值的方法到底是不是预估熵量的好方法,本身就是个问题;同时,硬件情况会不会以某种特定的方式影响到达时间也是个问题;而所有硬件的取样率同样是个问题,因为这基本上就直接决定了随机数到达时间的颗粒度。

说到最后,至少现在看来,内核的熵预估还是不错的。这也意味着它比较保守。有些人会具体地讨论它有多好,这都超出我的脑容量了。就算这样,如果你坚持不想在没有足够多的熵的情况下吐出随机数,那你看到这里可能还会有一丝紧张。我睡得就很香了,因为我不关心熵预估什么的。

最后强调一下重点:`/dev/random` 和 `/dev/urandom` 都是被同一个 CSPRNG 喂的。只有在它们用完各自熵池(根据某种预估标准)的时候,它们的行为才会不同:`/dev/random` 阻塞,`/dev/urandom` 不阻塞。

##### Linux 4.8 以后

在 Linux 4.8 里,`/dev/random` 和 `/dev/urandom` 的等价性被放弃了。现在 `/dev/urandom` 的输出不来自于熵池,而是直接从 CSPRNG 来。

![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]

*我们很快会理解*为什么这不是一个安全问题。

### 阻塞有什么问题?

你有没有需要等着 `/dev/random` 来吐随机数?比如在虚拟机里生成一个 PGP 密钥?或者访问一个正在生成会话密钥的网站?

这些都是问题。阻塞本质上会降低可用性,换而言之,你的系统不干你让它干的事情。不用我说,这是不好的。要是它不干活你干嘛搭建它呢?

> 我在工厂自动化里做过和安全相关的系统。猜猜看安全系统失效的主要原因是什么?被错误操作。就这么简单。很多安全措施的流程让工人恼火了,比如时间太长,或者太不方便。你要知道人很会找捷径来“解决”问题。

但其实有个更深刻的问题:人们不喜欢被打断。他们会找一些绕过的方法,把一些诡异的东西拼接在一起仅仅因为这样能用。一般人根本不知道什么密码学之类乱七八糟的,至少正常的人是这样吧。

为什么不禁止调用 `random()`?为什么不随便在论坛上找个人,让他告诉你写一些奇异的 `ioctl` 来增加熵计数器呢?为什么不干脆就把 SSL 加密给关了算了呢?

到头来如果东西太难用的话,你的用户就会被迫开始做一些降低系统安全性的事情——你甚至不知道他们会做些什么。

我们很容易会忽视可用性之类的重要性。毕竟安全第一对吧?所以比起牺牲安全,不可用、难用、不方便都是次要的?

这种二元对立的想法是错的。阻塞不一定就安全了。正如我们看到的,`/dev/urandom` 直接从 CSPRNG 里给你一样好的随机数。用它不好吗!

### CSPRNG 没问题

现在情况听上去很惨淡。如果连高质量的 `/dev/random` 都是从一个 CSPRNG 里来的,我们怎么敢在高安全性的需求上使用它呢?

实际上,“看上去随机”是现存大多数密码学基础组件的基本要求。如果你观察一个密码学哈希的输出,它一定得和随机的字符串不可区分,密码学家才会认可这个算法。如果你生成一个分组加密,它的输出(在你不知道密钥的情况下)也必须和随机数据不可区分才行。

如果任何人能比暴力穷举更有效地破解一个加密算法,比如利用了某些 CSPRNG 伪随机的弱点,那这就又是老一套了:一切都废了,也别谈后面的了。分组加密、哈希,一切都基于同一类数学基石,CSPRNG 也是如此。所以别害怕,到头来都一样。

### 那熵池快空了的情况呢?
毫无影响。

加密算法的根基建立在攻击者不能预测输出上,只要最一开始有足够的随机性(熵)就行了。一般的下限是 256 位,不需要更多了。

鉴于我们一直在很随意地使用“熵”这个概念,我用“位”来量化随机性,希望读者不要太在意细节。像我们之前讨论的那样,内核的随机数生成器甚至没法精确地知道进入系统的熵的量,只有一个预估,而且这个预估的准确性到底怎么样也没人知道。但这些都无关紧要。
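“256 位种子足以产生任意长的输出”这一点可以用一个极简的草图来直观说明(仅为演示原理的假设性实现,并非内核实际采用的算法,切勿用于生产):只要把“种子 || 计数器”反复做 SHA-256 散列,就能从 32 字节的种子确定性地扩展出任意长度、且在哈希未被破解前不可预测的字节流。

```python
import hashlib

def toy_csprng(seed: bytes, nbytes: int) -> bytes:
    """示意性 CSPRNG:第 i 个输出块 = SHA-256(seed || i)。"""
    assert len(seed) == 32                      # 256 位种子
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

# 同一个 256 位种子可以确定性地扩展出任意长的字节流
seed = bytes(32)
print(len(toy_csprng(seed, 1024)))   # 1024
```

真正的 CSPRNG 会用经过充分审计的构造(如基于分组加密或流加密的设计),但“有限种子、无限输出”的思路是一样的。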
### 重新选种

但如果熵这么不重要,为什么还要有新的熵一直被收进随机数生成器里呢?

> djb [提到][4],太多的熵甚至可能会起到反效果。

首先,一般不会这样。如果你有很多随机性可以拿来用,用就对了!

@ -189,92 +191,90 @@ djb [提到][4] 太多的熵甚至可能会起到反效果。

你已经凉了,因为攻击者可以计算出所有未来会被输出的随机数了。

但是,如果不断有新的熵被混进系统,那内部状态会再一次变得随机起来。所以随机数生成器被设计成这样,是有些“自愈”能力的。

但这是在给内部状态引入新的熵,这和阻塞输出没有任何关系。
### random 和 urandom 的 man 页面

这两个 man 页面在吓唬程序员方面很有建树:

> 从 `/dev/urandom` 读取数据不会因为需要更多熵而阻塞。这样的结果是,如果熵池里没有足够多的熵,取决于驱动使用的算法,返回的数值在理论上有被密码学攻击的可能性。发动这样攻击的步骤并没有出现在任何公开文献当中,但这样的攻击从理论上讲是可能存在的。如果你的应用担心这类情况,你应该使用 `/dev/random`。

>> 实际上已经有 `/dev/random` 和 `/dev/urandom` 的 Linux 内核 man 页面的更新版本。不幸的是,随便一个网络搜索,出现在结果顶部的仍然是旧的、有缺陷的版本。此外,许多 Linux 发行版仍在发布旧的 man 页面。所以不幸的是,这一节需要在这篇文章中保留更长的时间。我很期待删除这一节!

没有“公开的文献”描述,但是 NSA 的小卖部里肯定卖这种攻击手段是吧?如果你真的真的很担心(你应该很担心),那就用 `/dev/random`,然后所有问题都没了?

然而事实是,可能某个什么情报局有这种攻击,或者某个什么邪恶黑客组织找到了方法。但如果我们就直接假设这种攻击一定存在,也是不合理的。

而且就算你想给自己一个安心,我也要给你泼个冷水:AES、SHA-3 或者其它什么常见的加密算法也没有“公开文献记述”的攻击手段。难道你也不用这几个加密算法了?这显然是可笑的。

我们再回到 man 页面说的:“使用 `/dev/random`”。我们已经知道了,虽然 `/dev/urandom` 不阻塞,但是它的随机数和 `/dev/random` 都是从同一个 CSPRNG 里来的。

如果你真的需要信息论意义上安全的随机数(你不需要的,相信我),那才有可能成为唯一一个你需要等足够熵进入 CSPRNG 的理由。而且即便那样,你也不能用 `/dev/random`。

man 页面有毒,就这样。但至少它还稍稍挽回了一下自己:

> 如果你不确定该用 `/dev/random` 还是 `/dev/urandom`,那你可能应该用后者。通常来说,除了需要长期使用的 GPG/SSL/SSH 密钥以外,你总该使用 `/dev/urandom`。

>> 该手册页的[当前更新版本](http://man7.org/linux/man-pages/man4/random.4.html)毫不含糊地说:

>> `/dev/random` 接口被认为是遗留接口,并且 `/dev/urandom` 在所有用例中都是首选和足够的,除了在系统启动早期就需要随机性的应用程序;对于这些应用程序,必须改用 `getrandom(2)`,因为它会一直阻塞,直到熵池初始化完成。

行。我觉得没必要,但如果你真的要用 `/dev/random` 来生成“长期使用的密钥”,用就是了,也没人拦着!你可能需要等几秒钟或者敲几下键盘来增加熵,但这没什么问题。

但求求你们,不要就因为“想更安全点”就让一个连接邮件服务器的操作挂起半天。
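在应用代码里,“读取 `/dev/urandom`”通常不需要自己打开设备文件,标准库就有现成封装。下面以 Python 为例示意(`os.urandom()` 在 Linux 上即从系统 CSPRNG 取数;Linux 3.17+ 配合 Python 3.6+ 还提供 `os.getrandom()`,对应上文 man 页面推荐的 `getrandom(2)` 系统调用):

```python
import os

# os.urandom(n) 在 Linux 上等价于读取 /dev/urandom,
# 永不阻塞;而 getrandom(2) 只会在启动早期熵池尚未
# 初始化完成时阻塞,之后行为与之相同。
key = os.urandom(32)            # 32 字节 = 256 位密钥材料
print(len(key))                 # 32

# 两次调用得到相同输出的概率(约 2^-256)可以忽略不计
print(key == os.urandom(32))
```

注意这只是演示读取接口;真实项目里请直接使用语言或加密库提供的密钥生成函数。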
### 正道

本篇文章里的观点显然在互联网上是“小众”的。但如果问问一个真正的密码学家,你很难找到一个认同阻塞 `/dev/random` 的人。

比如我们看看 [Daniel Bernstein][5](即著名的 djb)的看法:

> 我们密码学家对这种胡乱迷信行为表示不负责。你想想,写 `/dev/random` man 页面的人好像同时相信:
>
> * (1) 我们不知道如何用一个 256 位长的 `/dev/random` 输出来生成一个无限长的随机密钥串流(这正是我们需要 `/dev/urandom` 吐出来的),但与此同时
> * (2) 我们却知道怎么用单个密钥来加密一条消息(这是 SSL、PGP 之类干的事情)
>
> 对密码学家来说这甚至都不好笑了。

或者 [Thomas Pornin][6] 的看法,他也是我在 Stack Exchange 上见过最乐于助人的一位:

> 简单来说,是的。展开说,答案还是一样。`/dev/urandom` 生成的数据可以说和真随机完全无法区分,至少在现有科技水平下。使用比 `/dev/urandom` “更好的”随机性毫无意义,除非你在使用极为罕见的“信息论安全”的加密算法。这肯定不是你的情况,不然你早就说了。
>
> urandom 的 man 页面多多少少有些误导人,或者干脆可以说是错的——特别是当它说 `/dev/urandom` 会“用完熵”以及“`/dev/random` 是更好的”那几句话;

或者 [Thomas Ptacek][7] 的看法,他不设计密码算法或者密码学系统,但他是一家名声在外的安全咨询公司的创始人,这家公司负责很多渗透测试和破解烂密码学算法的工作:

> 用 urandom。用 urandom。用 urandom。用 urandom。用 urandom。

### 没有完美
/dev/urandom 不是完美的,问题分两层:
|
`/dev/urandom` 不是完美的,问题分两层:
|
||||||
|
|
||||||
在 Linux 上,不像 FreeBSD,/dev/urandom 永远不阻塞。记得安全性取决于某个最一开始决定的随机性?种子?
|
在 Linux 上,不像 FreeBSD,`/dev/urandom` 永远不阻塞。记得安全性取决于某个最一开始决定的随机性?种子?
|
||||||
|
|
||||||
Linux 的 /dev/urandom 会很乐意给你吐点不怎么随机的随机数,甚至在内核有机会收集一丁点熵之前。什么时候有这种情况?当你系统刚刚启动的时候。
|
Linux 的 `/dev/urandom` 会很乐意给你吐点不怎么随机的随机数,甚至在内核有机会收集一丁点熵之前。什么时候有这种情况?当你系统刚刚启动的时候。
|
||||||
|
|
||||||
FreeBSD 的行为更正确点:/dev/random 和 /dev/urandom 是一样的,在系统启动的时候 /dev/random 会阻塞到有足够的熵为止,然后他们都再也不阻塞了。
|
FreeBSD 的行为更正确点:`/dev/random` 和 `/dev/urandom` 是一样的,在系统启动的时候 `/dev/random` 会阻塞到有足够的熵为止,然后它们都再也不阻塞了。
|
||||||
|
|
||||||
与此同时 Linux 实行了一个新的 syscall,最早由 OpenBSD 引入叫 getentrypy(2),在 Linux 下这个叫 getrandom(2)。这个 syscall 有着上述正确的行为:阻塞到有足够的熵为止,然后再也不阻塞了。当然,这是个 syscall,而不是一个字节设备(译者:指不在 /dev/ 下),所以它在 shell 或者别的脚本语言里没那么容易获取。这个 syscall 自 Linux 3.17 起存在。
|
> 与此同时 Linux 实行了一个新的<ruby>系统调用<rt>syscall</rt></ruby>,最早由 OpenBSD 引入叫 `getentrypy(2)`,在 Linux 下这个叫 `getrandom(2)`。这个系统调用有着上述正确的行为:阻塞到有足够的熵为止,然后再也不阻塞了。当然,这是个系统调用,而不是一个字节设备(LCTT 译注:指不在 `/dev/` 下),所以它在 shell 或者别的脚本语言里没那么容易获取。这个系统调用 自 Linux 3.17 起存在。
|
||||||
|
|
||||||
在 Linux 上其实这个问题不太大,因为 Linux 发行版会在启动的过程中储蓄一点随机数(这发生在已经有一些熵之后,因为启动程序不会在按下电源的一瞬间就开始运行)到一个种子文件,以便系统下次启动的时候读取。所以每次启动的时候系统都会从上一次会话里带一点随机性过来。
|
在 Linux 上其实这个问题不太大,因为 Linux 发行版会在启动的过程中储蓄一点随机数(这发生在已经有一些熵之后,因为启动程序不会在按下电源的一瞬间就开始运行)到一个种子文件中,以便系统下次启动的时候读取。所以每次启动的时候系统都会从上一次会话里带一点随机性过来。
|
||||||
|
|
||||||
显然这比不上在关机脚本里写入一些随机种子,因为这样的显然就有更多熵可以操作了。但这样做显而易见的好处就是它不关心系统是不是正确关机了,比如可能你系统崩溃了。
|
显然这比不上在关机脚本里写入一些随机种子,因为这样的显然就有更多熵可以操作了。但这样做显而易见的好处就是它不用关心系统是不是正确关机了,比如可能你系统崩溃了。
|
||||||
|
|
||||||
而且这种做法在你真正第一次启动系统的时候也没法帮你随机,不过好在系统安装器一般会写一个种子文件,所以基本上问题不大。
|
而且这种做法在你真正第一次启动系统的时候也没法帮你随机,不过好在系统安装器一般会写一个种子文件,所以基本上问题不大。
|
||||||
|
|
||||||
虚拟机是另外一层问题。因为用户喜欢克隆他们,或者恢复到某个之前的状态。这种情况下那个种子文件就帮不到你了。
|
虚拟机是另外一层问题。因为用户喜欢克隆它们,或者恢复到某个之前的状态。这种情况下那个种子文件就帮不到你了。
|
||||||
|
|
||||||
但解决方案依然和用 /dev/random 没关系,而是你应该正确的给每个克隆或者恢复的的镜像重新生成种子文件,之类的。
|
但解决方案依然和用 `/dev/random` 没关系,而是你应该正确的给每个克隆或者恢复的镜像重新生成种子文件。
|
||||||
|
|
||||||
### 太长不看;
|
### 太长不看
|
||||||
|
|
||||||
别问,问就是用 /dev/urandom !
|
|
||||||
|
|
||||||
|
别问,问就是用 `/dev/urandom` !
|
||||||
|
|
||||||
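“用 `/dev/urandom`”落实到现代编程语言里,通常就是调用标准库对系统 CSPRNG 的封装,而不是手动读设备文件。下面以 Python 的 `secrets` 模块为例示意(其底层即使用 `os.urandom()` 这类系统接口):

```python
import secrets

# 生成一个 256 位(32 字节)的随机令牌,以十六进制字符串表示
token = secrets.token_hex(32)
print(len(token))        # 64:每个字节对应两个十六进制字符

# 需要原始字节时可以用 token_bytes
key = secrets.token_bytes(32)
print(len(key))          # 32
```

相比 `random` 模块(面向模拟与统计,不适合密钥),`secrets` 从设计上就是给密码学用途准备的,这正符合本文的结论。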
--------------------------------------------------------------------------------

@ -282,14 +282,14 @@ via: https://www.2uo.de/myths-about-urandom/

作者:[Thomas Hühn][a]
译者:[Moelf](https://github.com/Moelf)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2uo.de/
[1]:https://www.2uo.de/_media/wiki:structure-no.png
[2]:https://www.2uo.de/_media/wiki:structure-yes.png
[3]:https://www.2uo.de/_media/wiki:structure-new.png
[4]:http://blog.cr.yp.to/20140205-entropy.html
[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939

@ -1,360 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Understand And Identify File types in Linux)
[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
怎样理解和识别 Linux 中的文件类型
======

众所周知,在 Linux 中一切皆为文件,包括硬盘和显卡等。

在 Linux 中导航时,大部分的文件都是普通文件和目录文件。

但是也有其他的类型,对应 5 类不同的作用。

所以,理解 Linux 中的文件类型在许多方面都是非常重要的。

如果你不相信,那只需要浏览全文,就会发现它有多重要。

如果你不能理解文件类型,就不能够毫无畏惧地做任意的修改。

如果你做了一些错误的修改,就有可能毁坏你的文件系统,所以当你操作的时候请小心一点。

在 Linux 系统中文件是非常重要的,因为所有的设备和守护进程都被存储为文件。

### 在 Linux 中有多少种可用类型?

据我所知,在 Linux 中总共有 7 种类型的文件,分为 3 大类。细节如下。

  * 普通文件
  * 目录文件
  * 特殊文件(该类有五个类型的文件)
    * 链接文件
    * 字符设备文件
    * Socket 文件
    * 命名管道文件
    * 块文件

参考下面的表可以更好地理解 Linux 中的文件类型。

| 符号 | 意义 |
| ---- | ---- |
| `-` | 普通文件。以连字符 `-` 开头。 |
| `d` | 目录文件。以英文字母 `d` 开头。 |
| `l` | 链接文件。以英文字母 `l` 开头。 |
| `c` | 字符设备文件。以英文字母 `c` 开头。 |
| `s` | Socket 文件。以英文字母 `s` 开头。 |
| `p` | 命名管道文件。以英文字母 `p` 开头。 |
| `b` | 块文件。以英文字母 `b` 开头。 |
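上表的对应关系也可以用一小段 Python 草图来验证(示意代码,借助标准库 `stat` 模块,把文件按 `ls -l` 输出第一列的首字符归类):

```python
import os
import stat

def file_type(path: str) -> str:
    """返回 path 对应 `ls -l` 首字符的文件类型符号。"""
    mode = os.lstat(path).st_mode      # lstat 不跟随符号链接
    if stat.S_ISREG(mode):
        return "-"   # 普通文件
    if stat.S_ISDIR(mode):
        return "d"   # 目录文件
    if stat.S_ISLNK(mode):
        return "l"   # 链接文件
    if stat.S_ISCHR(mode):
        return "c"   # 字符设备文件
    if stat.S_ISSOCK(mode):
        return "s"   # Socket 文件
    if stat.S_ISFIFO(mode):
        return "p"   # 命名管道文件
    if stat.S_ISBLK(mode):
        return "b"   # 块文件
    return "?"

print(file_type("."))        # d:当前目录
```

在类 Unix 系统上,`file_type(os.devnull)` 会返回 `c`,因为 `/dev/null` 是字符设备。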
### 方法 1:手动识别 Linux 中的文件类型

如果你很了解 Linux,那么你可以借助上表很容易地识别文件类型。

#### 在 Linux 中如何查看普通文件?

在 Linux 中使用下面的命令去查看普通文件。在 Linux 文件系统中,普通文件可以出现在任何地方。普通文件的颜色是`白色`。

```
# ls -la | grep ^-
-rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history
-rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout
-rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile
-rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc
-rw-r--r--. 1 root root 26 Dec 27 17:55 liks
-rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat
-rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip
-rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip
-rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt
-rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt
-rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt
```

#### 在 Linux 中如何查看目录文件?

在 Linux 中使用下面的命令去查看目录文件。在 Linux 文件系统中,目录文件同样可以出现在任何地方。目录文件的颜色是`蓝色`。

```
# ls -la | grep ^d
drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/
drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/
drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/
drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/
```

#### 在 Linux 中如何查看链接文件?

在 Linux 中使用下面的命令去查看链接文件。在 Linux 文件系统中,链接文件也可以出现在任何地方。链接文件有两种类型:软链接和硬链接。链接文件的颜色是`浅绿宝石色`。

```
# ls -la | grep ^l
lrwxrwxrwx. 1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link
lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder
```

#### 在 Linux 中如何查看字符设备文件?

在 Linux 中使用下面的命令查看字符设备文件。字符设备文件仅出现在特定位置,即 `/dev` 目录下。字符设备文件的颜色是`黄色`。

```
# ls -la | grep ^c
crw-------. 1 root root 5, 1 Jan 28 14:05 console
crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency
crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash
crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0
crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full
crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse
```

#### 在 Linux 中如何查看块文件?

在 Linux 中使用下面的命令查看块文件。块文件仅出现在特定位置,即 `/dev` 目录下。块文件的颜色是`黄色`。

```
# ls -la | grep ^b
brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0
brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1
brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2
brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3
brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4
```

#### 在 Linux 中如何查看 Socket 文件?

在 Linux 中使用下面的命令查看 Socket 文件。Socket 文件仅出现在特定位置。Socket 文件的颜色是`粉色`。

```
# ls -la | grep ^s
srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket
```

#### 在 Linux 中如何查看命名管道文件?

在 Linux 中使用下面的命令查看命名管道文件。命名管道文件仅出现在特定位置。命名管道文件的颜色是`黄色`。

```
# ls -la | grep ^p
prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo|
prw-------. 1 root root 0 Jan 28 14:06 stats-mail|
```
### 方法2:在 Linux 中如何使用 file 命令识别文件类型
|
|
||||||
|
|
||||||
在 Linux 中 file 命令允许我们去定义不同的文件类型。这里有三个测试集,按此顺序进行三组测试:文件系统测试,magic 测试和用于识别文件类型的语言测试。
|
|
||||||
|
|
||||||
#### 在 Linux 中如何使用 file 命令查看普通文件
|
|
||||||
|
|
||||||
在你的终端简单地输入 file 命令,接着输入普通文件。file 命令将会读取提供的文件内容并且准确地显示文件的类型。
|
|
||||||
|
|
||||||
这就是我们看到对于每个普通文件有不同结果的原因。参考下面普通文件的不同结果。
|
|
||||||
|
|
||||||
```
|
|
||||||
# file 2daygeek_access.log
|
|
||||||
2daygeek_access.log: ASCII text, with very long lines
|
|
||||||
|
|
||||||
# file powertop.html
|
|
||||||
powertop.html: HTML document, ASCII text, with very long lines
|
|
||||||
|
|
||||||
# file 2g-test
|
|
||||||
2g-test: JSON data
|
|
||||||
|
|
||||||
# file powertop.txt
|
|
||||||
powertop.txt: HTML document, UTF-8 Unicode text, with very long lines
|
|
||||||
|
|
||||||
# file 2g-test-05-01-2019.tar.gz
|
|
||||||
2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560
|
|
||||||
```
|
|
||||||
|
|
||||||
#### How to view directory files in Linux using the file command?

Simply enter the file command on your terminal, followed by a directory. See the results below.

```
# file Pictures/
Pictures/: directory
```

#### How to view link files in Linux using the file command?

Simply enter the file command on your terminal, followed by a link file. See the results below.

```
# file log
log: symbolic link to /run/systemd/journal/dev-log
```

#### How to view character device files in Linux using the file command?

Simply enter the file command on your terminal, followed by a character device file. See the results below.

```
# file vcsu
vcsu: character special (7/64)
```

#### How to view block files in Linux using the file command?

Simply enter the file command on your terminal, followed by a block file. See the results below.

```
# file sda1
sda1: block special (8/1)
```

#### How to view socket files in Linux using the file command?

Simply enter the file command on your terminal, followed by a socket file. See the results below.

```
# file system_bus_socket
system_bus_socket: socket
```
#### How to view named pipe files in Linux using the file command?

Simply enter the file command on your terminal, followed by a named pipe file. See the results below.

```
# file pipe-test
pipe-test: fifo (named pipe)
```
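The sections above run file on one name at a time. It also accepts several names in one call, and the -b (brief) option suppresses the file-name prefix in the output. A quick sketch using throwaway files (all names here are arbitrary):

```shell
# Create one file of each basic kind, then classify them in one call.
tmp=$(mktemp -d)
cd "$tmp"
echo hello > my-file        # regular file
mkdir my-dir                # directory
ln -s my-file my-link       # symbolic link
mkfifo my-pipe              # named pipe
file -b my-file my-dir my-link my-pipe
```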
### Method 3: How to identify file types in Linux using the stat command?

The stat command allows us to view the file type and the file system status. This utility gives more information than the file command. It shows a lot of information about a file, such as size, block size, IO block size, inode value, links, file permissions, UID, GID, and the access, modify, and change time details.

#### How to view regular files in Linux using the stat command?

Simply enter the stat command on your terminal, followed by a regular file. See the results below.
```
# stat 2daygeek_access.log
File: 2daygeek_access.log
Size: 14406929 Blocks: 28144 IO Block: 4096 regular file
Device: 10301h/66305d Inode: 1727555 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-03 14:05:26.430328867 +0530
Modify: 2019-01-03 14:05:26.460328868 +0530
Change: 2019-01-03 14:05:26.460328868 +0530
Birth: -
```

#### How to view directory files in Linux using the stat command?

Simply enter the stat command on your terminal, followed by a directory. See the results below.

```
# stat Pictures/
File: Pictures/
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 10301h/66305d Inode: 1703982 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2018-11-24 03:22:11.090000828 +0530
Modify: 2019-01-05 18:27:01.546958817 +0530
Change: 2019-01-05 18:27:01.546958817 +0530
Birth: -
```

#### How to view link files in Linux using the stat command?

Simply enter the stat command on your terminal, followed by a link file. See the results below.

```
# stat /dev/log
File: /dev/log -> /run/systemd/journal/dev-log
Size: 28 Blocks: 0 IO Block: 4096 symbolic link
Device: 6h/6d Inode: 278 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.033333447 +0530
Modify: 2019-01-05 16:36:30.766666768 +0530
Change: 2019-01-05 16:36:30.766666768 +0530
Birth: -
```

#### How to view character device files in Linux using the stat command?

Simply enter the stat command on your terminal, followed by a character device file. See the results below.

```
# stat /dev/vcsu
File: /dev/vcsu
Size: 0 Blocks: 0 IO Block: 4096 character special file
Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40
Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2019-01-05 16:36:31.056666781 +0530
Modify: 2019-01-05 16:36:31.056666781 +0530
Change: 2019-01-05 16:36:31.056666781 +0530
Birth: -
```

#### How to view block files in Linux using the stat command?

Simply enter the stat command on your terminal, followed by a block file. See the results below.

```
# stat /dev/sda1
File: /dev/sda1
Size: 0 Blocks: 0 IO Block: 4096 block special file
Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1
Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk)
Access: 2019-01-05 16:36:31.596666806 +0530
Modify: 2019-01-05 16:36:31.596666806 +0530
Change: 2019-01-05 16:36:31.596666806 +0530
Birth: -
```

#### How to view socket files in Linux using the stat command?

Simply enter the stat command on your terminal, followed by a socket file. See the results below.

```
# stat /var/run/dbus/system_bus_socket
File: /var/run/dbus/system_bus_socket
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 15h/21d Inode: 576 Links: 1
Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.823333482 +0530
Modify: 2019-01-05 16:36:31.810000149 +0530
Change: 2019-01-05 16:36:31.810000149 +0530
Birth: -
```

#### How to view named pipe files in Linux using the stat command?

Simply enter the stat command on your terminal, followed by a named pipe file. See the results below.

```
# stat pipe-test
File: pipe-test
Size: 0 Blocks: 0 IO Block: 4096 fifo
Device: 10301h/66305d Inode: 1705583 Links: 1
Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-06 02:00:03.040394731 +0530
Modify: 2019-01-06 02:00:03.040394731 +0530
Change: 2019-01-06 02:00:03.040394731 +0530
Birth: -
```
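The stat command can also print just the file-type string, which is handy in scripts. A small sketch (not from the article) using the GNU coreutils `%F` format specifier on throwaway files:

```shell
# --format=%F prints only the type name, e.g. "directory",
# "regular file", "fifo", "symbolic link".
tmp=$(mktemp -d)
mkfifo "$tmp/my-pipe"
stat --format=%F "$tmp" "$tmp/my-pipe"
rm -r "$tmp"
```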
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [liujing97](https://github.com/liujing97)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
@ -0,0 +1,159 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

Seven methods to identify a disk partition or file system UUID in Linux
======
As a Linux administrator, you should know how to view the UUID of a partition or a file system.

Most Linux systems mount partitions using the UUID. You can verify this in the `/etc/fstab` file.

Many utilities are available to view the UUID. In this article we will show you several ways to view it, and you can pick the one that suits you.

### What is a UUID?

UUID stands for Universally Unique Identifier, which helps the Linux system identify a disk partition instead of a block device file.

libuuid, part of the util-linux-ng package since kernel 2.15.1, is installed by default on Linux systems.

UUIDs generated by this library can reasonably be considered unique within a system, and unique across all systems.

A UUID is a 128-bit number used to identify information in computer systems. UUIDs were originally used in the Apollo Network Computing System (NCS), and were later standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE).

A UUID is represented by 32 hexadecimal (base 16) digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12, for a total of 36 characters (32 hexadecimal characters and 4 hyphens).

For example: d92fa769-e00f-4fd7-b6ed-ecf7224af7fa

A sample of my /etc/fstab file:
```
# cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1
UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
```
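The 8-4-4-4-12 layout described above can be checked mechanically; a small sketch with a grep regular expression (the sample value is the one from the article):

```shell
# Validate the 8-4-4-4-12 shape and count the characters.
uuid="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa"
echo "$uuid" | grep -E -q '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' && echo valid
echo "${#uuid}"   # 36 characters in total
```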
We can view it using the following seven commands:

* **`blkid command:`** Locates or prints block device attributes.
* **`lsblk command:`** lsblk lists information about all available or specified block devices.
* **`hwinfo command:`** hwinfo stands for hardware information tool; it is another great utility to probe the hardware present in the system.
* **`udevadm command:`** udev management tool.
* **`tune2fs command:`** Adjusts tunable file system parameters on ext2/ext3/ext4 file systems.
* **`dumpe2fs command:`** Queries ext2/ext3/ext4 file system information.
* **`Using the by-uuid path:`** This directory contains the UUIDs and the actual block device files; the UUIDs are symlinked to the actual block device files.
### How to view the UUID of a disk partition or file system in Linux using the blkid command?

blkid is a command-line utility to locate or print block device attributes. It uses the libblkid library to get the UUIDs of disk partitions on a Linux system.

```
# blkid
/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
/dev/sdc5: PARTUUID="8cc8f9e5-05"
```

### How to view the UUID of a disk partition or file system in Linux using the lsblk command?

lsblk lists information about all available or specified block devices. The lsblk command reads the sysfs file system and the udev database to gather information.

If the udev database is not available, or lsblk was compiled without udev support, it tries to read the LABEL, UUID, and file system type from the block devices directly; in that case root permissions are required. The command prints all block devices (except RAM disks) in a tree-like format by default.

```
# lsblk -o name,mountpoint,size,uuid
NAME   MOUNTPOINT SIZE UUID
sda               30G
└─sda1 /          20G  d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
sdb               10G
sdc               10G
├─sdc1            1G   d17e3c31-e2c9-4f11-809c-94a549bc43b7
├─sdc3            1G   ca307aa4-0866-49b1-8184-004025789e63
├─sdc4            1K
└─sdc5            1G
sdd               10G
sde               10G
sr0               1024M
```
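In scripts it is often handy to print just one UUID with nothing around it; blkid supports this with the -s and -o options (a sketch; /dev/sda1 is an example device, so adjust it to one of yours):

```shell
# -s UUID selects only the UUID tag, -o value prints the bare value
# without the "UUID=" key, ready for use in a variable or fstab entry.
blkid -s UUID -o value /dev/sda1
```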
### How to view the UUID of a disk partition or file system in Linux using the by-uuid path?

This directory contains the UUIDs and the actual block device files; the UUIDs are symlinked to the actual block device files.

```
# ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
```

### How to view the UUID of a disk partition or file system in Linux using the hwinfo command?

**[hwinfo][1]** stands for hardware information tool; it is another great utility. It is used to probe the hardware present in the system and to display detailed information about the various hardware components in a human-readable format.

```
# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63
/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
```

### How to view the UUID of a disk partition or file system in Linux using the udevadm command?

udevadm expects a command and command-specific options. It controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms.

```
# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
```

### How to view the UUID of a disk partition or file system in Linux using the tune2fs command?

tune2fs allows system administrators to adjust various tunable file system parameters on ext2/ext3/ext4 file systems in Linux. The current values of these options can be displayed with the -l option.

```
# tune2fs -l /dev/sdc1 | grep UUID
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```

### How to view the UUID of a disk partition or file system in Linux using the dumpe2fs command?

dumpe2fs prints the super block and block group information for the file system present on a device.

```
# dumpe2fs /dev/sdc1 | grep UUID
dumpe2fs 1.43.5 (04-Aug-2017)
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [liujing97](https://github.com/liujing97)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/
@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 14.04 is Reaching the End of Life. Here are Your Options)
[#]: via: (https://itsfoss.com/ubuntu-14-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Ubuntu 14.04 is reaching the end of its life. Here are your options
======
Ubuntu 14.04 reaches its end of life on 30 April 2019. This means that Ubuntu 14.04 users will not get security and maintenance updates after that date.

You won't even get updates for installed applications, and you won't be able to install new applications with the apt command or the Software Center without manually modifying sources.list.

Ubuntu 14.04 was released about five years ago. It is a long-term support (LTS) release of Ubuntu.

[Check your Ubuntu version][1] and see whether you are still on Ubuntu 14.04. If you are, whether on a desktop or a server, you may be wondering what you should do in this situation.
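One quick way to see which release you are on from a terminal is to read /etc/os-release, which is present on all modern Linux distributions (a small sketch, not specific to Ubuntu):

```shell
# Print the distribution name and version; on Ubuntu 14.04 this
# would show "Ubuntu 14.04".
. /etc/os-release
echo "$NAME $VERSION_ID"
```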
I am here to help. Let me tell you what options you have in this situation.

![][2]

### Upgrade to Ubuntu 16.04 LTS (the easiest option)

If you have an internet connection, you can upgrade from Ubuntu 14.04 to Ubuntu 16.04 LTS.

Ubuntu 16.04 is also a long-term support release, and it is supported until April 2021. That means you have two more years before the next upgrade.

I recommend reading this tutorial on [upgrading your Ubuntu version][3]. It was originally written for upgrading Ubuntu 16.04 to Ubuntu 18.04, but the steps apply to your case as well.

### Make a backup and do a fresh install of Ubuntu 18.04 LTS (great for desktop users)

Another option is to back up your Documents, Music, Pictures, Downloads, and any other folders whose data you don't want to lose.

By backup, I mean copying these folders to an external USB disk. In other words, you should have a way to copy the data back onto your computer, because you will be formatting your system.

I recommend this option for desktop users. Ubuntu 18.04 is the current long-term support release, and it will be supported until at least April 2023. You have four years before you are forced into the next upgrade.

### Pay for Extended Security Maintenance and keep using Ubuntu 14.04

This one is suited to enterprise customers. Canonical, the parent company of Ubuntu, offers the Ubuntu Advantage program, where customers pay for phone and email support among other benefits.

Ubuntu Advantage users also get [Extended Security Maintenance][4] (ESM). This program provides security updates even after a given release reaches the end of its life.

It comes at a cost. Server users pay $225 per physical node per year; for desktop users the price is $150 per year. You can see the detailed pricing of the Ubuntu Advantage program [here][5].

### Still using Ubuntu 14.04?

If you are still using Ubuntu 14.04, you should start looking into your options, because you have less than two months left.

In any case, you must not keep using Ubuntu 14.04 after 30 April 2019, because your system will be vulnerable due to the lack of security updates. Not being able to install new applications will be an additional pain.

So, which option will you choose? Upgrading to Ubuntu 16.04 or 18.04, or paying for ESM?

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-14-04-end-of-life/

Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/ubuntu-14-04-end-of-life-featured.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/upgrade-ubuntu-version/
[4]: https://www.ubuntu.com/esm
[5]: https://www.ubuntu.com/support/plans-and-pricing
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,69 +7,68 @@
[#]: via: (https://www.2daygeek.com/how-to-install-and-enable-flatpak-support-on-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to install and enable Flatpak support on Linux?
======

<To proofreader: it seems a similar article has been published before: https://linux.cn/article-10459-1.html>

Currently, we all use our distribution's official package manager to install the packages we need.

In Linux it does this job well, without any issues.

However, it has some limitations, which makes us think about alternative solutions.

By default, we do not get the latest versions of packages from the distribution's official package manager, because those packages were built when the current OS version was built, and they only receive security updates until the next major release.

So, what is the solution for this?

Yes, we have multiple solutions for this, and most of us have already started using some of them.

What are they, and what are their benefits?

* **For Ubuntu based systems:** PPAs
* **For RHEL based systems:** [EPEL Repository][1], [ELRepo Repository][2], [nux-dextop Repository][3], [IUS Community Repo][4], [RPMfusion Repository][5] and [Remi Repository][6]

Using the above repositories, we get the latest packages. These are usually well maintained and recommended by most of the community, but they are outside the distribution's official support and may not always be safe.

In recent years, a few universal packaging formats have emerged and gained a lot of popularity:

* **`Flatpak:`** A distribution-independent package format whose main contributor is the Fedora project team. The Flatpak framework has been adopted by most major Linux distributions.
* **`Snaps:`** Snappy is a universal packaging format, originally designed and built by Canonical for the Ubuntu phone and its operating system. It was later adopted by most distributions.
* **`AppImage:`** AppImage is a portable package format that can run without installation and without root rights.

We have already covered the **[Snap package manager and packaging format][7]** in the past. Today we will discuss the Flatpak packaging format.

### What is Flatpak?

Flatpak (formerly known as X Desktop Group or xdg-app) is a software utility. It offers a universal packaging format that can be used on any Linux distribution.

It provides a sandboxed (isolated) environment to run applications, without affecting other applications or the distribution's core packages. We can also install and run different versions of the same package.

One disadvantage of Flatpak is that, unlike Snap and AppImage, it does not support server operating systems; it works only in a few desktop environments.

For example, if you would like to run two versions of php on your system, you can easily install them and run whichever one you wish.

That is why universal packaging formats have become so popular nowadays.

### How to install Flatpak on Linux?

The Flatpak package is available in most Linux distributions' official repositories, so it can be installed from there.

For **`Fedora`** systems, use the **[DNF command][8]** to install flatpak.

```
$ sudo dnf install flatpak
```

For **`Debian/Ubuntu`** systems, use the **[APT-GET command][9]** or **[APT command][10]** to install flatpak.

```
$ sudo apt install flatpak
```

For older Ubuntu versions:

```
$ sudo add-apt-repository ppa:alexlarsson/flatpak
@ -77,52 +76,52 @@
$ sudo apt update
$ sudo apt install flatpak
```

For **`Arch Linux`** based systems, use the **[Pacman command][11]** to install flatpak.

```
$ sudo pacman -S flatpak
```

For **`RHEL/CentOS`** systems, use the **[YUM command][12]** to install flatpak.

```
$ sudo yum install flatpak
```

For **`openSUSE Leap`** systems, use the **[Zypper command][13]** to install flatpak.

```
$ sudo zypper install flatpak
```

### How to enable Flathub support on Linux?

The Flathub website is an app store for Flatpak packages, where you can find them.

It is a central repository where all the flatpak applications are made available to users.

Run the following command to enable Flathub support on Linux:

```
$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```

Install the Software Flatpak plugin for the GNOME desktop environment:

```
$ sudo apt install gnome-software-plugin-flatpak
```

Also, if you are using the GNOME desktop environment, you can enable the GNOME repository. It contains all the GNOME core applications.

```
$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg --if-not-exists gnome-apps https://sdk.gnome.org/repo-apps/
```

### How to list configured flatpak repositories?

If you would like to view the list of configured flatpak repositories on your system, run the following command:

```
$ flatpak remotes
@ -131,9 +130,9 @@
flathub system
gnome-apps system
```

### How to list the available packages in the configured repositories?

If you would like to view the list of available packages in the configured repositories (it displays everything together: applications and runtimes), run the following command:

```
$ flatpak remote-ls | head -10
|
|||||||
org.gnome.Epiphany gnome-apps
|
org.gnome.Epiphany gnome-apps
|
||||||
```
|
```
|
||||||
|
|
||||||
To list only applications not runtimes.
|
仅列出应用程序:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak remote-ls --app
|
$ flatpak remote-ls --app
|
||||||
```
|
```
|
||||||
|
|
||||||
To list specific repository applications.
|
列出特定的仓库应用程序:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak remote-ls gnome-apps
|
$ flatpak remote-ls gnome-apps
|
||||||
```
|
```
|
||||||
|
|
||||||
### How To Install A Package From flatpak?
|
### 如何从 flatpak 安装包?
|
||||||
|
|
||||||
Run the following command to install a package from flatpak repository.
|
运行以下命令从 flatpak 仓库安装软件包:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ sudo flatpak install flathub com.github.muriloventuroso.easyssh
|
$ sudo flatpak install flathub com.github.muriloventuroso.easyssh
|
||||||
@ -198,24 +197,24 @@ Installing: com.github.muriloventuroso.easyssh.Locale/x86_64/stable from flathub
|
|||||||
Now at af837356b222.
|
Now at af837356b222.
|
||||||
```
|
```
|
||||||
|
|
||||||
All the installed application will be placed in the following location.
|
所有已安装的应用程序都将放在以下位置:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ ls /var/lib/flatpak/app/
|
$ ls /var/lib/flatpak/app/
|
||||||
com.github.muriloventuroso.easyssh
|
com.github.muriloventuroso.easyssh
|
||||||
```
|
```
|
||||||
|
|
||||||
### How To Run The Installed Application?
|
### 如何运行已安装的应用程序?
|
||||||
|
|
||||||
Run the following command to launch the required application. Make sure, you have to replace with your application name instead.
|
运行以下命令以启动所需的应用程序,确保替换为你的应用程序名称:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak run com.github.muriloventuroso.easyssh
|
$ flatpak run com.github.muriloventuroso.easyssh
|
||||||
```
|
```
|
||||||
|
|
||||||
### How To View The Installed Application?
|
### 如何查看已安装的应用程序?
|
||||||
|
|
||||||
Run the following command to view the installed application.
|
运行以下命令来查看已安装的应用程序:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak list
|
$ flatpak list
|
||||||
@ -225,9 +224,9 @@ org.freedesktop.Platform.html5-codecs/x86_64/18.08 system,runtime
|
|||||||
org.gnome.Platform/x86_64/3.30 system,runtime
|
org.gnome.Platform/x86_64/3.30 system,runtime
|
||||||
```
|
```
|
||||||
|
|
||||||
### How To View The Detailed Information About The Installed Application?
|
### 如何查看有关已安装应用程序的详细信息?
|
||||||
|
|
||||||
Run the following command to view the detailed information about the installed application.
|
运行以下命令以查看有关已安装应用程序的详细信息。
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak info com.github.muriloventuroso.easyssh
|
$ flatpak info com.github.muriloventuroso.easyssh
|
||||||
@ -248,29 +247,28 @@ Runtime: org.gnome.Platform/x86_64/3.30
|
|||||||
Sdk: org.gnome.Sdk/x86_64/3.30
|
Sdk: org.gnome.Sdk/x86_64/3.30
|
||||||
```
|
```
|
||||||
|
|
||||||
### How To Update The Installed Application?
|
### 如何更新已安装的应用程序?
|
||||||
|
|
||||||
Run the following command to updated the installed application to latest version.
|
运行以下命令将已安装的应用程序更新到最新版本:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak update
|
$ flatpak update
|
||||||
```
|
```
|
||||||
|
|
||||||
For specific application, use the following format.
|
对于特定应用程序,使用以下格式:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak update com.github.muriloventuroso.easyssh
|
$ flatpak update com.github.muriloventuroso.easyssh
|
||||||
```
|
```
|
||||||
|
|
||||||
### How To Remove The Installed Application?
|
### 如何移除已安装的应用程序?
|
||||||
|
|
||||||
Run the following command to remove the installed application.
|
|
||||||
|
|
||||||
|
运行以下命令来移除已安装的应用程序:
|
||||||
```
|
```
|
||||||
$ sudo flatpak uninstall com.github.muriloventuroso.easyssh
|
$ sudo flatpak uninstall com.github.muriloventuroso.easyssh
|
||||||
```
|
```
|
||||||
|
|
||||||
Go to the man page for more details and options.
|
进入 man 页面以获取更多细节和选项:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ flatpak --help
|
$ flatpak --help
|
||||||
@ -282,7 +280,7 @@

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,41 +7,41 @@
|
|||||||
[#]: via: (https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/)
|
[#]: via: (https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/)
|
||||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||||
|
|
||||||
How To Check The List Of Open Ports In Linux?
|
|
||||||
|
如何检查Linux中的开放端口列表?
|
||||||
======
|
======
|
||||||
|
|
||||||
Recently we had written two articles in the same kind of topic.
|
最近,我们就同一主题写了两篇文章。
|
||||||
|
|
||||||
Those articles helps you to check whether the given ports are open or not in the remote servers.
|
这些文章内容帮助您如何检查远程服务器中给定的端口是否打开。
|
||||||
|
|
||||||
If you want to **[check whether a port is open on the remote Linux system][1]** then navigate to this article.
|
如果您想 **[检查远程 Linux 系统上的端口是否打开][1]** 请点击链接浏览。
|
||||||
|
|
||||||
If you want to **[check whether a port is open on multiple remote Linux system][2]** then navigate to this article.
|
如果您想 **[检查多个远程 Linux 系统上的端口是否打开][2]** 请点击链接浏览。
|
||||||
|
|
||||||
If you would like to **[check multiple ports status on multiple remote Linux system][2]** then navigate to this article.
|
如果您想 **[检查多个远程Linux系统上的多个端口状态][2]** 请点击链接浏览。
|
||||||
|
|
||||||
But this article helps you to check the list of open ports on the local system.
|
但是本文帮助您检查本地系统上的开放端口列表。
|
||||||
|
|
||||||
There are few utilities are available in Linux for this purpose.
|
在 Linux 中很少有用于此目的的实用程序。
|
||||||
|
|
||||||
However, I’m including top four Linux commands to check this.
|
然而,我提供了四个最重要的 Linux 命令来检查这一点。
|
||||||
|
|
||||||
It can be done using the following four commands. These are very famous and widely used by Linux admins.
|
您可以使用以下四个命令来完成这个工作。这些命令是非常出名的并被 Linux 管理员广泛使用。
|
||||||
|
|
||||||
  * **`netstat:`** netstat (“network statistics”) is a command-line tool that displays network connection related information (both incoming and outgoing) such as routing tables, masquerade connections, multicast memberships and a number of network interface statistics.
  * **`nmap:`** Nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. It was designed to rapidly scan large networks.
  * **`ss:`** ss is used to dump socket statistics. It shows information similar to netstat and can display more TCP and state information than other tools.
  * **`lsof:`** lsof stands for List Open Files. It is used to print all the files that are opened by processes.

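Before reaching for any of these tools, a quick connectivity probe can answer the narrower question "is this one port open?". Below is a minimal sketch (not from the original article) that uses bash's `/dev/tcp` pseudo-device; the `port_open` helper name and the port values are examples only.

```shell
#!/usr/bin/env bash
# Minimal sketch (assumes bash with /dev/tcp support): probe local TCP ports
# by attempting a connection instead of parsing tool output.
port_open() {
    # The redirection only succeeds if something accepts the connection.
    (exec 3<> "/dev/tcp/$1/$2") 2>/dev/null
}

for port in 22 80 443; do          # example ports
    if port_open 127.0.0.1 "$port"; then
        echo "port $port: open"
    else
        echo "port $port: closed"
    fi
done
```

This only reports open/closed for one port at a time; the four tools above additionally show which process owns each socket.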
### How To Check The List Of Open Ports In Linux Using netstat Command?

netstat stands for Network Statistics. It is a command-line tool that displays network connection related information (both incoming and outgoing) such as routing tables, masquerade connections, multicast memberships and a number of network interface statistics.

It lists out all the tcp, udp socket connections and the unix socket connections.

It is used for diagnosing network problems in the network and to determine the amount of traffic on the network as a performance measurement.

```
# netstat -tplugn
eth0            1      ff02::1
eth0            1      ff01::1
```

If you would like to check the status of any particular port, then use the following format.

```
# netstat -tplugn | grep :22
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN      1388/sshd
```

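One caveat with `grep :22` as shown above: it also matches ports such as 2200 or 12222. Below is a hedged sketch of a stricter filter (the `match_port` helper is my own name, not part of any tool), anchoring the match so only the exact port survives:

```shell
#!/usr/bin/env bash
# Sketch: anchor the port number so ':22' does not also match ':2200' etc.
# Assumes netstat/ss-style "address:port" columns followed by whitespace.
match_port() {
    grep -E "[:.]${1}([[:space:]]|\$)"
}

# Only the first line matches port 22 exactly.
printf '%s\n' \
    'tcp   LISTEN 0 128 0.0.0.0:22   0.0.0.0:*' \
    'tcp   LISTEN 0 128 0.0.0.0:2200 0.0.0.0:*' | match_port 22
```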
### How To Check The List Of Open Ports In Linux Using ss Command?

ss is used to dump socket statistics. It shows information similar to netstat and can display more TCP and state information than other tools.

```
# ss -lntu
tcp    LISTEN     0      100       :::25                   :::*
tcp    LISTEN     0      128       :::22                   :::*
```

If you would like to check the status of any particular port, then use the following format.

```
# ss -lntu | grep ':25'
tcp    LISTEN     0      100       *:25                    *:*
tcp    LISTEN     0      100       :::25                   :::*
```

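If all you need from `ss -lntu` is the set of listening port numbers, a little awk can reduce the output. Here is a sketch assuming the five-column layout shown above (the `list_listen_ports` name is mine, not a standard command):

```shell
#!/usr/bin/env bash
# Sketch: print just the unique local port numbers of listening sockets.
# Assumes $5 is the "Local Address:Port" column, as in the ss output above.
list_listen_ports() {
    awk '/LISTEN/ { n = split($5, a, ":"); print a[n] }' | sort -un
}

# Feed it live data only when ss is actually available on this system.
if command -v ss >/dev/null 2>&1; then
    ss -lntu | list_listen_ports
fi
```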
### How To Check The List Of Open Ports In Linux Using nmap Command?

Nmap (“Network Mapper”) is an open source tool for network exploration and security auditing. It was designed to rapidly scan large networks, although it works fine against single hosts.

Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.

While Nmap is commonly used for security audits, many systems and network administrators find it useful for routine tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime.

```
# nmap -sTU -O localhost
Nmap done: 1 IP address (1 host up) scanned in 1.93 seconds
```

If you would like to check the status of any particular port, then use the following format.

```
# nmap -sTU -O localhost | grep 123
123/udp  open          ntp
```

### How To Check The List Of Open Ports In Linux Using lsof Command?

lsof shows you the list of open files on the system and the processes that opened them. It also shows other information related to the files.

```
# lsof -i
httpd     13374  apache    3u  IPv4  20337      0t0  TCP *:http (LISTEN)
httpd     13375  apache    3u  IPv4  20337      0t0  TCP *:http (LISTEN)
```

If you would like to check the status of any particular port, then use the following format.

```
# lsof -i:80
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出