This commit is contained in:
zgj 2019-04-17 13:33:16 +08:00
commit 9d70d6294a
152 changed files with 18893 additions and 3787 deletions

View File

@ -0,0 +1,901 @@
[#]: collector: (lujun9972)
[#]: translator: (guevaraya)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10700-1.html)
[#]: subject: (Computer Laboratory Raspberry Pi: Lesson 11 Input02)
[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
计算机实验室之树莓派:课程 11 输入02
======
课程输入 02 是以课程输入 01 为基础讲解的,通过一个简单的命令行实现用户的命令输入和计算机的处理和显示。本文假设你已经具备 [课程 10:输入 01][1] 的操作系统代码基础。
### 1、终端
几乎所有的操作系统都是以字符终端显示启动的。经典的黑底白字,通过键盘输入计算机要执行的命令,然后会提示你拼写错误,或者恰好得到你想要的执行结果。这种方法有两个主要优点:键盘和显示器可以提供简易、健壮的计算机交互机制,几乎所有的计算机系统都采用这个机制,这个也广泛被系统管理员应用。
> 早期的计算一般是在一栋楼里的一个巨型计算机系统,它有很多可以输入命令的“终端”。计算机依次执行不同来源的命令。
让我们分析下真正想要哪些信息:
1. 计算机打开后,显示欢迎信息
2. 计算机启动完成后,显示输入提示符
3. 用户从键盘输入带参数的命令
4. 用户输入回车键或提交按钮
5. 计算机解析命令后执行可用的命令
6. 计算机显示命令的执行结果,过程信息
7. 循环跳转到步骤 2
这样的终端被定义为标准的输入输出设备。用于显示输入的屏幕和打印输出内容的屏幕是同一个(LCTT 译注:最早期的输出打印真是“打印”到打印机/电传机的,而用于输入的终端只是键盘,除非做了回显,否则输出终端是不会显示输入的字符的)。也就是说终端是对字符显示的一个抽象。字符显示中,单个字符是最小的单元,而不是像素。屏幕被划分成固定数量的不同颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用 `DrawCharacter` 方法把它们推送到屏幕上。一旦我们有了字符显示,画一段文本就只是在屏幕上画出一行行字符串。
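在继续之前,可以先用一小段 C 来示意这种“字符 + 颜色”的抽象(仅为帮助理解的示意,其中的结构体和数组声明是假设的,并非 terminal.s 中的实际符号):

```
#include <stdint.h>

/* 每个字符格由一个 ASCII 字符和一个颜色单元组成 */
struct Cell {
    uint8_t ch;      /* ASCII 字符,初始为 0x7f(删除字符) */
    uint8_t colour;  /* 低 4 位是前景色,高 4 位是背景色 */
};

struct Cell terminalBuffer[128 * 128];               /* 128 行历史,每行 128 个字符 */
struct Cell terminalScreen[(1024 / 8) * (768 / 16)]; /* 当前屏幕:128 × 48 个字符格 */
```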
新建文件名为 `terminal.s`,如下:
```
.section .data
.align 4
terminalStart:
.int terminalBuffer
terminalStop:
.int terminalBuffer
terminalView:
.int terminalBuffer
terminalColour:
.byte 0xf
.align 8
terminalBuffer:
.rept 128*128
.byte 0x7f
.byte 0x0
.endr
terminalScreen:
.rept 1024/8 * 768/16
.byte 0x7f
.byte 0x0
.endr
```
这是终端的配置数据文件。我们有两个主要的存储变量:`terminalBuffer` 和 `terminalScreen`。`terminalBuffer` 保存所有显示过的字符,它保存 128 行字符文本(1 行包含 128 个字符)。每个字符由一个 ASCII 字符和一个颜色单元组成,初始值为 0x7f(ASCII 的删除字符)和 0(前景色和背景色均为黑色)。`terminalScreen` 保存当前屏幕显示的字符,它保存 128×48 个字符,初始化值与 `terminalBuffer` 一样。你可能会觉得仅需要 `terminalScreen` 就够了,为什么还要 `terminalBuffer`?其实保留两者有两个好处:
1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。
2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制)
这种独特的技巧在低功耗系统里很常见。画屏是很耗时的操作,因此我们仅在不得已的时候才去执行这个操作。在这个系统里,我们可以任意改变 `terminalBuffer`,然后调用一个仅拷贝屏幕上字节变化的方法。也就是说我们不需要持续画出每个字符,这样可以节省一大段跨行文本的操作时间。
> 你总是需要尝试去设计一个高效的系统:在内容很少变化的情况下,这样的系统会运行得更快。
其他在 `.data` 段的值的含义如下:
* `terminalStart`
写入到 `terminalBuffer` 的第一个字符
* `terminalStop`
写入到 `terminalBuffer` 的最后一个字符
* `terminalView`
表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕
* `terminalColour`
即将被描画的字符颜色
`terminalStart` 需要保存起来的原因是 `terminalBuffer` 是一个环状缓冲区。意思是当缓冲区变满时,末尾位置会回滚覆盖开始位置,这样最后一个字符变成了第一个字符。因此我们需要将 `terminalStart` 往前推进,这样我们才知道缓冲区已经写满了。缓冲区的越界检测实现很简单:如果索引越过缓冲区的末尾,就将索引指向缓冲区的开始位置。环状缓冲区是一个比较常见的存储大量数据的高明方法,往往这些数据中最近的部分比较重要。它允许无限制的写入,只保证最近一些特定数据有效。这常常用于信号处理和数据压缩算法。在我们这里,它允许我们存储 128 行终端记录,超过 128 行也不会有问题。如果不是这样,当超过第 128 行时,我们需要把前 127 行分别向前拷贝一次,这样很浪费时间。
![显示 Hello world 插入到大小为 5 的循环缓冲区的示意图。][2]
> 环状缓冲区是**数据结构**的一个例子。这是一个组织数据的思路,有时我们通过软件实现这种思路。
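下面用一段 C 伪代码示意环状缓冲区的核心操作(变量名是为说明而假设的):写指针越过末尾就回绕到开头,写满之后推进起始指针,覆盖最旧的数据。

```
#include <stdint.h>

#define BUF_CELLS (128 * 128)

static uint16_t buf[BUF_CELLS];
static unsigned start = 0, stop = 0;   /* 对应 terminalStart / terminalStop */

void push_cell(uint16_t cell) {
    buf[stop] = cell;
    stop = (stop + 1) % BUF_CELLS;     /* 越过末尾则回到开头 */
    if (stop == start)                 /* 缓冲区已满:丢弃最旧的一格 */
        start = (start + 1) % BUF_CELLS;
}
```

汇编版里 `terminalStart`、`terminalStop` 保存的是地址而非下标,但回绕逻辑完全相同。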
之前已经提到过 `terminalColour` 几次了。你可以根据你的想法实现终端颜色,但这个文本终端有 16 个前景色和 16 个背景色(这相当于有 16^2 = 256 种组合)。[CGA][3] 终端的颜色定义如下:
表格 1.1 - CGA 颜色编码
| 序号 | 颜色 (R, G, B) |
| ------ | ------------------------|
| 0 | 黑 (0, 0, 0) |
| 1 | 蓝 (0, 0, ⅔) |
| 2 | 绿 (0, ⅔, 0) |
| 3 | 青色 (0, ⅔, ⅔) |
| 4 | 红色 (⅔, 0, 0) |
| 5 | 品红 (⅔, 0, ⅔) |
| 6 | 棕色 (⅔, ⅓, 0) |
| 7 | 浅灰色 (⅔, ⅔, ⅔) |
| 8 | 灰色 (⅓, ⅓, ⅓) |
| 9 | 淡蓝色 (⅓, ⅓, 1) |
| 10 | 淡绿色 (⅓, 1, ⅓) |
| 11 | 淡青色 (⅓, 1, 1) |
| 12 | 淡红色 (1, ⅓, ⅓) |
| 13 | 浅品红 (1, ⅓, 1) |
| 14 | 黄色 (1, 1, ⅓) |
| 15 | 白色 (1, 1, 1) |
我们将前景色保存到颜色的低半字节,背景色保存到颜色的高半字节。除了棕色,其他这些颜色都遵循一种模式:二进制的高位比特代表给每个分量增加 ⅓,其他每个比特代表给各自分量增加 ⅔。这样很容易进行 RGB 颜色转换。
> 棕色作为替代色(黑黄色)既不吸引人也没有什么用处。
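按照上述规律,可以用一小段示意性的 C 代码把 4 位 CGA 颜色号换算成 0~255 的 RGB 分量(仅为演示规律而写,并非项目代码):

```
/* CGA 编码为 IRGB:bit3 = 亮度,bit2 = 红,bit1 = 绿,bit0 = 蓝 */
void cga_to_rgb(unsigned code, unsigned char rgb[3]) {
    unsigned char base = (code & 0x8) ? 85 : 0;   /* 高位比特:每个分量 +1/3(85/255) */
    rgb[0] = base + ((code & 0x4) ? 170 : 0);     /* 红分量 +2/3 */
    rgb[1] = base + ((code & 0x2) ? 170 : 0);     /* 绿分量 +2/3 */
    rgb[2] = base + ((code & 0x1) ? 170 : 0);     /* 蓝分量 +2/3 */
    if (code == 6) rgb[1] = 85;                   /* 特例:棕色把绿分量降为 1/3 */
}
```

例如颜色 9(淡蓝色):基础分量各加 ⅓,再给蓝分量加 ⅔,得到 (⅓, ⅓, 1),与表格 1.1 一致。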
我们需要一个方法 `TerminalColour`,它读取 4 位的颜色编码,然后用等效的 16 位颜色参数调用 `SetForeColour`。尝试自己实现一下。如果你感觉麻烦或者还没有完成屏幕系列课程,我们的实现如下:
```
.section .text
TerminalColour:
teq r0,#6
ldreq r0,=0x02B5
beq SetForeColour
tst r0,#0b1000
ldrne r1,=0x52AA
moveq r1,#0
tst r0,#0b0100
addne r1,#0x15
tst r0,#0b0010
addne r1,#0x540
tst r0,#0b0001
addne r1,#0xA800
mov r0,r1
b SetForeColour
```
### 2、文本显示
我们的终端需要的第一个真正的方法是 `TerminalDisplay`,它把当前的数据从 `terminalBuffer` 拷贝到 `terminalScreen` 和实际的屏幕。如上所述,这个方法的开销必须尽可能小,因为我们需要频繁调用它。它主要比较 `terminalBuffer` 和 `terminalScreen` 中的文本,然后只拷贝有差异的字节。请记住 `terminalBuffer` 是一个环状缓冲区,这种情况下,就是比较从 `terminalView` 到 `terminalStop` 的部分,或者最多 128×48 个字符,以先到者为准。如果我们遇到 `terminalStop`,我们将会假定在这之后的所有字符是 7f<sub>16</sub>(ASCII 删除字符),颜色为 0(黑色的前景色和背景色)。
让我们看看必须要做的事情(清单之后附有一段等价的 C 伪代码示意):
1. 加载 `terminalView`、`terminalStop` 和 `terminalScreen` 的地址。
2. 对于每一行:
1. 对于每一列:
1. 如果 `terminalView` 不等于 `terminalStop`,根据 `terminalView` 加载当前字符和颜色
2. 否则加载 0x7f(删除字符)和颜色 0
3. 从 `terminalScreen` 加载当前的字符和颜色
4. 如果字符和颜色相同,直接跳转到第 10 步
5. 存储字符和颜色到 `terminalScreen`
6. 用 `r0` 作为背景色参数调用 `TerminalColour`
7. 用 `r0 = 0x7f`(ASCII 删除字符,即一个实心块)、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
8. 用 `r0` 作为前景色参数调用 `TerminalColour`
9. 用 `r0 = 字符`、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
10. 将 `terminalScreen` 位置参数累加 2
11. 如果 `terminalView` 不等于 `terminalStop`,将 `terminalView` 位置参数累加 2
12. 如果 `terminalView` 位置已经是缓冲区的末尾,就将它设置为缓冲区的开始位置
13. x 坐标增加 8
2. y 坐标增加 16
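在动手写汇编之前,可以先对照下面这段等价的 C 伪代码理解整个流程(仅为示意,并非项目实际代码;变量与函数名对应 terminal.s 中的含义,这里以 extern 声明代替):

```
#include <stdint.h>

extern uint16_t *view, *stop;                  /* terminalView / terminalStop */
extern uint16_t *screenBuf;                    /* terminalScreen */
extern uint16_t terminalBuffer[128 * 128];
extern void TerminalColour(int code);
extern void DrawCharacter(int ch, int x, int y);

void TerminalDisplay(void) {
    uint16_t *screen = screenBuf;
    for (int y = 0; y < 768; y += 16) {            /* 每行字符 16 像素高 */
        for (int x = 0; x < 1024; x += 8) {        /* 每个字符 8 像素宽 */
            /* 步骤 1~2:取出待显示的单元格;越过 terminalStop 之后一律是黑色删除字符 */
            uint16_t cell = (view != stop) ? *view : 0x007f;
            if (cell != *screen) {                 /* 步骤 3~4:只重画有变化的格子 */
                *screen = cell;                    /* 步骤 5 */
                TerminalColour(cell >> 12);        /* 步骤 6:颜色字节的高半字节是背景色 */
                DrawCharacter(0x7f, x, y);         /* 步骤 7:先画一个实心块作背景 */
                TerminalColour((cell >> 8) & 0xf); /* 步骤 8:低半字节是前景色 */
                DrawCharacter(cell & 0x7f, x, y);  /* 步骤 9:再画字符本身 */
            }
            screen++;                              /* 步骤 10 */
            if (view != stop) view++;              /* 步骤 11 */
            if (view == terminalBuffer + 128 * 128)
                view = terminalBuffer;             /* 步骤 12:环状缓冲区回绕 */
        }
    }
}
```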
尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了:
1、我这里的变量有点乱。为了方便起见,我用 `taddr` 存储 `terminalBuffer` 的末尾位置。
```
.globl TerminalDisplay
TerminalDisplay:
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
x .req r4
y .req r5
char .req r6
col .req r7
screen .req r8
taddr .req r9
view .req r10
stop .req r11
ldr taddr,=terminalStart
ldr view,[taddr,#terminalView - terminalStart]
ldr stop,[taddr,#terminalStop - terminalStart]
add taddr,#terminalBuffer - terminalStart
add taddr,#128*128*2
mov screen,taddr
```
2、从 `yLoop` 开始运行。
```
mov y,#0
yLoop$:
```
2.1、从 `xLoop` 开始运行。
```
mov x,#0
xLoop$:
```
2.1.1、为了方便起见,我把字符和颜色同时加载到 `char` 变量了
```
teq view,stop
ldrneh char,[view]
```
2.1.2、这行是对上面一行的补充说明:读取黑色的删除字符
```
moveq char,#0x7f
```
2.1.3、为了简便我把字符和颜色同时加载到 `col` 里。
```
ldrh col,[screen]
```
2.1.4、 现在我用 `teq` 指令检查是否有数据变化
```
teq col,char
beq xLoopContinue$
```
2.1.5、我可以容易的保存当前值
```
strh char,[screen]
```
2.1.6、我用移位指令 `lsr` 和 `and` 指令切分 `char` 变量,将颜色放到 `col` 变量,字符放到 `char` 变量,然后再用移位指令 `lsr` 获取背景色,之后调用 `TerminalColour`
```
lsr col,char,#8
and char,#0x7f
lsr r0,col,#4
bl TerminalColour
```
2.1.7、写入一个彩色的删除字符
```
mov r0,#0x7f
mov r1,x
mov r2,y
bl DrawCharacter
```
2.1.8、用 `and` 指令获取 `col` 变量的低半字节,然后调用 `TerminalColour`
```
and r0,col,#0xf
bl TerminalColour
```
2.1.9、写入我们需要的字符
```
mov r0,char
mov r1,x
mov r2,y
bl DrawCharacter
```
2.1.10、自增屏幕指针
```
xLoopContinue$:
add screen,#2
```
2.1.11、如果可能自增 `view` 指针
```
teq view,stop
addne view,#2
```
2.1.12、很容易检测 `view` 指针是否越界到缓冲区的末尾,因为缓冲区的地址保存在 `taddr` 变量里
```
teq view,taddr
subeq view,#128*128*2
```
2.1.13、 如果还有字符需要显示,我们就需要自增 `x` 变量然后到 `xLoop` 循环执行
```
add x,#8
teq x,#1024
bne xLoop$
```
2.2、 如果还有更多的字符显示我们就需要自增 `y` 变量,然后到 `yLoop` 循环执行
```
add y,#16
teq y,#768
bne yLoop$
```
3、不要忘记最后清除变量
```
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq x
.unreq y
.unreq char
.unreq col
.unreq screen
.unreq taddr
.unreq view
.unreq stop
```
### 3、行打印
现在我们有了自己的 `TerminalDisplay` 方法,它可以自动把 `terminalBuffer` 的内容显示到 `terminalScreen`,因此理论上我们可以画出文本。但是实际上我们还没有任何基于字符显示的例程。首先,一个快速而容易上手的方法便是 `TerminalClear`,它可以彻底清空终端。这个方法不用循环也很容易实现。尝试分析下面的方法,应该不难:
```
.globl TerminalClear
TerminalClear:
ldr r0,=terminalStart
add r1,r0,#terminalBuffer-terminalStart
str r1,[r0]
str r1,[r0,#terminalStop-terminalStart]
str r1,[r0,#terminalView-terminalStart]
mov pc,lr
```
现在我们需要构造一个字符显示的基础方法:`Print` 函数。它将保存在 `r0` 中的字符串(长度保存在 `r1` 中)简单地写到屏幕上。有一些特殊字符需要特别注意,还需要专门的操作来确保 `terminalView` 是最新的。我们来分析一下需要做什么(步骤清单之后附有一段等价的 C 伪代码示意):
1. 检查字符串的长度是否为 0,如果是就直接返回
2. 加载 `terminalStop` 和 `terminalView`
3. 计算出 `terminalStop` 的 x 坐标
4. 对每一个字符进行如下操作:
1. 检查字符是否为换行符
2. 如果是的话,自增 `bufferStop` 直到行末,同时写入黑色删除字符
3. 否则拷贝当前 `terminalColour` 颜色的字符
4. 检查是否在行末
5. 如果是,检查从 `terminalView` 到 `terminalStop` 之间的字符数是否大于一屏
6. 如果是,`terminalView` 自增一行
7. 检查 `terminalView` 是否到达缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
8. 检查 `terminalStop` 是否到达缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
9. 检查 `terminalStop` 是否等于 `terminalStart`,如果是的话将 `terminalStart` 自增一行
10. 检查 `terminalStart` 是否到达缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
5. 将 `terminalStop` 和 `terminalView` 保存回内存
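动手之前,可以先看一段与上述步骤对应的 C 伪代码(仅为示意:索引以“字符格”为单位,用取模运算表达环状回绕;名称沿用正文含义,并非实际代码):

```
#include <stdint.h>

#define BUF_CELLS (128 * 128)            /* 128 行 × 128 列 */
#define SCREEN_CELLS (128 * (768 / 16))  /* 一屏 128 × 48 格 */

extern uint16_t buf[BUF_CELLS];          /* terminalBuffer */
extern unsigned start, stop, view;       /* 对应 terminalStart/Stop/View 的格下标 */
extern uint8_t terminalColour;

void Print(const char *s, int len) {
    if (len == 0) return;                            /* 步骤 1 */
    unsigned x = stop % 128;                         /* 步骤 3:当前列号 */
    for (int i = 0; i < len; i++) {                  /* 步骤 4 */
        if ((s[i] & 0x7f) == '\n') {                 /* 4.1~4.2:用黑色删除字符填满本行 */
            while (x < 128) { buf[stop] = 0x007f; stop = (stop + 1) % BUF_CELLS; x++; }
        } else {                                     /* 4.3:写入字符和当前颜色 */
            buf[stop] = (terminalColour << 8) | (s[i] & 0x7f);
            stop = (stop + 1) % BUF_CELLS; x++;
        }
        if (x == 128) {                              /* 4.4:到达行末 */
            x = 0;
            if ((stop - view + BUF_CELLS) % BUF_CELLS >= SCREEN_CELLS)
                view = (view + 128) % BUF_CELLS;     /* 4.5~4.7:超过一屏就滚动一行 */
            if (stop == start)                       /* 4.9~4.10:追上 start 则丢弃最旧一行 */
                start = (start + 128) % BUF_CELLS;
        }
    }
    /* 步骤 5:汇编版在此把 stop 和 view 写回内存 */
}
```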
试一下自己去实现。我们的方案提供如下:
1、这个是 `Print` 函数开头对长度为 0 的字符串的快速检查代码:
```
.globl Print
Print:
teq r1,#0
moveq pc,lr
```
2、这里我做了很多配置。`bufferStart` 代表 `terminalStart`,`bufferStop` 代表 `terminalStop`,`view` 代表 `terminalView`,`taddr` 代表 `terminalBuffer` 的末尾地址。
```
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
bufferStart .req r4
taddr .req r5
x .req r6
string .req r7
length .req r8
char .req r9
bufferStop .req r10
view .req r11
mov string,r0
mov length,r1
ldr taddr,=terminalStart
ldr bufferStop,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
ldr bufferStart,[taddr]
add taddr,#terminalBuffer-terminalStart
add taddr,#128*128*2
```
3、和通常一样,巧妙的对齐技巧让许多事情变得容易。由于 `terminalBuffer` 是对齐的,每个字符的 x 坐标可以由其地址的低 8 位除以 2 得到。
```
and x,bufferStop,#0xfe
lsr x,#1
```
4.1、我们需要检查换行符
```
charLoop$:
ldrb char,[string]
and char,#0x7f
teq char,#'\n'
bne charNormal$
```
4.2、循环执行直到行末,写入 0x7f(黑色删除字符)
```
mov r0,#0x7f
clearLine$:
strh r0,[bufferStop]
add bufferStop,#2
add x,#1
cmp x,#128
blt clearLine$
b charLoopContinue$
```
4.3、将字符串的当前字符和 `terminalColour` 存储到 `terminalBuffer` 的末尾,然后将位置和 x 变量自增
```
charNormal$:
strb char,[bufferStop]
ldr r0,=terminalColour
ldrb r0,[r0]
strb r0,[bufferStop,#1]
add bufferStop,#2
add x,#1
```
4.4、检查 x 是否为行末128
```
charLoopContinue$:
cmp x,#128
blt noScroll$
```
4.5、设置 x 为 0,然后检查我们是否已经显示超过一屏。请记住,我们用的是环状缓冲区,因此如果 `bufferStop` 和 `view` 之间的差是负值,我们实际上是环绕了缓冲区。
```
mov x,#0
subs r0,bufferStop,view
addlt r0,#128*128*2
cmp r0,#128*(768/16)*2
```
4.6、将 `view` 的地址增加一行的字节数
```
addge view,#128*2
```
4.7、 如果 `view` 地址是缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
teq view,taddr
subeq view,taddr,#128*128*2
```
4.8、如果 `stop` 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
noScroll$:
teq bufferStop,taddr
subeq bufferStop,taddr,#128*128*2
```
4.9、检查 `bufferStop` 是否等于 `bufferStart`。 如果等于增加一行到 `bufferStart`
```
teq bufferStop,bufferStart
addeq bufferStart,#128*2
```
4.10、如果 `start` 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
teq bufferStart,taddr
subeq bufferStart,taddr,#128*128*2
```
循环执行直到字符串结束
```
subs length,#1
add string,#1
bgt charLoop$
```
5、保存变量然后返回
```
charLoopBreak$:
sub taddr,#128*128*2
sub taddr,#terminalBuffer-terminalStart
str bufferStop,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
str bufferStart,[taddr]
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq bufferStart
.unreq taddr
.unreq x
.unreq string
.unreq length
.unreq char
.unreq bufferStop
.unreq view
```
这个方法允许我们打印任意字符到屏幕。然而我们用了颜色变量,但实际上从没有设置过它。一般终端用特殊的组合字符来修改颜色。如 ASCII 转义字符(1b<sub>16</sub>)后面跟着一个 0 - f 的十六进制数,就可以设置前景色为对应的 CGA 颜色号。如果你想自己尝试实现,可以参考下载页面中我的详细例子;下面也给出一个思路示意。
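这段假设性的 C 代码(正文并未给出实现,仅示意思路)展示了如何在 `Print` 的字符循环中解析这样的转义序列:

```
#include <stdint.h>

extern uint8_t terminalColour;

/* 假设性的辅助函数:若 s[i] 处是“ESC(0x1b)+ 一个十六进制位(0-f)”,
   则把前景色改为对应的 CGA 颜色号,返回消耗的字符数;否则返回 0 */
static int handle_escape(const char *s, int len, int i) {
    if (s[i] == 0x1b && i + 1 < len) {
        char d = s[i + 1];
        int code = (d >= 'a') ? d - 'a' + 10 : d - '0';          /* '0'~'9'、'a'~'f' */
        terminalColour = (terminalColour & 0xf0) | (code & 0xf); /* 只改低半字节(前景色) */
        return 2;   /* 转义序列本身不写入缓冲区 */
    }
    return 0;
}
```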
### 4、行输入
现在我们有了一个可以打印和显示文本的输出终端。这仅仅完成了一半,我们还需要输入。我们想实现一个方法:`ReadLine`,可以保存用户输入的一行文本,文本位置由 `r0` 给出,最大的长度由 `r1` 给出,在 `r0` 中返回字符串的长度。棘手的是,用户输入字符的时候需要回显,同时还要有退格键的删除功能和回车键的提交功能。用户还需要一个闪烁的下划线,来表示计算机正在等待输入。这些完全合理的要求让构造这个方法更具有挑战性。满足这些需求的一个办法是,把用户输入的文本和它的长度存储到内存的某个地方,然后在每次重绘时把 `terminalStop` 重置回开始的位置再调用 `Print`。也就是说我们只需要在内存中维护一个字符串,然后围绕它构造我们自己的打印流程。
> 按照惯例,在许多编程语言中,任意程序都可以访问 stdin 和 stdout,它们连接到终端的输入和输出流。在图形界面程序中其实也可以进行同样的操作,但实际上很少用到。
让我们看看 `ReadLine` 做了哪些事情(清单之后附有对应的 C 伪代码示意):
1. 如果字符串可保存的最大长度为 0,直接返回
2. 检索 `terminalStop` 和 `terminalView` 的当前值
3. 如果字符串的最大长度大于缓冲区的一半,就把最大长度设置为缓冲区的一半
4. 将最大长度减 1,为闪烁的光标或结束符留出位置
5. 向字符串写入一个下划线
6. 将 `terminalView` 和 `terminalStop` 的值写回内存
7. 调用 `Print` 打印当前字符串
8. 调用 `TerminalDisplay`
9. 调用 `KeyboardUpdate`
10. 调用 `KeyboardGetChar`
11. 如果是一个换行符,直接跳转到第 16 步
12. 如果是一个退格键,将字符串长度减 1(如果其大于 0)
13. 如果是一个普通字符,且字符串长度小于最大值,就将它写入字符串
14. 如果字符串以下划线结束,就写入一个空格,否则写入下划线
15. 跳转到第 6 步
16. 在字符串的末尾写入一个换行符
17. 调用 `Print` 和 `TerminalDisplay`
18. 用结束符替换换行符
19. 返回字符串的长度
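下面是一段与上述步骤对应的 C 伪代码示意(`restore_terminal` 是为说明而假设的辅助函数,表示第 6 步“重置 `terminalStop` 和 `terminalView`”;其余函数即正文中的方法):

```
extern void Print(const char *s, int len);
extern void TerminalDisplay(void);
extern void KeyboardUpdate(void);
extern int KeyboardGetChar(void);
extern void restore_terminal(void);   /* 假设的辅助函数 */

int ReadLine(char *buf, int maxLen) {
    if (maxLen == 0) return 0;                        /* 步骤 1 */
    if (maxLen > 128 * 64) maxLen = 128 * 64;         /* 步骤 3:不超过缓冲区的一半 */
    maxLen--;                                         /* 步骤 4:留出光标/结束符的位置 */
    int len = 0;
    buf[len] = '_';                                   /* 步骤 5:闪烁的光标 */
    for (;;) {
        restore_terminal();                           /* 步骤 6 */
        Print(buf, len + 1);                          /* 步骤 7:多打印 1 个光标字符 */
        TerminalDisplay();                            /* 步骤 8 */
        KeyboardUpdate();                             /* 步骤 9 */
        int c = KeyboardGetChar();                    /* 步骤 10 */
        if (c == '\n') break;                         /* 步骤 11 */
        if (c == '\b') { if (len > 0) len--; }        /* 步骤 12:退格 */
        else if (c != 0 && len < maxLen) buf[len++] = c;  /* 步骤 13 */
        buf[len] = (buf[len] == '_') ? ' ' : '_';     /* 步骤 14:光标闪烁 */
    }
    buf[len] = '\n';                                  /* 步骤 16 */
    restore_terminal(); Print(buf, len + 1); TerminalDisplay();  /* 步骤 17 */
    buf[len] = '\0';                                  /* 步骤 18 */
    return len;                                       /* 步骤 19 */
}
```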
为了方便读者理解并自己实现,我们的实现提供如下:
1. 快速处理长度为 0 的情况
```
.globl ReadLine
ReadLine:
teq r1,#0
moveq r0,#0
moveq pc,lr
```
2、考虑到接下来的用途,我们一开始做了很多初始化动作。`input` 代表 `terminalStop` 的值,`view` 代表 `terminalView`,`length` 初始为 0。
```
string .req r4
maxLength .req r5
input .req r6
taddr .req r7
length .req r8
view .req r9
push {r4,r5,r6,r7,r8,r9,lr}
mov string,r0
mov maxLength,r1
ldr taddr,=terminalStart
ldr input,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
mov length,#0
```
3、我们必须检查异常大的读操作,我们不能处理超过 `terminalBuffer` 大小的输入(理论上可行,但是 `terminalStart` 会移动越过存储的 `terminalStop`,会有很多问题)。
```
cmp maxLength,#128*64
movhi maxLength,#128*64
```
4、由于用户需要一个闪烁的光标,我们需要一个备用字符,而在理想状况下,还要在这个字符串后面放一个结束符。
```
sub maxLength,#1
```
5、写入一个下划线让用户知道我们可以输入了。
```
mov r0,#'_'
strb r0,[string,length]
```
6、保存 `terminalStop` 和 `terminalView`。这对于每次循环重置终端很重要,因为 `Print` 会修改这些变量。严格来讲 `Print` 也可能修改 `terminalStart`,但那是不可逆的。
```
readLoop$:
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
```
7、写入当前的输入。由于有下划线,所以字符串长度要加 1
```
mov r0,string
mov r1,length
add r1,#1
bl Print
```
8、拷贝下一个文本到屏幕
```
bl TerminalDisplay
```
9、获取最近一次键盘输入
```
bl KeyboardUpdate
```
10、检索键盘输入键值
```
bl KeyboardGetChar
```
11、如果输入的是回车键,就跳出循环;如果是结束符(表示没有新按键),就跳去处理光标;如果是退格键,就跳去处理删除,否则按普通字符处理。
```
teq r0,#'\n'
beq readLoopBreak$
teq r0,#0
beq cursor$
teq r0,#'\b'
bne standard$
```
12、从 `length` 里面删除一个字符
```
delete$:
cmp length,#0
subgt length,#1
b cursor$
```
13、写回一个普通字符
```
standard$:
cmp length,maxLength
bge cursor$
strb r0,[string,length]
add length,#1
```
14、加载最近的一个字符,如果不是下划线就修改为下划线,如果是下划线则修改为空格
```
cursor$:
ldrb r0,[string,length]
teq r0,#'_'
moveq r0,#' '
movne r0,#'_'
strb r0,[string,length]
```
15、循环执行,直到用户按下回车键
```
b readLoop$
readLoopBreak$:
```
16、在字符串的结尾处存入一个新行字符
```
mov r0,#'\n'
strb r0,[string,length]
```
17、重置 `terminalView` 和 `terminalStop`,然后调用 `Print` 和 `TerminalDisplay` 显示最终的输入
```
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
mov r0,string
mov r1,length
add r1,#1
bl Print
bl TerminalDisplay
```
18、写入一个结束符
```
mov r0,#0
strb r0,[string,length]
```
19、返回长度
```
mov r0,length
pop {r4,r5,r6,r7,r8,r9,pc}
.unreq string
.unreq maxLength
.unreq input
.unreq taddr
.unreq length
.unreq view
```
### 5、终端机器进化
现在我们的终端理论上可以和用户交互了。最显而易见的事情就是拿它测试一下!删除 `main.s` 中 `bl UsbInitialise` 后面的代码,替换为如下内容:
```
reset$:
mov sp,#0x8000
bl TerminalClear
ldr r0,=welcome
mov r1,#welcomeEnd-welcome
bl Print
loop$:
ldr r0,=prompt
mov r1,#promptEnd-prompt
bl Print
ldr r0,=command
mov r1,#commandEnd-command
bl ReadLine
teq r0,#0
beq loopContinue$
mov r4,r0
ldr r5,=command
ldr r6,=commandTable
ldr r7,[r6,#0]
ldr r9,[r6,#4]
commandLoop$:
ldr r8,[r6,#8]
sub r1,r8,r7
cmp r1,r4
bgt commandLoopContinue$
mov r0,#0
commandName$:
ldrb r2,[r5,r0]
ldrb r3,[r7,r0]
teq r2,r3
bne commandLoopContinue$
add r0,#1
teq r0,r1
bne commandName$
ldrb r2,[r5,r0]
teq r2,#0
teqne r2,#' '
bne commandLoopContinue$
mov r0,r5
mov r1,r4
mov lr,pc
mov pc,r9
b loopContinue$
commandLoopContinue$:
add r6,#8
mov r7,r8
ldr r9,[r6,#4]
teq r9,#0
bne commandLoop$
ldr r0,=commandUnknown
mov r1,#commandUnknownEnd-commandUnknown
ldr r2,=formatBuffer
ldr r3,=command
bl FormatString
mov r1,r0
ldr r0,=formatBuffer
bl Print
loopContinue$:
bl TerminalDisplay
b loop$
echo:
cmp r1,#5
movle pc,lr
add r0,#5
sub r1,#5
b Print
ok:
teq r1,#5
beq okOn$
teq r1,#6
beq okOff$
mov pc,lr
okOn$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'n'
movne pc,lr
mov r1,#0
b okAct$
okOff$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'f'
ldreqb r2,[r0,#5]
teqeq r2,#'f'
movne pc,lr
mov r1,#1
okAct$:
mov r0,#16
b SetGpio
.section .data
.align 2
welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
welcomeEnd:
.align 2
prompt: .ascii "\n> "
promptEnd:
.align 2
command:
.rept 128
.byte 0
.endr
commandEnd:
.byte 0
.align 2
commandUnknown: .ascii "Command `%s' was not recognised.\n"
commandUnknownEnd:
.align 2
formatBuffer:
.rept 256
.byte 0
.endr
formatEnd:
.align 2
commandStringEcho: .ascii "echo"
commandStringReset: .ascii "reset"
commandStringOk: .ascii "ok"
commandStringCls: .ascii "cls"
commandStringEnd:
.align 2
commandTable:
.int commandStringEcho, echo
.int commandStringReset, reset$
.int commandStringOk, ok
.int commandStringCls, TerminalClear
.int commandStringEnd, 0
```
这块代码实现了一个简易的命令行操作系统,支持命令:`echo`、`reset`、`ok` 和 `cls`。`echo` 把任意文本拷贝到终端,`reset` 命令会在系统出现问题时复位操作系统,`ok` 有两种形式(`ok on` 和 `ok off`),用于点亮或熄灭 OK 指示灯,最后 `cls` 调用 `TerminalClear` 清空终端。
试试树莓派的代码吧。如果遇到问题,请参照问题集锦页面吧。
如果运行正常,祝贺你!你已经完成了一个基本终端操作系统,以及输入系列的全部课程。很遗憾教程先讲到这里,但是我希望将来能制作更多教程。有问题请反馈至 awc32@cam.ac.uk。
你已经建立了一个简易的终端操作系统。我们的代码在 `commandTable` 中构造了一个可用命令的表格。表格的每个条目由两个整数组成:一个是命令字符串的地址,一个是该命令代码的执行入口地址。最后一个条目是 `commandStringEnd`,其入口为 0。尝试实现你自己的命令,可以参照已有的函数,建立一个新的。函数的参数 `r0` 是用户输入的命令地址,`r1` 是其长度,你可以用它们把输入值传给你的命令。也许你想做一个计算器程序,或许是一个绘图程序,或者国际象棋。不管你有什么点子,让它跑起来!
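作为参考,下面用 C 示意 `commandTable` 的结构和 `main.s` 中查表分发的逻辑(仅为帮助理解的示意,函数名对应汇编中的标号,并非实际代码):

```
typedef void (*command_fn)(char *cmd, int len);

/* 这些函数对应汇编中的 echo、reset$、ok、TerminalClear 标号 */
extern void echo(char *cmd, int len);
extern void reset(char *cmd, int len);
extern void ok(char *cmd, int len);
extern void cls(char *cmd, int len);

struct command { const char *name; command_fn fn; };

static const struct command commandTable[] = {
    { "echo",  echo },
    { "reset", reset },
    { "ok",    ok },
    { "cls",   cls },
    { 0, 0 },                       /* 结束标记:入口为 0 */
};

/* 查表并分发:输入的第一个单词与命令名完全匹配时调用对应函数 */
void dispatch(char *cmd, int len) {
    for (const struct command *c = commandTable; c->fn; c++) {
        int n = 0;
        while (c->name[n] && n < len && c->name[n] == cmd[n]) n++;
        if (!c->name[n] && (n == len || cmd[n] == ' ')) { c->fn(cmd, len); return; }
    }
    /* 未找到:打印 "Command ... was not recognised" */
}
```

`main.s` 中 `commandLoop$` 的逐字符比较与这里的 `while` 循环相对应。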
--------------------------------------------------------------------------------
via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
作者:[Alex Chadwick][a]
选题:[lujun9972][b]
译者:[guevaraya](https://github.com/guevaraya)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.cl.cam.ac.uk
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10676-1.html
[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter

View File

@ -1,25 +1,26 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10698-1.html)
[#]: subject: (How To Set Password Policies In Linux)
[#]: via: (https://www.ostechnix.com/how-to-set-password-policies-in-linux/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)
如何在 Linux 系统中设置密码策略
如何设置 Linux 系统的密码策略
======
![](https://www.ostechnix.com/wp-content/uploads/2016/03/How-To-Set-Password-Policies-In-Linux-720x340.jpg)
虽然 Linux 的设计是安全的,但还是存在许多安全漏洞的风险弱密码就是其中之一。作为系统管理员,你必须为用户提供一个强密码。因为大部分的系统漏洞就是由于弱密码而引发的。本教程描述了在基于 DEB 系统的 Linux比如 Debian, Ubuntu, Linux Mint 等和基于 RPM 系统的 Linux比如 RHEL, CentOS, Scientific Linux 等的系统下设置像**密码长度****密码复杂度****密码有效期**等密码策略。  
虽然 Linux 的设计是安全的,但还是存在许多安全漏洞的风险弱密码就是其中之一。作为系统管理员,你必须为用户提供一个强密码。因为大部分的系统漏洞就是由于弱密码而引发的。本教程描述了在基于 DEB 系统的 Linux比如 Debian、Ubuntu、Linux Mint 等和基于 RPM 系统的 Linux比如 RHEL、CentOS、Scientific Linux 等的系统下设置像**密码长度**、**密码复杂度**、**密码有效期**等密码策略。
### 在基于 DEB 的系统中设置密码长度
默认情况下,所有的 Linux 操作系统要求用户**密码长度最少6个字符**。我强烈建议不要低于这个限制。并且不要使用你的真实名称、父母、配偶、孩子的名字,或者你的生日作为密码。即便是一个黑客新手,也可以很快地破解这类密码。一个好的密码必须是至少 6 个字符,并且包含数字大写字母和特殊符号。
默认情况下,所有的 Linux 操作系统要求用户**密码长度最少 6 个字符**。我强烈建议不要低于这个限制。并且不要使用你的真实名称、父母、配偶、孩子的名字,或者你的生日作为密码。即便是一个黑客新手,也可以很快地破解这类密码。一个好的密码必须是至少 6 个字符,并且包含数字大写字母和特殊符号。
通常地,在基于 DEB 的操作系统中,密码和身份认证相关的配置文件被存储在 **/etc/pam.d/** 目录中。
通常地,在基于 DEB 的操作系统中,密码和身份认证相关的配置文件被存储在 `/etc/pam.d/` 目录中。
设置最小密码长度,编辑 **/etc/pam.d/common-password** 文件;
设置最小密码长度,编辑 `/etc/pam.d/common-password` 文件;
```
$ sudo nano /etc/pam.d/common-password
@ -33,7 +34,7 @@ password [success=2 default=ignore] pam_unix.so obscure sha512
![][2]
在末尾添加额外的文字:**minlen=8**。在这里我设置的最小密码长度为 **8**
在末尾添加额外的文字:`minlen=8`。在这里我设置的最小密码长度为 `8`
```
password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
@ -43,15 +44,15 @@ password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
保存并关闭该文件。这样一来,用户现在不能设置小于 8 个字符的密码。
### 在基于RPM的系统中设置密码长度
### 在基于 RPM 的系统中设置密码长度
**在 RHEL, CentOS, Scientific Linux 7.x** 系统中, 以root身份执行下面的命令来设置密码长度。
**在 RHEL、CentOS、Scientific Linux 7.x** 系统中, 以 root 身份执行下面的命令来设置密码长度。
```
# authconfig --passminlen=8 --update
```
查看最小密码长度, 执行:
查看最小密码长度执行:
```
# grep "^minlen" /etc/security/pwquality.conf
@ -63,7 +64,7 @@ password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
minlen = 8
```
**在 RHEL, CentOS, Scientific Linux 6.x** 系统中, 编辑 **/etc/pam.d/system-auth** 文件:
**在 RHEL、CentOS、Scientific Linux 6.x** 系统中,编辑 `/etc/pam.d/system-auth` 文件:
```
# nano /etc/pam.d/system-auth
@ -77,11 +78,11 @@ password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
![](https://www.ostechnix.com/wp-content/uploads/2016/03/root@server_003-3.jpg)
在以上所有设置中,最小密码长度是 **8** 个字符。
如上设置中,最小密码长度是 `8` 个字符。
### 在基于DEB的系统中设置密码复杂度
### 在基于 DEB 的系统中设置密码复杂度
此设置会强制要求密码中应该包含多少类型,比如大写字母小写字母和其他字符。
此设置会强制要求密码中应该包含多少类型,比如大写字母小写字母和其他字符。
首先,用下面命令安装密码质量检测库:
@ -89,13 +90,13 @@ password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
$ sudo apt-get install libpam-pwquality
```
之后,编辑 **/etc/pam.d/common-password** 文件:
之后,编辑 `/etc/pam.d/common-password` 文件:
```
$ sudo nano /etc/pam.d/common-password
```
为了设置密码中至少有一个**大写字母**,则在下面这行的末尾添加文字 **ucredit=-1**
为了设置密码中至少有一个**大写字母**,则在下面这行的末尾添加文字 `ucredit=-1`
```
password requisite pam_pwquality.so retry=3 ucredit=-1
@ -115,9 +116,9 @@ password requisite pam_pwquality.so retry=3 dcredit=-1
password requisite pam_pwquality.so retry=3 ocredit=-1
```
正如你在上面样例中看到的一样,我们设置了密码中至少含有一个大写字母、一个小写字母和一个特殊字符。你可以设置被最大允许的任意数量的大写字母小写字母和特殊字符。
正如你在上面样例中看到的一样,我们设置了密码中至少含有一个大写字母、一个小写字母和一个特殊字符。你可以设置被最大允许的任意数量的大写字母小写字母和特殊字符。
你还可以设置密码中被允许的最大或最小类型的数量。
你还可以设置密码中被允许的字符类的最大或最小数量。
下面的例子展示了设置一个新密码中被要求的字符类的最小数量:
@ -125,7 +126,7 @@ password requisite pam_pwquality.so retry=3 ocredit=-1
password requisite pam_pwquality.so retry=3 minclass=2
```
### 在基于RPM的系统中设置密码杂度
### 在基于 RPM 的系统中设置密码杂度
**在 RHEL 7.x / CentOS 7.x / Scientific Linux 7.x 中:**
@ -201,7 +202,7 @@ dcredit = -1
ocredit = -1
```
**RHEL 6.x / CentOS 6.x / Scientific Linux 6.x systems** 中,以root身份编辑 **/etc/pam.d/system-auth** 文件:
**RHEL 6.x / CentOS 6.x / Scientific Linux 6.x systems** 中,以 root 身份编辑 `/etc/pam.d/system-auth` 文件:
```
# nano /etc/pam.d/system-auth
@ -212,17 +213,17 @@ ocredit = -1
```
password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8 dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1
```
在以上每个设置中,密码必须要至少包含 8 个字符。另外,密码必须至少包含一个大写字母、一个小写字母、一个数字和一个其他字符。
### 在基于DEB的系统中设置密码有效期
如上设置中,密码必须要至少包含 `8` 个字符。另外,密码必须至少包含一个大写字母、一个小写字母、一个数字和一个其他字符。
### 在基于 DEB 的系统中设置密码有效期
现在,我们将要设置下面的策略。
1. 密码被使用的最长天数。
2. 密码更改允许的最小间隔天数。
3. 密码到期之前发出警告的天数。
设置这些策略,编辑:
```
@ -239,7 +240,7 @@ PASS_WARN_AGE 7
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_002-8.jpg)
正如你在上面样例中看到的一样,用户应该每 **100** 天修改一次密码,并且密码到期之前的 **7** 天开始出现警告信息。
正如你在上面样例中看到的一样,用户应该每 `100` 天修改一次密码,并且密码到期之前的 `7` 天开始出现警告信息。
请注意,这些设置将会在新创建的用户中有效。
@ -280,6 +281,7 @@ Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
正如你在上面看到的输出一样,该密码是无限期的。
修改已存在用户的密码有效期,
@ -288,22 +290,23 @@ Number of days of warning before password expires : 7
$ sudo chage -E 24/06/2018 -m 5 -M 90 -I 10 -W 10 sk
```
上面的命令将会设置用户 **sk** 的密码期限是 **24/06/2018**。并且修改密码的最小间隔时间为 5 天,最大间隔时间为 **90** 天。用户账号将会在 **10 天**后被自动锁定而且在到期之前的 **10 天**将会显示警告信息。
上面的命令将会设置用户 `sk` 的密码期限为 `24/06/2018`,并且修改密码的最小间隔时间为 `5` 天,最大间隔时间为 `90` 天。密码过期 `10` 天后用户账号将会被自动锁定,而且在到期之前的 `10` 天会显示警告信息。
### 在基于 RPM 的系统中设置密码效期
这点和基于 DEB 的系统是相同的。
### 在基于 DEB 的系统中禁止使用近期使用过的密码
你可以限制用户去设置一个已经使用过的密码。通俗的讲,就是说用户不能再次使用相同的密码。
为设置这一点,编辑 **/etc/pam.d/common-password** 文件:
为设置这一点,编辑 `/etc/pam.d/common-password` 文件:
```
$ sudo nano /etc/pam.d/common-password
```
找到下面这行并且在末尾添加文字 **remember=5**
找到下面这行并且在末尾添加文字 `remember=5`
```
password        [success=2 default=ignore]      pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
@ -313,27 +316,23 @@ password        [success=2 default=ignore]      pam_unix.so obscure use_a
### 在基于 RPM 的系统中禁止使用近期使用过的密码
这点对于 RHEL 6.x 和 RHEL 7.x 是相同的。他们的克隆系统类似于 CentOS, Scientific Linux
这点对于 RHEL 6.x、RHEL 7.x 及其衍生系统 CentOS、Scientific Linux 是相同的。
root身份编辑 **/etc/pam.d/system-auth** 文件,
root 身份编辑 `/etc/pam.d/system-auth` 文件,
```
# vi /etc/pam.d/system-auth
```
找到下面这行,并且在末尾添加文字 **remember=5**
找到下面这行,并且在末尾添加文字 `remember=5`
```
password     sufficient     pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
```
现在你知道了 Linux 中的密码策略是什么,以及如何在基于 DEB 和 RPM 的系统中设置不同的密码策略。
现在就这样,我很快会在这里发表另外一天有趣而且有用的文章。在此之前会与 OSTechNix 保持联系。如果您觉得本教程对你有帮助,请在您的社交,专业网络上分享并支持我们。
祝贺!
现在你了解了 Linux 中的密码策略,以及如何在基于 DEB 和 RPM 的系统中设置不同的密码策略。
就这样,我很快会在这里发表另外一篇有趣而且有用的文章。在此之前请保持关注。如果您觉得本教程对你有帮助,请在您的社交、专业网络上分享并支持我们。
--------------------------------------------------------------------------------
@ -342,7 +341,7 @@ via: https://www.ostechnix.com/how-to-set-password-policies-in-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,99 @@
5 款适合程序员的开源字体
======
> 编程字体有些在普通字体中没有的特点,这五种字体你可以看看。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn)
什么是最好的编程字体呢?首先,你需要考虑到字体被设计出来的初衷可能并不相同。当选择一款用于休闲阅读的字体时,读者希望该字体的字母能够顺滑地衔接,提供一种轻松愉悦的体验。一款标准字体的每个字符,类似于拼图的一块,它需要被仔细的设计,从而与整个字体的其他部分融合在一起。
然而,在编写代码时,通常来说对字体的要求更具功能性。这也是为什么大多数程序员在选择时更偏爱使用固定宽度的等宽字体。选择一款带有容易分辨的数字和标点的字体在美学上令人愉悦;但它是否拥有满足你需求的版权许可也是非常重要的。
某些功能使得字体更适合编程。首先要清楚是什么使得等宽字体看上去井然有序。这里,让我们对比一下字母 `w` 和字母 `i`。当选择一款字体时,重要的是要考虑字母本身及周围的空白。在纸质的书籍和报纸中,有效地利用空间是极为重要的,为瘦小的 `i` 分配较小的空间,为宽大的字母 `w` 分配较大的空间是有意义的。
然而在终端中,你没有这些限制。每个字符享有相等的空间将非常有用。这么做的首要好处是你可以随意扫过一段代码来“估测”代码的长度。第二个好处是能够轻松地对齐字符和标点,高亮在视觉上更加明显。另外打印纸张上的等宽字体比均衡字体更加容易通过 OCR 识别。
在本篇文章中,我们将探索 5 款卓越的开源字体,使用它们来编程和写代码都非常理想。
### 1、FiraCode:最佳整套编程字体
![FiraCode 示例][1]
*FiraCode, Andrew Lekashman*
在我们列表上的首款字体是 [FiraCode][3]一款真正符合甚至超越了其职责的编程字体。FiraCode 是 Fira 的扩展,而后者是由 Mozilla 委托设计的开源字体族。使得 FiraCode 与众不同的原因是它修改了在代码中常使用的一些符号的组合或连字,使得它看上去更具可读性。这款字体有几种不同的风格,特别是还包含 Retina 选项。你可以在它的 [GitHub][3] 主页中找到它被使用到多种编程语言中的例子。
![FiraCode compared to Fira Mono][2]
*FiraCode 与 Fira Mono 的对比,[Nikita Prokopov][3],源自 GitHub*
### 2、Inconsolata:优雅且由卓越设计者创造
![Inconsolata 示例][4]
*Inconsolata, Andrew Lekashman*
[Inconsolata][5] 是最为漂亮的等宽字体之一。从 2006 年开始它便一直是一款开源和可免费获取的字体。它的创造者 Raph Levien 在设计 Inconsolata 时秉承的一个基本原则是:等宽字体并不应该那么糟糕。使得 Inconsolata 如此优秀的两个原因是:对于 `0``o` 这两个字符它们有很大的不同,另外它还特别地设计了标点符号。
### 3、DejaVu Sans Mono:许多 Linux 发行版的标准配置,庞大的字形覆盖率
![DejaVu Sans Mono example][6]
*DejaVu Sans Mono, Andrew Lekashman*
受在 GNOME 中使用的带有版权和闭源的 Vera 字体的启发,[DejaVu Sans Mono][7] 是一个非常受欢迎的编程字体,几乎在每个现代的 Linux 发行版中都带有它。在 Book Variant 风格下 DejaVu 拥有惊人的 3310 个字形,相比于一般的字体,它们含有 100 个左右的字形。在工作中你将不会出现缺少某些字符的情况,它覆盖了 Unicode 的绝大部分,并且一直在活跃地增长着。
### 4、Source Code Pro:优雅、可读性强,由 Adobe 中一个小巧但天才的团队打造
![Source Code Pro example][8]
*Source Code Pro, Andrew Lekashman*
由 Paul Hunt 和 Teo Tuominen 设计,[Source Code Pro][9] 是 [Adobe 打造的][10]首批开源字体之一。Source Code Pro 值得注意的地方在于它极具可读性,且对于容易混淆的字符和标点有着非常好的区分度。Source Code Pro 也是一个字体族,有 7 种不同的风格:Extralight、Light、Regular、Medium、Semibold、Bold 和 Black,每种风格都还有斜体变体。
![Differentiating potentially confusable characters][11]
*潜在易混淆的字符之间的区别,[Paul D. Hunt][10] 源自 Adobe Typekit 博客。*
![Metacharacters with special meaning in computer languages][12]
*在计算机领域中有特别含义的特殊元字符, [Paul D. Hunt][10] 源自 Adobe Typekit 博客。*
### 5、Noto Mono:巨量的语言覆盖率,由 Google 中的一个大团队打造
![Noto Mono example][13]
*Noto Mono, Andrew Lekashman*
在我们列表上的最后一款字体是 [Noto Mono][14],这是 Google 打造的庞大 Noto 字体族中的等宽版本。尽管它并不是专为编程所设计,但它在 209 种语言(包括 emoji 表情!)中都可以使用,并且一直在维护和更新。该项目非常庞大,是 Google 宣称 “组织全世界信息” 的使命的延续。假如你想更多地了解它,可以查看这个绝妙的[关于这些字体的视频][15]。
### 选择合适的字体
无论你选择哪款字体,你都有可能每天花费数小时面对它,所以请确保它在审美和哲学层面上与你产生共鸣。选择正确的开源字体是确保你拥有最佳生产环境的一个重要部分。这些字体都是很棒的选择,每个都具有让它脱颖而出的强大特性。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/how-select-open-source-programming-font
作者:[Andrew Lekashman][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:https://opensource.com/sites/default/files/u128651/firacode.png (FiraCode example)
[2]:https://opensource.com/sites/default/files/u128651/firacode2.png (FiraCode compared to Fira Mono)
[3]:https://github.com/tonsky/FiraCode
[4]:https://opensource.com/sites/default/files/u128651/inconsolata.png (Inconsolata example)
[5]:http://www.levien.com/type/myfonts/inconsolata.html
[6]:https://opensource.com/sites/default/files/u128651/dejavu_sans_mono.png (DejaVu Sans Mono example)
[7]:https://dejavu-fonts.github.io/
[8]:https://opensource.com/sites/default/files/u128651/source_code_pro.png (Source Code Pro example)
[9]:https://github.com/adobe-fonts/source-code-pro
[10]:https://blog.typekit.com/2012/09/24/source-code-pro/
[11]:https://opensource.com/sites/default/files/u128651/source_code_pro2.png (Differentiating potentially confusable characters)
[12]:https://opensource.com/sites/default/files/u128651/source_code_pro3.png (Metacharacters with special meaning in computer languages)
[13]:https://opensource.com/sites/default/files/u128651/noto.png (Noto Mono example)
[14]:https://www.google.com/get/noto/#mono-mono
[15]:https://www.youtube.com/watch?v=AAzvk9HSi84

View File

@ -0,0 +1,173 @@
12 个最佳 GNOMEGTK主题
======
> 让我们来看一些漂亮的 GTK 主题,你不仅可以用在 Ubuntu 上,也可以用在其它使用 GNOME 的 Linux 发行版上。
对于我们这些使用 Ubuntu 的人来说,默认的桌面环境从 Unity 变成了 Gnome,这使得主题和定制变得前所未有的简单。Gnome 有个相当大的定制用户社区,其中不乏可供用户选择的漂亮的 GTK 主题。最近几个月,我陆续发现了一些喜欢的主题。我相信这些是你所能找到的最好的主题之一了。
### Ubuntu 和其它 Linux 发行版的最佳主题
这不是一个详细清单,可能不包括一些你已经使用和喜欢的主题,但希望你能至少找到一个能让你喜爱的没见过的主题。所有这里提及的主题都可以工作在 Gnome 3 上,不管是 Ubuntu 还是其它 Linux 发行版。有一些主题的屏幕截屏我没有,所以我从官方网站上找到了它们的图片。
在这里列出的主题没有特别的次序。
但是,在你看这些最好的 GNOME 主题前,你应该学习一下 [如何在 Ubuntu GNOME 中安装主题][1]。
#### 1、Arc-Ambiance
![][2]
Arc 和 Arc 变体主题已经出现了相当长的时间,普遍认为它们是最好的主题之一。在这个示例中,我选择了 Arc-Ambiance ,因为它是 Ubuntu 中的默认 Ambiance 主题。
我是 Arc 主题和默认 Ambiance 主题的粉丝,所以不用说,当我遇到一个融合了两者优点的主题,我不禁长吸了一口气。如果你是 Arc 主题的粉丝但不是这个特定主题的粉丝Gnome 的外观上当然还有适合你口味的大量的选择。
- [下载 Arc-Ambiance 主题][3]
#### 2、Adapta Colorpack
![][4]
Adapta 主题是我所见过的最喜欢的扁平主题之一。像 Arc 一样,Adapta 被很多 Linux 用户广泛采用。我选择这个配色包,是因为一次下载你就有数个可选择的配色方案。事实上,有 19 个配色方案可以选择。是的,你没看错,19 个呢!
所以,如果你是如今常见的扁平风格/<ruby>材料设计风格<rt>Material Design Language</rt></ruby>的粉丝,那么,在这个主题包中很可能至少有一个能满足你喜好的变体。
- [下载 Adapta Colorpack 主题][5]
#### 3、Numix Collection
![][6]
Numix! 让我想起了我们一起度过的那些年!对于那些在过去几年装点过桌面环境的人来说,你肯定在某个时间点上遇到过 Numix 主题或图标包。Numix 可能是我爱上的第一个 Linux 现代主题,现在我仍然爱它。虽然经过这些年,但它仍然魅力不失。
灰色色调贯穿主题,尤其是默认的粉红色高亮,带来了真正干净而完整的体验。你可能很难找到一个像 Numix 一样精美的主题包。而且在这个主题包中,你还有很多可供选择的余地,简直不要太棒了!
- [下载 Numix Collection 主题][7]
#### 4、Hooli
![][8]
Hooli 是一个已经出现了一段时间的主题但是我最近才偶然发现它。我是很多扁平主题的粉丝但是通常不太喜欢材料设计风格的主题。Hooli 像 Adapta 一样吸取了那些设计风格,但是我认为它和其它的那些有所不同。绿色高亮是我对这个主题最喜欢的部分之一,并且,它在不冲击整个主题方面做的很好。
- [下载 Hooli 主题][9]
#### 5、Arrongin/Telinkrin
![][10]
福利:二合一主题!它们是在主题领域中的相对新的竞争者。它们都吸取了 Ubuntu 接近完成的 “[communitheme][11]” 的思路并带它到了你的桌面。这两个主题我能找到的唯一真正的区别就是颜色。Arrongin 以 Ubuntu 家族的橙色颜色为中心,而 Telinkrin 则更偏向于 KDE Breeze 系的蓝色,我个人更喜欢蓝色,但是两者都是极好的选择!
- [下载 Arrongin/Telinkrin 主题][12]
#### 6、Gnome-osx
![][13]
我不得不承认,通常,当我看到一个主题名带有 “osx” 或者类似的字眼时,我就不会期望太多。大多数受 Apple 启发的主题看起来都比较雷同,我真找不到使用它们的理由。但有两个主题能够打破这种思维定式:这就是 Arc-osc 主题和 Gnome-osx 主题。
我喜欢 Gnome-osx 主题的原因是它在 Gnome 桌面上看起来确实很像 OSX。它在融入桌面环境而不至于变的太扁平方面做得很好。所以对于那些喜欢稍微扁平的主题的人来说如果你喜欢红黄绿按钮样式用于关闭、最小化和最大化这个主题非常适合你。
- [下载 Gnome-osx 主题][14]
#### 7、Ultimate Maia
![][15]
曾经有一段时间我使用 Manjaro Gnome。尽管那以后我又回到了 Ubuntu但是我希望我能打包带走的一个东西是 Manjaro 主题。如果你对 Manjaro 主题和我一样感受相同,那么你是幸运的,因为你可以带它到你想运行 Gnome 的任何 Linux 发行版!
丰富的绿色颜色Breeze 式的关闭、最小化、最大化按钮,以及全面雕琢过的主题使它成为一个不可抗拒的选择。如果你不喜欢绿色,它甚至为你提供一些其它颜色的变体。但是说实话……谁会不喜欢 Manjaro 的绿色呢?
- [下载 Ultimate Maia 主题][16]
#### 8、Vimix
![][17]
这是一个让我激动的主题。它是现代风格的,吸取了 macOS 的红黄绿按钮的风格,但并不是直接复制了它们,并且减少了多变的主题颜色,使之成为了大多数主题的独特替代品。它带来三个深色的变体和几个彩色配色,我们中大多数人都可以从中找到我们喜欢的。
- [下载 Vimix 主题][18]
#### 9、Ant
![][19]
像 Vimix 一样Ant 从 macOS 的按钮颜色中吸取了灵感,但不是直接复制了样式。在 Vimix 减少了颜色花哨的地方Ant 却增加了丰富的颜色,在我的 System 76 Galago Pro 屏幕看起来绚丽极了。三个主题变体的变化差异大相径庭,虽然它可能不见得符合每个人的口味,它无疑是最适合我的。
- [下载 Ant 主题][20]
#### 10、Flat Remix
![][21]
如果你还没有注意到,对于窗口的关闭、最小化、最大化按钮,我毫无抵抗力。Flat Remix 使用的颜色组合是我从未在其它地方看到过的:红色、蓝色和橙色。把这些添加到一个几乎像是混合了 Arc 和 Adapta 的主题上面,就有了 Flat Remix。
我本人喜欢它的深色主题,但是换成亮色的也是非常好的。因此,如果你喜欢稍稍透明、风格一致的深色主题,以及偶尔的一点点颜色,那 Flat Remix 就适合你。
- [下载 Flat Remix 主题][22]
#### 11、Paper
![][23]
[Paper][24] 已经出现一段时间。我记得第一次使用它是在 2014 年。可以说Paper 的图标包比其 GTK 主题更出名,但是这并不意味着它自身的主题不是一个极好的选择。即使我从一开始就倾心于 Paper 图标,我不能说当我第一次尝试它的时候我就是一个 Paper 主题忠实粉丝。
我觉得鲜亮的色彩和有趣的方式被放到一个主题里是一种“不成熟”的体验。现在几年后Paper 在我心目中已经长大,至少可以这样说,这个主题采取的轻快方式是我非常欣赏的一个。
- [下载 Paper 主题][25]
#### 12、Pop
![][26]
Pop 在这个列表上是一个较新的主题,是由 [System 76][27] 的人们创造的Pop GTK 主题是前面列出的 Adapta 主题的一个分支,并带有一个匹配的图标包,图标包是先前提到的 Paper 图标包的一个分支。
该主题是在 System 76 发布了 [他们自己的发行版][28] Pop!_OS 之后不久发布的。你可以阅读我的 [Pop!_OS 点评][29] 来了解更多信息。不用说,我认为 Pop 是一个极好的主题,带有华丽的装饰,并为 Gnome 桌面带来了一股清新之风。
- [下载 Pop 主题][30]
#### 结束语
很明显,我们有比文中所描述的主题更多的选择,但是这些大多是我在最近几月所使用的最完整、最精良的主题。如果你认为我们错过一些你确实喜欢的主题,或你确实不喜欢我在上面描述的主题,那么在下面的评论区让我们知道,并分享你喜欢的主题更好的原因!
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-gtk-themes/
作者:[Phillip Prado][a]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://itsfoss.com/install-themes-ubuntu/
[2]:https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/03/arcambaince.png
[3]:https://www.gnome-look.org/p/1193861/
[4]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/03/adapta.jpg
[5]:https://www.gnome-look.org/p/1190851/
[6]:https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/03/numix.png
[7]:https://www.gnome-look.org/p/1170667/
[8]:https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/03/hooli2.jpg
[9]:https://www.gnome-look.org/p/1102901/
[10]:https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/03/AT.jpg
[11]:https://itsfoss.com/ubuntu-community-theme/
[12]:https://www.gnome-look.org/p/1215199/
[13]:https://itsfoss.com/wp-content/uploads/2018/03/gosx-800x473.jpg
[14]:https://www.opendesktop.org/s/Gnome/p/1171688/
[15]:https://itsfoss.com/wp-content/uploads/2018/03/ultimatemaia-800x450.jpg
[16]:https://www.opendesktop.org/s/Gnome/p/1193879/
[17]:https://itsfoss.com/wp-content/uploads/2018/03/vimix-800x450.jpg
[18]:https://www.gnome-look.org/p/1013698/
[19]:https://itsfoss.com/wp-content/uploads/2018/03/ant-800x533.png
[20]:https://www.opendesktop.org/p/1099856/
[21]:https://itsfoss.com/wp-content/uploads/2018/03/flatremix-800x450.png
[22]:https://www.opendesktop.org/p/1214931/
[23]:https://itsfoss.com/wp-content/uploads/2018/04/paper-800x450.jpg
[24]:https://itsfoss.com/install-paper-theme-linux/
[25]:https://snwh.org/paper/download
[26]:https://itsfoss.com/wp-content/uploads/2018/04/pop-800x449.jpg
[27]:https://system76.com/
[28]:https://itsfoss.com/system76-popos-linux/
[29]:https://itsfoss.com/pop-os-linux-review/
[30]:https://github.com/pop-os/gtk-theme/blob/master/README.md

View File

@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10709-1.html)
[#]: subject: (Take to the virtual skies with FlightGear)
[#]: via: (https://opensource.com/article/19/1/flightgear)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
使用 FlightGear 翱翔天空
======
> 你梦想驾驶飞机么?试试开源飞行模拟器 FlightGear 吧。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flightgear_cockpit_plane_sky.jpg?itok=LRy0lpOS)
如果你曾梦想驾驶飞机,你会喜欢 [FlightGear][1] 的。它是一个功能齐全的[开源][2]飞行模拟器,可在 Linux、MacOS 和 Windows 中运行。
FlightGear 项目始于 1996 年,原因是对商业飞行模拟程序的不满,因为这些程序无法扩展。它的目标是创建一个复杂、强大、可扩展、开放的飞行模拟器框架,来用于学术界和飞行员培训,以及任何想要玩飞行模拟场景的人。
### 入门
FlightGear 的硬件要求适中,包括支持 OpenGL 以实现平滑帧速的加速 3D 显卡。它在我的配备 i5 处理器和仅 4GB 的内存的 Linux 笔记本上运行良好。它的文档包括[在线手册][3]、一个面向[用户][5]和[开发者][6]的 [wiki][4] 门户网站,还有大量的教程(例如它的默认飞机 [Cessna 172p][7])教你如何操作它。
在 [Fedora][8] 和 [Ubuntu][9] Linux 中很容易安装。Fedora 用户可以参考 [Fedora 安装页面][10]来运行 FlightGear。
在 Ubuntu 18.04 中,我需要安装一个仓库:
```
$ sudo add-apt-repository ppa:saiarcot895/flightgear
$ sudo apt-get update
$ sudo apt-get install flightgear
```
安装完成后,我从 GUI 启动它,但你也可以通过输入以下命令从终端启动应用:
```
$ fgfs
```
### 配置 FlightGear
应用窗口左侧的菜单提供配置选项。
![](https://opensource.com/sites/default/files/uploads/flightgear_menu.png)
“Summary” 返回应用的主页面。
“Aircraft” 显示你已安装的飞机,并提供了 FlightGear 的默认“机库”中安装多达 539 种其他飞机的选项。我安装了 Cessna 150L、Piper J-3 Cub 和 Bombardier CRJ-700。一些飞机包括 CRJ-700有教你如何驾驶商用喷气式飞机的教程。我发现这些教程内容翔实且准确。
![](https://opensource.com/sites/default/files/uploads/flightgear_aircraft.png)
要选择驾驶的飞机,请将其高亮显示,然后单击菜单底部的 “Fly!”。我选择了默认的 Cessna 172p 并发现驾驶舱的刻画非常准确。
![](https://opensource.com/sites/default/files/uploads/flightgear_cockpit-view.png)
默认机场是檀香山,但你可以在 “Location” 菜单中输入你最喜欢机场的 [ICAO 机场代码][11] 进行修改。我找到了一些小型的本地无塔机场,如 Olean 和 Dunkirk(纽约),以及包括 Buffalo、O'Hare 和 Raleigh 在内的大型机场,甚至可以选择特定的跑道。
在 “Environment” 下,你可以调整一天中的时间、季节和天气。模拟包括高级天气建模和从 [NOAA][12] 下载当前天气的能力。
“Settings” 提供在暂停模式中开始模拟的选项。同样在设置中,你可以选择多人模式,这样你就可以与 FlightGear 支持者的全球服务器网络上的其他玩家一起“飞行”。你必须有比较快速的互联网连接来支持此功能。
“Add-ons” 菜单允许你下载飞机和其他场景。
### 开始飞行
为了“起飞”我的 Cessna我使用了罗技操纵杆它用起来不错。你可以使用顶部 “File” 菜单中的选项校准操纵杆。
总的来说,我发现模拟非常准确,图形界面也很棒。你自己试下 FlightGear —— 我想你会发现它是一个非常有趣和完整的模拟软件。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/flightgear
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: http://home.flightgear.org/
[2]: http://wiki.flightgear.org/GNU_General_Public_License
[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html
[4]: http://wiki.flightgear.org/FlightGear_Wiki
[5]: http://wiki.flightgear.org/Portal:User
[6]: http://wiki.flightgear.org/Portal:Developer
[7]: http://wiki.flightgear.org/Cessna_172P
[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear
[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear
[10]: https://apps.fedoraproject.org/packages/FlightGear/
[11]: https://en.wikipedia.org/wiki/ICAO_airport_code
[12]: https://www.noaa.gov/

View File

@ -0,0 +1,348 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10716-1.html)
[#]: subject: (How To Understand And Identify File types in Linux)
[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
怎样理解和识别 Linux 中的文件类型
======
众所周知,在 Linux 中一切皆为文件,包括硬盘和显卡等。在 Linux 中导航时,大部分的文件都是普通文件和目录文件。但是也有其他的类型,对应于 5 类不同的作用。因此,理解 Linux 中的文件类型在许多方面都是非常重要的。
如果你不相信,那只需要浏览全文,就会发现它有多重要。如果你不能理解文件类型,就不能够毫无畏惧的做任意的修改。
如果你做了一些错误的修改,会毁坏你的文件系统,那么当你操作的时候请小心一点。在 Linux 系统中文件是非常重要的,因为所有的设备和守护进程都被存储为文件。
### 在 Linux 中有多少种可用类型?
据我所知,在 Linux 中总共有 7 种类型的文件,分为 3 大类。具体如下。
* 普通文件
* 目录文件
* 特殊文件(该类有 5 个文件类型)
* 链接文件
* 字符设备文件
* Socket 文件
* 命名管道文件
* 块文件
参考下面的表可以更好地理解 Linux 中的文件类型。
| 符号  | 意义                  |
| ------- | --------------------------------- |
| `-`   | 普通文件。长列表中以连字符 `-` 开头。       |
| `d`   | 目录文件。长列表中以英文字母 `d` 开头。     |
| `l`   | 链接文件。长列表中以英文字母 `l` 开头。      |
| `c`   | 字符设备文件。长列表中以英文字母 `c` 开头。    |
| `s`   | Socket 文件。长列表中以英文字母 `s` 开头。     |
| `p`   | 命名管道文件。长列表中以英文字母 `p` 开头。    |
| `b`   | 块文件。长列表中以英文字母 `b` 开头。       |
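如果你熟悉 C,也可以用 POSIX 的 `lstat()` 直接判断上表中的 7 种文件类型,下面是一个简单的示例程序:

```
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat st;
    if (argc < 2) {
        fprintf(stderr, "用法: %s <文件>\n", argv[0]);
        return 1;
    }
    if (lstat(argv[1], &st) != 0) {   /* lstat 不跟随符号链接,才能识别出链接文件本身 */
        perror("lstat");
        return 1;
    }
    if      (S_ISREG(st.st_mode))  puts("普通文件 (-)");
    else if (S_ISDIR(st.st_mode))  puts("目录文件 (d)");
    else if (S_ISLNK(st.st_mode))  puts("链接文件 (l)");
    else if (S_ISCHR(st.st_mode))  puts("字符设备文件 (c)");
    else if (S_ISSOCK(st.st_mode)) puts("Socket 文件 (s)");
    else if (S_ISFIFO(st.st_mode)) puts("命名管道文件 (p)");
    else if (S_ISBLK(st.st_mode))  puts("块文件 (b)");
    return 0;
}
```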
### 方法 1:手动识别 Linux 中的文件类型
如果你很了解 Linux那么你可以借助上表很容易地识别文件类型。
#### 在 Linux 中如何查看普通文件?
在 Linux 中使用下面的命令去查看普通文件。在 Linux 文件系统中普通文件可以出现在任何地方。
普通文件的颜色是“白色”。
```
# ls -la | grep ^-
-rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history
-rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout
-rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile
-rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc
-rw-r--r--. 1 root root 26 Dec 27 17:55 liks
-rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat
-rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip
-rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip
-rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt
-rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt
-rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt
```
#### 在 Linux 中如何查看目录文件?
在 Linux 中使用下面的命令去查看目录文件。在 Linux 文件系统中目录文件可以出现在任何地方。目录文件的颜色是“蓝色”。
```
# ls -la | grep ^d
drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/
drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/
drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/
drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/
```
#### 在 Linux 中如何查看链接文件?
在 Linux 中使用下面的命令去查看链接文件。在 Linux 文件系统中链接文件可以出现在任何地方。
链接文件有两种可用类型:软链接和硬链接。链接文件的颜色是“浅绿宝石色”。
```
# ls -la | grep ^l
lrwxrwxrwx. 1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link
lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder
```
#### 在 Linux 中如何查看字符设备文件?
在 Linux 中使用下面的命令查看字符设备文件。字符设备文件仅出现在特定位置。它出现在目录 `/dev` 下。字符设备文件的颜色是“黄色”。
```
# ls -la | grep ^c
# ls -la | grep ^c
crw-------. 1 root root 5, 1 Jan 28 14:05 console
crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency
crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash
crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0
crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full
crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse
```
#### 在 Linux 中如何查看块文件?
在 Linux 中使用下面的命令查看块文件。块文件仅出现在特定位置。它出现在目录 `/dev` 下。块文件的颜色是“黄色”。
```
# ls -la | grep ^b
brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0
brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1
brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2
brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3
brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4
```
#### 在 Linux 中如何查看 Socket 文件?
在 Linux 中使用下面的命令查看 Socket 文件。Socket 文件可以出现在任何地方。Socket 文件的颜色是“粉色”。(LCTT 译注:此处及下面关于 Socket 文件、命名管道文件可出现的位置,原文描述有误,已修改。)
```
# ls -la | grep ^s
srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket
```
#### 在 Linux 中如何查看命名管道文件?
在 Linux 中使用下面的命令查看命名管道文件。命名管道文件可以出现在任何地方。命名管道文件的颜色是“黄色”。
```
# ls -la | grep ^p
prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo|
prw-------. 1 root root 0 Jan 28 14:06 stats-mail|
```
### 方法 2:在 Linux 中如何使用 file 命令识别文件类型
在 Linux 中,`file` 命令允许我们去确定不同的文件类型。它按此顺序进行三组测试:文件系统测试、魔术字节测试和语言测试,以识别文件类型。
#### 在 Linux 中如何使用 file 命令查看普通文件
在你的终端简单地输入 `file` 命令跟着普通文件。`file` 命令将会读取提供的文件内容并且准确地显示文件的类型。
这就是我们看到对于每个普通文件有不同结果的原因。参考下面普通文件的不同结果。
```
# file 2daygeek_access.log
2daygeek_access.log: ASCII text, with very long lines
# file powertop.html
powertop.html: HTML document, ASCII text, with very long lines
# file 2g-test
2g-test: JSON data
# file powertop.txt
powertop.txt: HTML document, UTF-8 Unicode text, with very long lines
# file 2g-test-05-01-2019.tar.gz
2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560
```
#### 在 Linux 中如何使用 file 命令查看目录文件?
在你的终端简单地输入 `file` 命令跟着目录。参阅下面的结果。
```
# file Pictures/
Pictures/: directory
```
#### 在 Linux 中如何使用 file 命令查看链接文件?
在你的终端简单地输入 `file` 命令跟着链接文件。参阅下面的结果。
```
# file log
log: symbolic link to /run/systemd/journal/dev-log
```
#### 在 Linux 中如何使用 file 命令查看字符设备文件?
在你的终端简单地输入 `file` 命令跟着字符设备文件。参阅下面的结果。
```
# file vcsu
vcsu: character special (7/64)
```
#### 在 Linux 中如何使用 file 命令查看块文件?
在你的终端简单地输入 `file` 命令跟着块文件。参阅下面的结果。
```
# file sda1
sda1: block special (8/1)
```
#### 在 Linux 中如何使用 file 命令查看 Socket 文件?
在你的终端简单地输入 `file` 命令跟着 Socket 文件。参阅下面的结果。
```
# file system_bus_socket
system_bus_socket: socket
```
#### 在 Linux 中如何使用 file 命令查看命名管道文件?
在你的终端简单地输入 `file` 命令跟着命名管道文件。参阅下面的结果。
```
# file pipe-test
pipe-test: fifo (named pipe)
```
### 方法 3:在 Linux 中如何使用 stat 命令识别文件类型?
`stat` 命令允许我们去查看文件类型或文件系统状态。该实用程序比 `file` 命令提供更多的信息。它显示文件的大量信息例如大小、块大小、IO 块大小、Inode 值、链接、文件权限、UID、GID、文件的访问/更新和修改的时间等详细信息。
#### 在 Linux 中如何使用 stat 命令查看普通文件?
在你的终端简单地输入 `stat` 命令跟着普通文件。参阅下面的结果。
```
# stat 2daygeek_access.log
File: 2daygeek_access.log
Size: 14406929 Blocks: 28144 IO Block: 4096 regular file
Device: 10301h/66305d Inode: 1727555 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-03 14:05:26.430328867 +0530
Modify: 2019-01-03 14:05:26.460328868 +0530
Change: 2019-01-03 14:05:26.460328868 +0530
Birth: -
```
#### 在 Linux 中如何使用 stat 命令查看目录文件?
在你的终端简单地输入 `stat` 命令跟着目录文件。参阅下面的结果。
```
# stat Pictures/
File: Pictures/
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 10301h/66305d Inode: 1703982 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2018-11-24 03:22:11.090000828 +0530
Modify: 2019-01-05 18:27:01.546958817 +0530
Change: 2019-01-05 18:27:01.546958817 +0530
Birth: -
```
#### 在 Linux 中如何使用 stat 命令查看链接文件?
在你的终端简单地输入 `stat` 命令跟着链接文件。参阅下面的结果。
```
# stat /dev/log
File: /dev/log -> /run/systemd/journal/dev-log
Size: 28 Blocks: 0 IO Block: 4096 symbolic link
Device: 6h/6d Inode: 278 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.033333447 +0530
Modify: 2019-01-05 16:36:30.766666768 +0530
Change: 2019-01-05 16:36:30.766666768 +0530
Birth: -
```
#### 在 Linux 中如何使用 stat 命令查看字符设备文件?
在你的终端简单地输入 `stat` 命令跟着字符设备文件。参阅下面的结果。
```
# stat /dev/vcsu
File: /dev/vcsu
Size: 0 Blocks: 0 IO Block: 4096 character special file
Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40
Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2019-01-05 16:36:31.056666781 +0530
Modify: 2019-01-05 16:36:31.056666781 +0530
Change: 2019-01-05 16:36:31.056666781 +0530
Birth: -
```
#### 在 Linux 中如何使用 stat 命令查看块文件?
在你的终端简单地输入 `stat` 命令跟着块文件。参阅下面的结果。
```
# stat /dev/sda1
File: /dev/sda1
Size: 0 Blocks: 0 IO Block: 4096 block special file
Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1
Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk)
Access: 2019-01-05 16:36:31.596666806 +0530
Modify: 2019-01-05 16:36:31.596666806 +0530
Change: 2019-01-05 16:36:31.596666806 +0530
Birth: -
```
#### 在 Linux 中如何使用 stat 命令查看 Socket 文件?
在你的终端简单地输入 `stat` 命令跟着 Socket 文件。参阅下面的结果。
```
# stat /var/run/dbus/system_bus_socket
File: /var/run/dbus/system_bus_socket
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 15h/21d Inode: 576 Links: 1
Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.823333482 +0530
Modify: 2019-01-05 16:36:31.810000149 +0530
Change: 2019-01-05 16:36:31.810000149 +0530
Birth: -
```
#### 在 Linux 中如何使用 stat 命令查看命名管道文件?
在你的终端简单地输入 `stat` 命令跟着命名管道文件。参阅下面的结果。
```
# stat pipe-test
File: pipe-test
Size: 0 Blocks: 0 IO Block: 4096 fifo
Device: 10301h/66305d Inode: 1705583 Links: 1
Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-06 02:00:03.040394731 +0530
Modify: 2019-01-06 02:00:03.040394731 +0530
Change: 2019-01-06 02:00:03.040394731 +0530
Birth: -
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,153 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10727-1.html)
[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Linux 中获取硬盘分区或文件系统的 UUID 的七种方法
======
作为一个 Linux 系统管理员,你应该知道如何去查看分区的 UUID 或文件系统的 UUID。因为现在大多数的 Linux 系统都使用 UUID 挂载分区。你可以在 `/etc/fstab` 文件中可以验证。
有许多可用的实用程序可以查看 UUID。本文我们将会向你展示多种查看 UUID 的方法,并且你可以选择一种适合于你的方法。
### 何为 UUID
UUID 意即<ruby>通用唯一识别码<rt>Universally Unique Identifier</rt></ruby>,它可以帮助 Linux 系统识别一个磁盘分区而不是块设备文件。
自 2.15.1 版本起,libuuid 就是 util-linux-ng 包中的一部分,它被默认安装在 Linux 系统中。UUID 由该库生成,可以合理地认为在一个系统中 UUID 是唯一的,并且在所有系统中也是唯一的。
这是在计算机系统中用来标识信息的一个 128 位比特的数字。UUID 最初被用在<ruby>阿波罗网络计算机系统<rt>Apollo Network Computing System</rt></ruby>NCS之后 UUID 被<ruby>开放软件基金会<rt>Open Software Foundation</rt></ruby>OSF标准化成为<ruby>分布式计算环境<rt>Distributed Computing Environment</rt></ruby>DCE的一部分。
UUID 以 32 个十六进制的数字表示,被连字符分割为 5 组,总共 36 个字符,格式为 8-4-4-4-12(32 个十六进制字符和 4 个连字符)。
例如: `d92fa769-e00f-4fd7-b6ed-ecf7224af7fa`
我的 `/etc/fstab` 文件示例:
```
# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
#
UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1
UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
```
我们可以使用下面的 7 个命令来查看。
* `blkid` 命令:定位或打印块设备的属性。
* `lsblk` 命令:列出所有可用的或指定的块设备的信息。
* `hwinfo` 命令:硬件信息工具,是另外一个很好的实用工具,用于查询系统中已存在硬件。
* `udevadm` 命令udev 管理工具
* `tune2fs` 命令:调整 ext2/ext3/ext4 文件系统上的可调文件系统参数。
* `dumpe2fs` 命令:查询 ext2/ext3/ext4 文件系统的信息。 
* 使用 `by-uuid` 路径:该目录下包含有 UUID 和实际的块设备文件UUID 与实际的块设备文件链接在一起。
### Linux 中如何使用 blkid 命令查看磁盘分区或文件系统的 UUID
`blkid` 是定位或打印块设备属性的命令行实用工具。它利用 libblkid 库在 Linux 系统中获得到磁盘分区的 UUID。
```
# blkid
/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
/dev/sdc5: PARTUUID="8cc8f9e5-05"
```
### Linux 中如何使用 lsblk 命令查看磁盘分区或文件系统的 UUID
`lsblk` 列出所有有关可用或指定块设备的信息。`lsblk` 命令读取 sysfs 文件系统和 udev 数据库以收集信息。
如果 udev 数据库不可用或者编译的 lsblk 不支持 udev它会试图从块设备中读取卷标、UUID 和文件系统类型。这种情况下,必须以 root 身份运行。该命令默认会以类似于树的格式打印出所有的块设备RAM 盘除外)。
```
# lsblk -o name,mountpoint,size,uuid
NAME MOUNTPOINT SIZE UUID
sda 30G
└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
sdb 10G
sdc 10G
├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
├─sdc4 1K
└─sdc5 1G
sdd 10G
sde 10G
sr0 1024M
```
### Linux 中如何使用 by-uuid 路径查看磁盘分区或文件系统的 UUID
该目录包含了 UUID 和实际的块设备文件UUID 与实际的块设备文件链接在一起。
```
# ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
```
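这个目录本质上就是一组指向块设备的符号链接,因此也可以用一小段 C 示例程序通过 `readlink()` 读出同样的对应关系:

```
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>
#include <limits.h>

int main(void)
{
    const char *dir = "/dev/disk/by-uuid";
    DIR *d = opendir(dir);
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    char path[PATH_MAX], target[PATH_MAX];
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;              /* 跳过 . 和 .. */
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        ssize_t n = readlink(path, target, sizeof target - 1);
        if (n < 0) continue;
        target[n] = '\0';
        printf("%s -> %s\n", e->d_name, target);        /* UUID -> ../../sda1 等 */
    }
    closedir(d);
    return 0;
}
```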
### Linux 中如何使用 hwinfo 命令查看磁盘分区或文件系统的 UUID
[hwinfo][1] 意即硬件信息工具,是另外一种很好的实用工具。它被用来检测系统中已存在的硬件,并且以可读的格式显示各种硬件组件的细节信息。
```
# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63
/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
```
### Linux 中如何使用 udevadm 命令查看磁盘分区或文件系统的 UUID
`udevadm` 需要命令和命令特定的操作。它控制 systemd-udevd 的运行时行为,请求内核事件、管理事件队列并且提供简单的调试机制。
```
# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
### Linux 中如何使用 tune2fs 命令查看磁盘分区或文件系统的 UUID
`tune2fs` 允许系统管理员在 Linux 的 ext2、ext3、ext4 文件系统中调整各种可调的文件系统参数。这些选项的当前值可以使用选项 `-l` 显示。
```
# tune2fs -l /dev/sdc1 | grep UUID
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
### Linux 中如何使用 dumpe2fs 命令查看磁盘分区或文件系统的 UUID
`dumpe2fs` 打印出现在设备文件系统中的超级块和块组的信息。
```
# dumpe2fs /dev/sdc1 | grep UUID
dumpe2fs 1.43.5 (04-Aug-2017)
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10723-1.html)
[#]: subject: (Ubuntu 14.04 is Reaching the End of Life. Here are Your Options)
[#]: via: (https://itsfoss.com/ubuntu-14-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 14.04 即将结束支持,你该怎么办?
======
Ubuntu 14.04 即将于 2019 年 4 月 30 日结束支持。这意味着在此日期之后 Ubuntu 14.04 用户将无法获得安全和维护更新。
你甚至不会获得已安装应用的更新,并且如果不手动修改 `sources.list`,则无法使用 `apt` 命令或软件中心安装新应用。
Ubuntu 14.04 大约在五年前发布。这是一个 Ubuntu 长期支持版本(LTS)。
[检查 Ubuntu 版本][1]并查看你是否仍在使用 Ubuntu 14.04。如果是桌面或服务器版,你可能想知道在这种情况下你应该怎么做。
我来帮助你。告诉你在这种情况下你有些什么选择。
![][2]
### 升级到 Ubuntu 16.04 LTS最简单的方式
如果你可以连接互联网,你可以从 Ubuntu 14.04 升级到 Ubuntu 16.04 LTS。
Ubuntu 16.04 也是一个长期支持版本,它将支持到 2021 年 4 月。这意味着下次升级前你还有两年的时间。
我建议阅读这个[升级 Ubuntu 版本][3]的教程。它最初是为了将 Ubuntu 16.04 升级到 Ubuntu 18.04 而编写的,但这些步骤也适用于你的情况。
### 做好备份,全新安装 Ubuntu 18.04 LTS非常适合桌面用户
另一个选择是备份你的文档、音乐、图片、下载和其他任何你不想丢失数据的文件夹。
我说的备份指的是将这些文件夹复制到外部 USB 盘。换句话说,你应该有办法将数据复制回计算机,因为你将格式化你的系统。
我建议桌面用户使用此选项。Ubuntu 18.04 是目前的长期支持版本,它将至少在 2023 年 4 月之前得到支持。在你被迫进行下次升级之前,你将有四年的时间。
### 支付扩展安全维护费用并继续使用 Ubuntu 14.04
这适用于企业客户。Canonical 是 Ubuntu 的母公司,它提供 Ubuntu Advantage 计划,客户可以付费获得电话、电子邮件支持和其他收益。
Ubuntu Advantage 计划用户还有[扩展安全维护][4]ESM功能。即使给定版本的生命周期结束后此计划也会提供安全更新。
这需要付出金钱。服务器用户每个物理节点每年花费 225 美元。对于桌面用户,价格为每年 150 美元。你可以在[此处][5]了解 Ubuntu Advantage 计划的详细定价。
### 还在使用 Ubuntu 14.04 吗?
如果你还在使用 Ubuntu 14.04,那么你应该开始了解这些选择,因为你还有不到一个月的时间。
在任何情况下,你都不能在 2019 年 4 月 30 日之后使用 Ubuntu 14.04,因为你的系统由于缺乏安全更新而容易受到攻击。无法安装新应用将是一个额外的痛苦。
那么,你会做什么选择?升级到 Ubuntu 16.04 或 18.04 或付费 ESM
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-14-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/ubuntu-14-04-end-of-life-featured.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/upgrade-ubuntu-version/
[4]: https://www.ubuntu.com/esm
[5]: https://www.ubuntu.com/support/plans-and-pricing

View File

@ -1,41 +1,42 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10707-1.html)
[#]: subject: (7 resources for learning to use your Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/resources-raspberry-pi)
[#]: author: (Manuel Dewald https://opensource.com/users/ntlx)
学习使用树莓派的 7 个资源
======
缩短树莓派学习曲线的书籍、课程和网站。
> 一些缩短树莓派学习曲线的书籍、课程和网站。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_stars_list.png?itok=Iwa1oBOl)
[树莓派][1]是一款小型单板计算机,最初用于教学和学习编程和计算机科学。但如今它有更多用处。它是一种经济、低功耗计算机,人们将它用于各种各样的事情 - 从家庭娱乐到服务器应用,再到物联网 IoT 项目。
[树莓派][1]是一款小型单板计算机,最初用于教学和学习编程和计算机科学。但如今它有更多用处。它是一种经济的低功耗计算机,人们将它用于各种各样的事情 —— 从家庭娱乐到服务器应用再到物联网IoT 项目。
关于这个主题有很多资源,你可以做很多不同的项目,很难知道从哪里开始。以下是一些资源,可以帮助你开始使用树莓派。愉快地浏览,但不要停留在这里。到处看下,深入下去你就会发现树莓派的新世界。
关于这个主题有很多资源,你可以做很多不同的项目,很难知道从哪里开始。以下是一些资源,可以帮助你开始使用树莓派。看看这篇文章,但不要满足于此。到处看下,深入下去你就会发现树莓派的新世界。
### 书籍
关于树莓派有很多不同语言的书籍。这两本将帮助你开始了解,然后深入了解树莓派。
关于树莓派有很多不同语言的书籍。这两本将帮助你开始了解,然后深入了解树莓派。
#### 由 Simon Monk 编写的 Raspberry Pi Cookbook软件和硬件问题及解决方案
#### 由 Simon Monk 编写的《树莓派手边书:软件和硬件问题及解决方案》
Simon Monk 是一名软件工程师,并且多年来一直是手工业余爱好者。他最初被 Arduino 这块易于使用的开发板所吸引,后来出版了一本关于它的[书][2]。后来,他开始使用树莓派并写了 [Raspberry Pi Cookbook软件和硬件问题和解决方案][3]这本书。在本书中,你可以找到大量树莓派项目的最佳时间,以及你可能面对的各种挑战的解决方案。
Simon Monk 是一名软件工程师,并且多年来一直是业余手工爱好者。他最初被 Arduino 这块易于使用的开发板所吸引,后来出版了一本关于它的[书][2]。后来,他开始使用树莓派并写了《[树莓派手边书:软件和硬件问题和解决方案][3]》这本书。在本书中,你可以找到大量树莓派项目的最佳时间,以及你可能面对的各种挑战的解决方案。
####由 Simon Monk 编写的树莓派编程:从 Python 入门
#### 由 Simon Monk 编写的树莓派编程:从 Python 入门
Python 已经发展成为开始树莓派项目的首选编程语言,因为它易于学习和使用,即使你没有任何编程经验。此外,它的许多库可以帮助你专注于使你的项目变得特别,而不是实现协议反复地与传感器不断通信。Monk 在 Raspberry Pi Cookbook 中写了两章关于 Python 编程,但[树莓派编程:从 Python 入门][4]是一个更全面的快速入门。它向你介绍了 Python并向你展示了可以在树莓派上使用它创建的一些项目。
Python 已经发展成为开始一个树莓派项目的首选编程语言,因为它易于学习和使用,即使你没有任何编程经验。此外,它的许多库可以帮助你专注于使你的项目变得特别,而不是实现协议以与传感器反复通信。Monk 在《树莓派手边书》中写了两章关于 Python 编程,但《[树莓派编程:从 Python 入门][4]》是一个更全面的快速入门。它向你介绍了 Python并向你展示了可以在树莓派上使用它创建的一些项目。
### 在线课程
新的树莓派用户可以选择许多在线课程和教程,包括这个入门课程。
#### Raspberry Pi Class
Instructables 的免费 [Raspberry Pi Class][5] 在线课程提供了对树莓派的全面介绍。它从树莓派和 Linux 操作基础开始,然后进入 Python 编程和 GPIO 通信。如果你是这方面的新手,并希望快速入门,这使它成为一个很好的从上到下的树莓派指南。
#### 树莓派课程
Instructables 免费的在线[树莓派课程][5]提供了对树莓派的全面介绍。它从树莓派和 Linux 操作基础开始,然后进入 Python 编程和 GPIO 通信。如果你是这方面的新手,并希望快速入门,这使它成为一个很好的自上而下的树莓派指南。
### 网站
@ -43,7 +44,7 @@ Instructables 的免费 [Raspberry Pi Class][5] 在线课程提供了对树莓
#### RaspberryPi.org
官方的[树莓派][6]网站是最好的入门之一。许多关于特定项目的文章有链接到基础知识的链接,如将 Raspbian 安装到树莓派上。 (这是我倾向的,而不是在每个操作中重复说明。)你还可以找到学生技术[教育][8]方面的[示例项目][7]和课程。
官方的[树莓派][6]网站是最好的入门之一。有许多关于特定项目的文章会链接到这里的基础知识,如将 Raspbian 安装到树莓派上。(这是我倾向的做法,而不是在每篇文章中重复说明。)你还可以找到学生技术[教育][8]方面的[示例项目][7]和课程。
#### Opensource.com
@ -51,7 +52,7 @@ Instructables 的免费 [Raspberry Pi Class][5] 在线课程提供了对树莓
#### Instructables 和 Hackaday
你想造自己的复古街机么?或者在镜子上显示当天的天气信息、时间和第一事务?你是否想要为派对创建一个文字时钟或者相簿?你可以在 [Instructables][10] 和 [Hackaday][11] 这样的网站上找到如何使用树莓派完成所有这些(以及更多!)的说明。如果你不确定是否要买树莓派,请浏览这些网站,你会发现有很多理由可以购买。
你想造自己的复古街机么?或者在镜子上显示当天的天气信息、时间和第一个日程?你是否想要制作一个文字时钟,或者为派对搭一个拍照亭?你可以在 [Instructables][10] 和 [Hackaday][11] 这样的网站上找到如何使用树莓派完成所有这些(以及更多!)的说明。如果你不确定是否要买树莓派,请浏览这些网站,你会发现有很多理由值得购买。
你最喜欢的树莓派资源是什么?请在评论中分享!
@ -62,7 +63,7 @@ via: https://opensource.com/article/19/3/resources-raspberry-pi
作者:[Manuel Dewald][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,19 +1,20 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10711-1.html)
[#]: subject: (Do advanced math with Mathematica on the Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/do-math-raspberry-pi)
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
在树莓派上使用 Mathematica 进行高级数学运算
树莓派使用入门:在树莓派上使用 Mathematica 进行高级数学运算
======
Wolfram 将一个版本 Mathematica 捆绑到了 Raspbian 中。在我们关于树莓派入门系列的第 12 篇文章中学习如何使用它。
> Wolfram 在 Raspbian 中捆绑了一个版本的 Mathematica。在我们的树莓派入门系列的第 12 篇文章中将学习如何使用它。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_math_formulas.png?itok=B59mYTG3)
在 90 年代中期,我进入了大学数学专业,即使我以计算机科学学位毕业,第二专业数学我已经上了足够的课程,但还有两门小课没有上。当时,我被介绍了 [Wolfram][2] 中一个名为[Mathematica][1] 的应用,我们可以将黑板上的许多代数和微分方程输入计算机。我每月花几个小时在实验室学习 Wolfram 语言并在 Mathematica 上解决积分等问题。
在 90 年代中期,我进入了大学数学专业,虽然我是以计算机科学学位毕业的,但是我就差两门课程就拿到了双学位,包括数学专业的学位。当时,我接触到了 [Wolfram][2] 的一个名为 [Mathematica][1] 的应用,我们可以将黑板上的许多代数和微分方程输入计算机。我每月花几个小时在实验室学习 Wolfram 语言并在 Mathematica 上解决积分等问题。
对于大学生来说 Mathematica 是闭源而且昂贵的,因此在差不多 20 年后,看到 Wolfram 将一个版本的 Mathematica 与 Raspbian 和 Raspberry Pi 捆绑在一起是一个惊喜。如果你决定使用另一个基于 Debian 的发行版,你可以从这里[下载][3]。请注意,此版本仅供非商业用途免费使用。
@ -23,7 +24,7 @@ Wolfram 将一个版本 Mathematica 捆绑到了 Raspbian 中。在我们关于
要深入了解 Mathematica请查看 [Wolfram 语言文档][5]。如果你只是想解决一些基本的微积分问题,请[查看它的函数][6]部分。如果你想[绘制一些 2D 和 3D 图形][7],请阅读链接的教程。
或者,如果你想在做数学运算时坚持使用开源工具,请查看命令行工具 **expr**、**factor** 和 **bc**。(记住使用 [**man** 命令][8] 阅读使用帮助)如果想画图,[Gnuplot][9] 是个不错的选择。
或者,如果你想在做数学运算时坚持使用开源工具,请查看命令行工具 `expr`、`factor` 和 `bc`。(记住使用 [man 命令][8] 阅读使用帮助)如果想画图,[Gnuplot][9] 是个不错的选择。
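LCTT 译注:作为补充,这几个命令行工具的基本用法示例如下:

```
$ expr 3 + 4
7
$ factor 84
84: 2 2 3 7
$ echo "scale=4; 22/7" | bc
3.1428
```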
--------------------------------------------------------------------------------
@ -32,7 +33,7 @@ via: https://opensource.com/article/19/3/do-math-raspberry-pi
作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -46,4 +47,4 @@ via: https://opensource.com/article/19/3/do-math-raspberry-pi
[6]: https://reference.wolfram.com/language/guide/Calculus.html
[7]: https://reference.wolfram.com/language/howto/PlotAGraph.html
[8]: https://opensource.com/article/19/3/learn-linux-raspberry-pi
[9]: http://gnuplot.info/
[9]: http://gnuplot.info/

View File

@ -0,0 +1,53 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10731-1.html)
[#]: subject: (How to contribute to the Raspberry Pi community)
[#]: via: (https://opensource.com/article/19/3/contribute-raspberry-pi-community)
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva/users/kepler22b/users/ansilva)
树莓派使用入门:如何为树莓派社区做出贡献
======
> 在我们的入门系列的第 13 篇文章中,发现参与树莓派社区的方法。
![][1]
这个系列已经逐渐接近尾声,写作的过程充满了乐趣,我最希望的是它能帮助人们使用树莓派进行教育或娱乐。也许这些文章能说服你买下你的第一个树莓派,或者让你重新发现抽屉里吃灰的设备。如果其中任何一条成真,那么我认为这个系列就是成功的。
如果你确实买了一台,并且想宣传这块绿色的小板子有多么多才多艺,这里有几个方法可以帮你与树莓派社区建立联系:
* 帮助改进[官方文档][2]
  * 贡献代码给依赖的[项目][3]
  * 报告 Raspbian 的 [bug][4]
  * 报告不同 ARM 架构发行版的 bug
  * 看一眼英国国内的树莓派基金会的[代码俱乐部][5]或英国境外的[国际代码俱乐部][6],帮助孩子学习编码
  * 帮助[翻译][7]
  * 在 [Raspberry Jam][8] 当志愿者
这些只是你可以为树莓派社区做贡献的几种方式。最后但同样重要的是,你可以像我一样,向你最喜欢的开源网站 [Opensource.com][10] [投稿文章][9]。 :-)
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/contribute-raspberry-pi-community
作者:[Anderson Silva (Red Hat)][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ansilva/users/kepler22b/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_community.jpg?itok=dcKwb5et
[2]: https://www.raspberrypi.org/documentation/CONTRIBUTING.md
[3]: https://www.raspberrypi.org/github/
[4]: https://www.raspbian.org/RaspbianBugs
[5]: https://www.codeclub.org.uk/
[6]: https://www.codeclubworld.org/
[7]: https://www.raspberrypi.org/translate/
[8]: https://www.raspberrypi.org/jam/
[9]: https://opensource.com/participate
[10]: http://Opensource.com

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10734-1.html)
[#]: subject: (14 days of celebrating the Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/happy-pi-day)
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva)
树莓派使用入门:庆祝树莓派的 14 天
======
> 在我们关于树莓派入门系列的第 14 篇也是最后一篇文章中,回顾一下我们学到的所有东西。
![][1]
### 派节快乐!
每年的 3 月 14 日,我们这些极客都会庆祝派节。我们以 `MMDD` 的格式缩写日期3 月 14 日于是写成 03/14这在数字上提醒我们 3.14,也就是 [π][2] 的前三位数字。许多美国人没有意识到的是,世界上几乎没有其他国家使用这种[日期格式][3],因此派节几乎只适用于美国,尽管它在全球范围内得到了庆祝。
无论你身在何处,让我们一起庆祝树莓派,并通过回顾过去两周我们所涉及的主题来结束本系列:
* 第 1 天:[你应该选择哪种树莓派?][4]
* 第 2 天:[如何购买树莓派][5]
* 第 3 天:[如何启动一个新的树莓派][6]
* 第 4 天:[用树莓派学习 Linux][7]
* 第 5 天:[教孩子们用树莓派学编程的 5 种方法][8]
* 第 6 天:[可以使用树莓派学习的 3 种流行编程语言][9]
* 第 7 天:[如何更新树莓派][10]
* 第 8 天:[如何使用树莓派来娱乐][11]
* 第 9 天:[树莓派上的模拟器和原生 Linux 游戏][12]
* 第 10 天:[进入物理世界 —— 如何使用树莓派的 GPIO 针脚][13]
* 第 11 天:[通过树莓派和 kali Linux 学习计算机安全][14]
* 第 12 天:[在树莓派上使用 Mathematica 进行高级数学运算][15]
* 第 13 天:[如何为树莓派社区做出贡献][16]
![Pi Day illustration][18]
在本系列的结尾,我要感谢所有关注本系列的人,尤其是那些在过去 14 天里从中学到了东西的人!我还想鼓励大家,不断扩展自己对树莓派以及围绕它构建的所有开源(和闭源)技术的了解。
我还鼓励你了解其他文化、哲学、宗教和世界观。我们之所以为人,正是因为这种惊人的(有时也很有趣的)能力:不仅能适应外部环境,还能适应智力环境。
不管你做什么,保持学习!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/happy-pi-day
作者:[Anderson Silva (Red Hat)][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA
[2]: https://www.piday.org/million/
[3]: https://en.wikipedia.org/wiki/Date_format_by_country
[4]: https://linux.cn/article-10611-1.html
[5]: https://linux.cn/article-10615-1.html
[6]: https://linux.cn/article-10644-1.html
[7]: https://linux.cn/article-10645-1.html
[8]: https://linux.cn/article-10653-1.html
[9]: https://linux.cn/article-10661-1.html
[10]: https://linux.cn/article-10665-1.html
[11]: https://linux.cn/article-10669-1.html
[12]: https://linux.cn/article-10682-1.html
[13]: https://linux.cn/article-10687-1.html
[14]: https://linux.cn/article-10690-1.html
[15]: https://linux.cn/article-10711-1.html
[16]: https://linux.cn/article-10731-1.html
[17]: /file/426561
[18]: https://opensource.com/sites/default/files/uploads/raspberrypi_14_piday.jpg (Pi Day illustration)

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: (Moelf)
[#]: reviewer: (acyanbird, wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10714-1.html)
[#]: subject: (A Look Back at the History of Firefox)
[#]: via: (https://itsfoss.com/history-of-firefox)
[#]: author: (John Paul https://itsfoss.com/author/john/)
回顾 Firefox 历史
======
从很久之前开始,火狐浏览器就一直是开源社区的一根顶梁柱。这些年来它几乎是所有 Linux 发行版的默认浏览器并且曾是阻挡微软彻底争霸浏览器界的最后一块磐石。这款浏览器的起源可以一直回溯到互联网创生的时代。本周LCTT 译注:此文发布于 2019.3.14)是互联网成立 30 周年的纪念日,趁这个机会回顾一下我们熟悉并爱戴的火狐浏览器实在是再好不过了。
### 发源
在上世纪 90 年代早期,一个叫 [Marc Andreessen][1] 的年轻人正在伊利诺伊大学攻读计算机科学学士学位。在那里,他开始为[国家超算应用中心NCSA][2]工作。就在这段时间内,<ruby>[蒂姆·伯纳斯·李][3]<rt>Tim Berners-Lee</rt></ruby> 爵士发布了今天已经为我们所熟知的 Web 的早期标准。Marc 在那时候[了解][4]到了一款叫 [ViolaWWW][5] 的化石级浏览器。Marc 和 Eric Bina 看到了这种技术的潜力,他们开发了一个易于安装的基于 Unix 平台的浏览器,并取名 [NCSA Mosaic][6]。第一个 alpha 版本发布于 1993 年 6 月。到 9 月的时候,浏览器已经有 Windows 和 Macintosh 移植版本了。因为比当时其他任何浏览器软件都易于使用Mosaic 很快变得相当流行。
1994 年Marc 毕业并移居到加州。一个叫 Jim Clark 的人结识了他Clark 那时候通过卖电脑软硬件赚了点钱。Clark 也用过 Mosaic 浏览器并且看到了互联网的经济前景。Clark 创立了一家公司并且雇了 Marc 和 Eric 专做互联网软件。公司一开始叫 “Mosaic 通讯”,但是伊利诺伊大学并不喜欢他们用 [Mosaic 这个名字][7]。所以公司转而改名为 “<ruby>网景<rt>Netscape</rt></ruby>通讯”。
该公司的第一个项目是给任天堂 64 开发在线对战网络然而不怎么成功。他们第一个以公司名义发布的产品是一款叫做 Mosaic Netscape 0.9 的浏览器,很快这款浏览器被改名叫 Netscape Navigator。在内部浏览器的开发代号就是 mozilla意即“Mosaic 杀手”)。一位员工还创作了一幅[哥斯拉风格的][8]卡通画。他们当时想在竞争中彻底胜出。
![Early Firefox Mascot][9]
*早期 Mozilla 在 Netscape 的吉祥物*
他们取得了辉煌的胜利。那时Netscape 最大的优势是他们的浏览器在各种操作系统上体验极为一致。Netscape 将其宣传为给所有人平等的互联网体验。
随着越来越多的人使用 Netscape NavigatorNCSA Mosaic 的市场份额逐步下降。到了 1995 年Netscape 公开上市了。[上市首日][10],股价从开盘的 $28 直窜到 $78收盘于 $58。Netscape 那时所向披靡。
但好景不长。在 1994 年的夏天,微软发布了 Internet Explorer 1.0,这款浏览器基于 Spyglass Mosaic而后者又直接基于 NCSA Mosaic。[浏览器战争][11] 就此展开。
在接下来的几年里Netscape 和微软就浏览器霸主地位展开斗争。他们各自加入了很多新特性以取得优势。不幸的是IE 有和 Windows 操作系统捆绑的巨大优势。更甚于此,微软也有更多的程序员和资本可以调动。在 1997 年年底Netscape 公司开始遇到财务问题。
### 迈向开源
![Mozilla Firefox][12]
1998 年 1 月Netscape 开源了 Netscape Communicator 4.0 软件套装的代码。[旨在][13] “集合互联网成千上万的程序员的才智,把最好的功能加入 Netscape 的软件。这一策略旨在加速开发,并且让 Netscape 在未来能向个人和商业用户免费提供高质量的 Netscape Communicator 版本”。
这个项目由新创立的 Mozilla 机构管理。然而Netscape Communicator 4.0 的代码由于大小和复杂程度而很难开发。雪上加霜的是,浏览器的一些组件由于第三方的许可证问题而不能被开源。到头来,他们决定用新兴的 [Gecko][14] 渲染引擎重新开发浏览器。
到了 1998 年的 11 月Netscape 被美国在线AOL以[价值 42 亿美元的股权][15]收购。
从头来过是一项艰巨的任务。Mozilla Firefox最初名为 Phoenix直到 2002 年 6 月才面世它同样可以运行在多种操作系统上Linux、Mac OS、Windows 和 Solaris。
1999 年AOL 宣布他们将停止浏览器开发。随后创建了 Mozilla 基金会,用于管理 Mozilla 的商标和项目相关的融资事宜。最早 Mozilla 基金会从 AOL、IBM、Sun Microsystems 和红帽Red Hat收到了总计 200 万美金的捐赠。
到了 2003 年 3 月因为套件越来越臃肿Mozilla [宣布][16] 计划把该套件分割成单独的应用。这个单独的浏览器一开始起名 Phoenix。但是由于和 BIOS 制造企业凤凰科技的商标官司,浏览器改名 Firebird火鸟 —— 结果和火鸟数据库的开发者又起了冲突。浏览器只能再次被重命名,才有了现在家喻户晓的 Firefox火狐
那时,[Mozilla 说][17]“我们在过去一年里学到了很多关于起名的技巧不是因为我们愿意才学的。我们现在很小心地研究了名字确保不会再出什么幺蛾子了。我们已经开始向美国专利商标局注册我们的新商标”。
![Mozilla Firefox 1.0][18]
*Firefox 1.0 : [图片致谢][19]*
第一个正式的 Firefox 版本是 [0.8][20],发布于 2004 年 2 月 8 日。紧接着 11 月 9 日他们发布了 1.0 版本。2.0 和 3.0 版本分别在 2006 年 10 月和 2008 年 6 月问世。每个大版本更新都带来了很多新的特性和提升。从很多角度上讲Firefox 都领先 IE 不少,无论是功能还是技术先进性,即便如此 IE 还是有更多用户。
一切都在 Google 发布 Chrome 浏览器的时候改变了。在 Chrome 发布2008 年 9 月的前几个月Firefox 占有 30% 的[浏览器份额][21] 而 IE 有超过 60%。而在 StatCounter 的 [2019 年 1 月][22]报告里Firefox 有不到 10% 的份额,而 Chrome 有超过 70%。
> 趣味知识点
> 和大家以为的不一样,火狐的 logo 其实没有狐狸。那其实是个 <ruby>[小熊猫][23]<rt>Red Panda</rt></ruby>。在中文里,“火狐狸”是小熊猫的另一个名字。
### 展望未来
如上文所说的一样Firefox 正在经历很长一段以来的份额低谷。曾经有那么一段时间,有很多浏览器都基于 Firefox 开发,比如早期的 [Flock 浏览器][24]。而现在大多数浏览器都基于谷歌的技术了,比如 Opera 和 Vivaldi。甚至连微软都放弃开发自己的浏览器而转而[加入 Chromium 帮派][25]。
这也许看起来和 Netscape 当年的辉煌形成鲜明的对比。但让我们不要忘记 Firefox 已经取得的许多成就。一群来自世界各地的程序员,就这么开发出了这个星球上份额第二大的浏览器。他们在微软垄断如日中天的时候还占据着 30% 的份额,他们可以再次做到这一点。无论发生什么,他们都有我们。开源社区坚定地站在他们身后。
抗争垄断是我使用 Firefox [的众多原因之一][26]。随着 Mozilla 在改头换面的 [Firefox Quantum][27] 上赢回了一些份额,我相信它将一路向上攀爬。
你还想了解 Linux 和开源历史上的什么其他事件?欢迎在评论区告诉我们。
如果你觉得这篇文章不错,请在社交媒体上分享!比如 Hacker News 或者 [Reddit][28]。
--------------------------------------------------------------------------------
via: https://itsfoss.com/history-of-firefox
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[Moelf](https://github.com/Moelf)
校对:[acyanbird](https://github.com/acyanbird), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
[5]: http://viola.org/
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
[8]: http://www.davetitus.com/mozilla/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
[11]: https://en.wikipedia.org/wiki/Browser_wars
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
[14]: https://en.wikipedia.org/wiki/Gecko_(software)
[15]: http://news.cnet.com/2100-1023-218360.html
[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
[20]: https://en.wikipedia.org/wiki/Firefox_version_history
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
[23]: https://en.wikipedia.org/wiki/Red_panda
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
[26]: https://itsfoss.com/why-firefox/
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
[28]: http://reddit.com/r/linuxusersgroup
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1

View File

@ -1,24 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10732-1.html)
[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
[#]: via: (https://opensource.com/article/19/3/tool-find-home)
[#]: author: (Jeff Macharyas (Community Moderator) )
Sweet Home 3D一个帮助你决定梦想家庭的开源工具
Sweet Home 3D一个帮助你寻找梦想家庭的开源工具
======
室内设计应用可以轻松渲染你喜欢的房子,不管是真实的或是想象的。
> 室内设计应用可以轻松渲染你喜欢的房子,不管是真实的或是想象的。
![Houses in a row][1]
我最近接受了一份在弗吉尼亚州的新工作。由于我妻子一直在纽约工作,看着我们在纽约的房子直至出售,我有责任出去为我们和我们的猫找一所新房子。在我们搬进去之前她不会看到的房子!
我最近接受了一份在弗吉尼亚州的新工作。由于我妻子一直在纽约工作,看着我们在纽约的房子直至出售,我有责任出去为我们和我们的猫找一所新房子。在我们搬进去之前她看不到新房子。
我和一个房地产经纪人签约,并看了几间房子,拍了许多照片,写下了潦草的笔记。晚上,我会将照片上传到 Google Drive 文件夹中,我和我老婆会通过手机同时查看这些照片,同时我还记住房间是在右边还是左边,是否有风扇等。
我和一个房地产经纪人签约,并看了几间房子,拍了许多照片,写下了潦草的笔记。晚上,我会将照片上传到 Google Drive 文件夹中,我和我老婆会通过手机同时查看这些照片,同时我还记住房间是在右边还是左边,是否有风扇等。
由于这是一个相当繁琐且不太准确的方式来展示我的发现,我因此去寻找一个开源解决方案,以更好地展示我们未来的梦想之家将会是什么样的,而不会取决于我的模糊记忆和模糊的照片。
由于这是一个相当繁琐且不太准确的展示我的发现的方式,我因此去寻找一个开源解决方案,以更好地展示我们未来的梦想之家将会是什么样的,而不会取决于我的模糊记忆和模糊的照片。
[Sweet Home 3D][2] 完全满足了我的要求。Sweet Home 3D 可在 Sourceforge 上获取,并在 GNU 通用公共许可证下发布。它的[网站][3]信息非常丰富,我能够立即上手使用。Sweet Home 3D 由总部位于巴黎的 eTeks 的 Emmanuel Puybaret 开发。
@ -32,19 +32,19 @@ Sweet Home 3D一个帮助你决定梦想家庭的开源工具
现在我画完了“内墙”,我从网站下载了各种“家具”,其中包括实际的家具以及门、窗、架子等。每个项目都以 ZIP 文件的形式下载,因此我创建了一个包含所有解压后文件的文件夹。我可以自定义每件家具,而像门这样重复出现的物品,则可以方便地复制粘贴到指定的地方。
在我将所有墙壁和门窗都布置完后,我就使用应用的 3D 视图浏览房屋。根据照片和记忆,我对所有物体进行了调整直到接近房屋的样子。我可以花更多时间添加纹理,附属家具和物品,但这已经达到了我需要的程度。
在我将所有墙壁和门窗都布置完后,我就使用这个应用的 3D 视图浏览房屋。根据照片和记忆,我对所有物体进行了调整直到接近房屋的样子。我可以花更多时间添加纹理,附属家具和物品,但这已经达到了我需要的程度。
![Sweet Home 3D floorplan][7]
完成之后,我将计划导出为 OBJ 文件,它可在各种程序中打开,例如 [Blender][8] 和 Mac 上的 Preview,方便旋转房屋并从各个角度查看。视频功能最有用,我可以创建一个起点,然后在房子中绘制一条路径,并记录“旅程”。我将视频导出为 MOV 文件,并使用 QuickTime 在 Mac 上打开和查看。
完成之后,我将该项目导出为 OBJ 文件,它可以在各种程序中打开,例如 [Blender][8] 和 Mac 上的“预览”,方便旋转房屋并从各个角度查看。视频功能最有用,我可以创建一个起点,然后在房子中绘制一条路径,并记录“旅程”。我将视频导出为 MOV 文件,并使用 QuickTime 在 Mac 上打开和查看。
我的妻子能够(几乎)所有我看到的,我们甚至可以开始在搬家前布置家具。现在,我所要做的就是装上卡车搬到新家。
我的妻子(几乎)能看到所有我看到的东西,我们甚至可以在搬家前开始布置家具。现在,我所要做的就是把行李装上卡车搬到新家。
Sweet Home 3D 在我的新工作中也是有用的。我正在寻找一种方法来改善学院建筑的地图,并计划在 [Inkscape][9] 或 Illustrator 或其他软件中重新绘制它。但是,由于我有平面地图,我可以使用 Sweet Home 3D 创建平面图的 3D 版本并将其上传到我们的网站以便更方便地找到地方。
### 开源犯罪现场?
一件有趣的事:根据 [Sweet Home 3D 的博客][10],“法国法医办公室(科学警察)最近选择 Sweet Home 3D 作为设计计划表示路线和犯罪现场的工具。这是法国政府建议优先考虑免费开源解决方案的具体应用。“
一件有趣的事:根据 [Sweet Home 3D 的博客][10],“法国法医办公室(科学警察)最近选择 Sweet Home 3D 作为绘制用于呈现道路和犯罪现场的规划图的工具。这是法国政府建议优先考虑自由开源解决方案的具体应用。”
这又一次证明了,公民和政府可以利用开源解决方案来创建个人项目、侦破犯罪和构建世界。
@ -55,11 +55,11 @@ via: https://opensource.com/article/19/3/tool-find-home
作者:[Jeff Macharyas (Community Moderator)][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[a]: https://opensource.com/users/jeffmacharyas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://sourceforge.net/projects/sweethome3d/

View File

@ -0,0 +1,144 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10717-1.html)
[#]: subject: (Using Square Brackets in Bash: Part 1)
[#]: via: (https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
在 Bash 中使用[方括号] (一)
======
![square brackets][1]
> 这篇文章将要介绍方括号及其在命令行中的不同用法。
看完[花括号在命令行中的用法][3]之后,现在我们继续来看方括号(`[]`)在上下文中是如何发挥作用的。
### 通配
方括号最简单的用法就是通配。你可能在知道“<ruby>通配<rt>Globbing</rt></ruby>”这个概念之前就已经通过通配来匹配内容了,列出具有相同特征的多个文件就是一个很常见的场景,例如列出所有 JPEG 文件:
```
ls *.jpg
```
使用<ruby>通配符<rt>wildcard</rt></ruby>来得到符合某个模式的所有内容,这个过程就叫通配。
在上面的例子当中,星号(`*`就代表“0 个或多个字符”。除此以外,还有代表“有且仅有一个字符”的问号(`?`)。因此
```
ls d*k*
```
可以列出 `darkly``ducky`,而且 `dark``duck` 也是可以被列出的,因为 `*` 可以匹配 0 个字符。而
```
ls d*k?
```
则只能列出 `ducky`,不会列出 `darkly`、`dark` 和 `duck`
方括号也可以用于通配。为了便于演示,可以创建一个用于测试的目录,并在这个目录下创建文件:
```
touch file0{0..9}{0..9}
```
(如果你还不清楚上面这个命令的原理,可以看一下[另一篇介绍花括号的文章][3]
执行上面这个命令之后,就会创建 `file000`、`file001`、……、`file099` 这 100 个文件。
如果要列出这些文件当中第二位数字是 7 或 8 的文件,可以执行:
```
ls file0[78]?
```
如果要列出 `file022`、`file027`、`file028`、`file052`、`file057`、`file058`、`file092`、`file097`、`file098`,可以执行:
```
ls file0[259][278]
```
当然,不仅仅是 `ls`,很多其它的命令行工具都可以使用方括号来进行通配操作。但在删除文件、移动文件、复制文件的过程中使用通配,你需要有一点横向思维。
例如将 `file010``file029` 这 30 个文件复制成 `archive010``archive029` 这 30 个副本,不可以这样执行:
```
cp file0[12]? archive0[12]?
```
因为通配只能针对已有的文件,而 `archive` 开头的文件并不存在,不能进行通配。
而这条命令
```
cp file0[12]? archive0[1..2][0..9]
```
也同样不行,因为 `cp` 并不允许将多个文件复制到多个文件。在复制多个文件的情况下,只能将多个文件复制到一个指定的目录下:
```
mkdir archive
cp file0[12]? archive
```
这条命令是可以正常运行的,但它只会把这 30 个文件以同样的名称复制到 `archive/` 目录下,而这并不是我们想要的效果。
如果你阅读过我[关于花括号的文章][3],你大概会记得可以使用 `%` 来截掉字符串的末尾部分,而使用 `#` 则可以截掉字符串的开头部分。
例如:
```
myvar="Hello World"
echo Goodbye Cruel ${myvar#Hello}
```
就会输出 `Goodbye Cruel World`,因为 `#Hello``myvar` 变量中开头的 `Hello` 去掉了。
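LCTT 译注:原文只演示了 `#` 的用法,这里补充一个与之对应的 `%` 示例(截掉字符串的末尾部分):

```
myvar="Hello World"
echo ${myvar%World}Cruel World
```

这会输出 `Hello Cruel World`,因为 `%World``myvar` 变量中末尾的 `World` 去掉了。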
在通配的过程中,也可以使用这一个技巧。
```
for i in file0[12]?;\
do\
cp $i archive${i#file};\
done
```
上面的第一行命令告诉 Bash 需要对所有以 `file01``file02` 开头、且后面只跟一个任意字符的文件进行操作,第二行的 `do` 和第四行的 `done` 代表需要对这些文件都执行这一块中的命令。
第三行就是实际的复制操作了,这里使用了两次 `$i` 变量:第一次在 `cp` 命令中直接作为源文件的文件名使用,第二次则是截掉文件名开头的 `file` 部分,然后在开头补上一个 `archive`,也就是这样:
```
"archive" + "file019" - "file" = "archive019"
```
最终整个 `cp` 命令展开为:
```
cp file019 archive019
```
最后,顺带说明一下反斜杠 `\` 的作用是将一条长命令拆分成多行,这样可以方便阅读。
在下一节,我们会了解方括号的更多用法,敬请关注。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-gabriele-diwald-475007-unsplash.jpg?itok=cKmysLfd "square brackets"
[2]: https://www.linux.com/LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://linux.cn/article-10624-1.html

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10715-1.html)
[#]: subject: (Setting kernel command line arguments with Fedora 30)
[#]: via: (https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/)
[#]: author: (Laura Abbott https://fedoramagazine.org/makes-fedora-kernel/)
如何在 Fedora 30 中设置内核命令行参数
======
![][1]
在调试或试验内核时,向内核命令行添加选项是一项常见任务。即将发布的 Fedora 30 版本改为使用 Bootloader 规范([BLS][2])。根据你修改内核命令行选项的方式,你的工作流可能会更改。继续阅读获取更多信息。
要确定你的系统是使用 BLS 还是旧的规范,请查看文件:
```
/etc/default/grub
```
如果你看到:
```
GRUB_ENABLE_BLSCFG=true
```
那么你运行的就是 BLS你可能需要更改设置内核命令行参数的方式。
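LCTT 译注:采用 BLS 时,每个内核条目对应 `/boot/loader/entries/` 目录下的一个配置文件,可以用下面的补充命令查看:

```
$ ls /boot/loader/entries/
$ cat /boot/loader/entries/*.conf
```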
如果你只想修改单个内核条目(例如,暂时解决显示问题),可以使用 `grubby` 命令:
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --args="amdgpu.dc=0"
```
要删除内核参数,可以传递 `--remove-args` 参数给 `grubby`
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --remove-args="amdgpu.dc=0"
```
如果有应该添加到每个内核命令行的选项(例如,你希望禁用 `rdrand` 指令生成随机数),则可以运行 `grubby` 命令:
```
$ grubby --update-kernel=ALL --args="nordrand"
```
这将更新所有内核条目的命令行,并把该选项保存下来,作为以后新增条目的默认命令行选项。
如果你想要从所有内核中删除该选项,则可以再次使用 `--remove-args``--update-kernel=ALL`
```
$ grubby --update-kernel=ALL --remove-args="nordrand"
```
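LCTT 译注:修改之后,可以用下面的补充命令查看各个内核条目当前的参数,或在重启后确认内核实际使用的命令行:

```
$ sudo grubby --info=ALL
$ cat /proc/cmdline
```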
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/
作者:[Laura Abbott][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/makes-fedora-kernel/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-kernel-1-816x345.jpg
[2]: https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault

View File

@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10722-1.html)
[#]: subject: (3 cool text-based email clients)
[#]: via: (https://fedoramagazine.org/3-cool-text-based-email-clients/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
3 个很酷的基于文本的邮件客户端
======
![][1]
编写和接收电子邮件是每个人日常工作的重要组成部分,选择电子邮件客户端通常是一个重要决定。Fedora 系统提供了大量的电子邮件客户端可供选择,其中包括基于文本的电子邮件应用。
### Mutt
Mutt 可能是最受欢迎的基于文本的电子邮件客户端之一。它有人们期望的所有常用功能。Mutt 支持颜色代码、邮件会话、POP3 和 IMAP。但它最好的功能之一是它具有高度可配置性。实际上用户可以轻松地更改键绑定并创建宏以使工具适应特定的工作流程。
要尝试 Mutt请[使用 sudo][2] 和 `dnf` 安装它:
```
$ sudo dnf install mutt
```
为了帮助新手入门Mutt 有一个非常全面的充满了宏示例和配置技巧的 [wiki][3]。
### Alpine
Alpine 也是最受欢迎的基于文本的电子邮件客户端之一。它比 Mutt 更适合初学者你可以通过应用本身配置大部分功能而无需编辑配置文件。Alpine 的一个强大功能是能够对电子邮件进行评分。这对那些订阅含有大量邮件的邮件列表(如 Fedora 的[开发列表][4]的用户来说尤其有用。通过使用分数Alpine 可以根据用户的兴趣对电子邮件进行排序,首先显示高分的电子邮件。
也可以使用 `dnf` 从 Fedora 的仓库安装 Alpine。
```
$ sudo dnf install alpine
```
使用 Alpine 时,你可以按 `Ctrl+G` 组合键轻松访问文档。
### nmh
nmhnew Mail Handling遵循 UNIX 工具哲学。它提供了一组用于发送、接收、保存、检索和操作电子邮件的单一用途程序。这使你可以将 `nmh` 命令与其他程序交换,或利用 `nmh` 编写脚本来创建更多自定义工具。例如,你可以将 Mutt 与 `nmh` 一起使用。
使用 `dnf` 可以轻松安装 `nmh`
```
$ sudo dnf install nmh
```
要了解有关 `nmh` 和邮件处理的更多信息,你可以阅读这本 GPL 许可的[书][5]。
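LCTT 译注:为了体现 `nmh` “单一用途程序”的风格,这里补充几个典型命令的示例(其中文件夹名 `archive` 是假设的):

```
$ inc                 # 收取新邮件到收件箱
$ scan                # 列出当前文件夹中的邮件概要
$ show 3              # 显示第 3 封邮件
$ refile 3 +archive   # 把第 3 封邮件归档到 archive 文件夹
```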
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/3-cool-text-based-email-clients/
作者:[Clément Verna][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2018/07/email-clients-816x345.png
[2]: https://fedoramagazine.org/howto-use-sudo/
[3]: https://gitlab.com/muttmua/mutt/wikis/home
[4]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
[5]: https://rand-mh.sourceforge.io/book/

View File

@ -1,27 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10705-1.html)
[#]: subject: (How to create a filesystem on a Linux partition or logical volume)
[#]: via: (https://opensource.com/article/19/4/create-filesystem-linux-partition)
[#]: author: (Kedar Vijay Kulkarni (Red Hat) https://opensource.com/users/kkulkarn)
[#]: 作者: (Kedar Vijay Kulkarni (Red Hat) https://opensource.com/users/kkulkarn)
How to create a filesystem on a Linux partition or logical volume
如何在 Linux 分区或逻辑卷中创建文件系统
======
Learn to create a filesystem and mount it persistently or
non-persistently in your system.
> 学习在你的系统中创建一个文件系统,并且长期或者临时地挂载它。
![Filing papers and documents][1]
In computing, a filesystem controls how data is stored and retrieved and helps organize the files on the storage media. Without a filesystem, information in storage would be one large block of data, and you couldn't tell where one piece of information stopped and the next began. A filesystem helps manage all of this by providing names to files that store data and maintaining a table of files and directories—along with their start/end location, total size, etc.—on disks within the filesystem.
在计算技术中,文件系统控制数据如何存储和检索,并帮助组织存储介质上的文件。如果没有文件系统,信息将只是一个大数据块,你无法知道一条信息在哪里结束、下一条信息从哪里开始。文件系统通过为存储数据的文件命名,并在磁盘上维护文件和目录表(记录它们的起止位置、总大小等信息),来帮助管理所有这些内容。
In Linux, when you create a hard disk partition or a logical volume, the next step is usually to create a filesystem by formatting the partition or logical volume. This how-to assumes you know how to create a partition or a logical volume, and you just want to format it to contain a filesystem and mount it.
在 Linux 中,当你创建一个硬盘分区或者逻辑卷之后,接下来通常是通过格式化这个分区或逻辑卷来创建文件系统。本操作指南假设你已经知道如何创建分区或逻辑卷,你只是希望将它格式化以包含文件系统,然后挂载它。
### Create a filesystem
### 创建文件系统
Imagine you just added a new disk to your system and created a partition named **/dev/sda1** on it.
假设你为你的系统添加了一块新的硬盘并且在它上面创建了一个叫 `/dev/sda1` 的分区。
1. To verify that the Linux kernel can see the partition, you can **cat** out **/proc/partitions** like this:
1、为了验证 Linux 内核已经发现这个分区,你可以 `cat``/proc/partitions` 的内容,就像这样:
```
[root@localhost ~]# cat /proc/partitions
@ -40,7 +41,7 @@ major minor #blocks name
```
2. Decide what kind of filesystem you want to create, such as ext4, XFS, or anything else. Here are a few options:
2、决定你想要去创建的文件系统种类比如 ext4、XFS或者其他的一些。这里是一些可选项
```
[root@localhost ~]# mkfs.<tab><tab>
@ -48,7 +49,7 @@ mkfs.btrfs mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs
```
3. For the purposes of this exercise, choose ext4. (I like ext4 because it allows you to shrink the filesystem if you need to, a thing that isn't as straightforward with XFS.) Here's how it can be done (the output may differ based on device name/sizes):
3、为了这次练习的目的选择 ext4。我喜欢 ext4因为如果你需要的话它允许你缩小文件系统而这对 XFS 来说并不那么简单。)这里是完成它的方法(输出可能会因设备名称或者大小而不同):
```
[root@localhost ~]# mkfs.ext4 /dev/sda1
@ -74,18 +75,16 @@ Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
```
4. In the previous step, if you want to create a different kind of filesystem, use a different **mkfs** command variation.
4、在上一步中如果你想去创建不同的文件系统请使用不同变种的 `mkfs` 命令。
### 挂载文件系统
当你创建好文件系统后,你可以在你的操作系统中挂载它。
### Mount a filesystem
After you create your filesystem, you can mount it in your operating system.
1. First, identify the UUID of your new filesystem. Issue the **blkid** command to list all known block storage devices and look for **sda1** in the output:
1、首先识别出新文件系统的 UUID 编码。使用 `blkid` 命令列出所有可识别的块存储设备并且在输出信息中查找 `sda1`
```
[root@localhost ~]# blkid
[root@localhost ~]# blkid
/dev/vda1: UUID="716e713d-4e91-4186-81fd-c6cfa1b0974d" TYPE="xfs"
/dev/sr1: UUID="2019-03-08-16-17-02-00" LABEL="config-2" TYPE="iso9660"
/dev/sda1: UUID="wow9N8-dX2d-ETN4-zK09-Gr1k-qCVF-eCerbF" TYPE="LVM2_member"
@ -94,11 +93,10 @@ After you create your filesystem, you can mount it in your operating system.
[root@localhost ~]#
```
2. Run the following command to mount the **/dev/sd1** device :
2、运行下面的命令挂载 `/dev/sda1` 设备:
```
[root@localhost ~]# mkdir /mnt/mount_point_for_dev_sda1
[root@localhost ~]# mkdir /mnt/mount_point_for_dev_sda1
[root@localhost ~]# ls /mnt/
mount_point_for_dev_sda1
[root@localhost ~]# mount -t ext4 /dev/sda1 /mnt/mount_point_for_dev_sda1/
@ -113,19 +111,16 @@ tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
[root@localhost ~]#
```
The **df -h** command shows which filesystem is mounted on which mount point. Look for **/dev/sd1**. The mount command above used the device name **/dev/sda1**. Substitute it with the UUID identified in the **blkid** command. Also, note that a new directory was created to mount **/dev/sda1** under **/mnt**.
命令 `df -h` 显示了每个文件系统被挂载的挂载点。查找 `/dev/sda1`。上面的挂载命令使用的是设备名称 `/dev/sda1`,你也可以用 `blkid` 命令中识别出的 UUID 来替换它。注意,我们在 `/mnt` 下新建了一个目录,用来挂载 `/dev/sda1`
3. A problem with using the mount command directly on the command line (as in the previous step) is that the mount won't persist across reboots. To mount the filesystem persistently, edit the **/etc/fstab** file to include your mount information:
3、直接在命令行下使用挂载命令就像上一步一样有一个问题那就是挂载在设备重启后不会保留。要永久性地挂载文件系统需要编辑 `/etc/fstab` 文件,加入你的挂载信息:
```
UUID=ac96b366-0cdd-4e4c-9493-bb93531be644 /mnt/mount_point_for_dev_sda1/ ext4 defaults 0 0
```
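LCTT 译注:作为补充说明,`/etc/fstab` 每行的六个字段依次为:设备(或其 UUID、挂载点、文件系统类型、挂载选项、dump 备份标志和开机 fsck 检查顺序,对应关系如下:

```
# <设备>                                    <挂载点>                       <类型> <选项>   <dump> <fsck>
UUID=ac96b366-0cdd-4e4c-9493-bb93531be644 /mnt/mount_point_for_dev_sda1/ ext4  defaults 0      0
```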
4. After you edit **/etc/fstab** , you can **umount /mnt/mount_point_for_dev_sda1** and run the command **mount -a** to mount everything listed in **/etc/fstab**. If everything went right, you can still list **df -h** and see your filesystem mounted:
4、编辑完 `/etc/fstab` 文件后,你可以执行 `umount /mnt/mount_point_for_dev_sda1`,并且运行 `mount -a` 命令去挂载 `/etc/fstab` 文件中列出的所有设备。如果一切顺利的话,你可以使用 `df -h` 列出并且查看你挂载的文件系统:
```
root@localhost ~]# umount /mnt/mount_point_for_dev_sda1/
@ -141,25 +136,23 @@ tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
```
5. You can also check whether the filesystem was mounted:
5、你也可以检测文件系统是否被挂载
```
[root@localhost ~]# mount | grep ^/dev/sd
/dev/sda1 on /mnt/mount_point_for_dev_sda1 type ext4 (rw,relatime,seclabel,stripe=8191,data=ordered)
```
Now you know how to create a filesystem and mount it persistently or non-persistently within your system.
现在你已经知道如何创建文件系统,并且将它长期或者临时地挂载在你的系统中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/create-filesystem-linux-partition
作者:[Kedar Vijay Kulkarni (Red Hat)][a]
作者:[Kedar Vijay Kulkarni][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: (zhs852)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10702-1.html)
[#]: subject: (Happy 14th anniversary Git: What do you love about Git?)
[#]: via: (https://opensource.com/article/19/4/what-do-you-love-about-git)
[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/seth)
Git 十四周年:你喜欢 Git 的哪一点?
======
> Git 为软件开发所带来的巨大影响是其它工具难以企及的。
![arrows cycle symbol for failing faster][1]
在 Linus Torvalds 开发 Git 后的十四年间,它为软件开发所带来的影响是其它工具难以企及的:在 [StackOverflow 的 2018 年开发者调查][2] 中87% 的受访者都表示他们使用 Git 来作为他们项目的版本控制工具。显然,没有其它工具能撼动 Git 版本控制管理工具SCM之王的地位。
为了在 4 月 7 日 Git 的十四周年这一天向 Git 表示敬意,我问了一些爱好者他们最喜欢 Git 的哪一点。以下便是他们所告诉我的:
*(为了便于理解,部分回答已经进行了小幅修改)*
“我无法忍受 Git。无论是难以理解的术语还是它的分布式。它需要借助 Gerrit 这样的附加工具,才能达到 Subversion 或 Perforce 这类集中式仓库管理器周边工具一半的可用性。不过既然这次的问题是‘你喜欢 Git 的什么我还是希望回答Git 使得对复杂源代码树的操作成为可能,并且它的回滚功能使得修复一个需要尝试 20 次才能改对的改动变得简单起来。” — _[Sweet Tea Dorminy][3]_
“我喜欢 Git 是因为它不会强制我执行特定的工作流程,并且开发团队可以自由地以适合自己的方式来进行团队开发,无论是拉取请求、以电子邮件递送差异文件或是给予所有人推送的权限。” — _[Andy Price][4]_
“我从 2006、2007 年的样子就开始使用 Git 了。我喜欢 Git 是因为它既适用于那种从未离开过我电脑的小项目也适用于大型的团队合作的分布式项目。Git 使你可以从(几乎)所有的错误提交中回滚到先前版本,这个功能显著地减轻了我在软件版本管理方面的压力。” — _[Jonathan S. Katz][5]_
“我很欣赏 Git 那种 [底层命令和高层命令][6] 的理念。用户可以使用 Git 有效率地分享任何形式的信息,而不需要知道其内部工作原理。而好奇的人则可以深入其表层命令之下,发现那个为许多代码分享平台提供支持的、按内容寻址的文件系统。” — _[Matthew Broberg][7]_
“我喜欢 Git 是因为浏览、开发、构建、测试和向我的 Git 仓库中提交代码的工作几乎都能用它来完成。它经常会调动起我参与开源项目的积极性。” — _[Daniel Oh][8]_
“Git 是我用过的首个版本控制工具。数年间,它从一个可怕的工具变成了一个友好的工具。我喜欢它使你在修改代码的时候更加自信,因为它能保证你主分支的安全(除非你强制提交了一段考虑不周的代码到主分支)。你可以检出先前的提交来撤销更改,这一点也是很棒的。” — _[Kedar Vijay Kulkarni][9]_
“我之所以喜欢 Git 是因为它淘汰了一些其它的版本控制工具。没人使用 VSS而 Subversion 可以和 git-svn 一起使用如果必要BitKeeper 则和 Monotone 一样只为老一辈所知。当然,我们还有 Mercurial不过在我几年之前用它来为 Firefox 添加 AArch64 支持时,我觉得它仍是那种还未完善的工具。部分人可能还会提到 Perforce、SourceSafe 或是其它企业级的解决方案,我只想说它们在开源世界里并不流行。” — _[Marcin Juszkiewicz][10]_
“我喜欢内置的 SHA1 化对象模型commit → tree → blob的简易性。我也喜欢它的高层命令。同时我也将它作为对 JBoss/Red Hat Fuse 的补丁机制。并且这种机制确实有效。我还喜欢 Git 的 [三棵树的故事][11]。” — _[Grzegorz Grzybek][12]_
“我喜欢 [自动生成的 Git 说明页][13](这个页面虽然听起来是有关 Git 的,但是事实上这是一个没有实际意义的页面,不过它总是会给人一种像是真的 Git 页面的感觉…),这使得我对 Git 的敬意油然而生。” — _[Marko Myllynen][14]_
“Git 改变了我作为开发者的生活。它使得 SCM 问题从世界上消失得无影无踪。”— _[Joel Takvorian][15]_
* * *
看完这十个爱好者的回答之后,就轮到你了:你最欣赏 Git 的什么?请在评论区分享你的看法!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/what-do-you-love-about-git
作者:[Jen Wike Huger][a]
选题:[lujun9972][b]
译者:[zhs852](https://github.com/zhs852)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://insights.stackoverflow.com/survey/2018/#work-_-version-control
[3]: https://github.com/sweettea
[4]: https://www.linkedin.com/in/andrew-price-8771796/
[5]: https://opensource.com/users/jkatz05
[6]: https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain
[7]: https://opensource.com/users/mbbroberg
[8]: https://opensource.com/users/daniel-oh
[9]: https://opensource.com/users/kkulkarn
[10]: https://github.com/hrw
[11]: https://speakerdeck.com/schacon/a-tale-of-three-trees
[12]: https://github.com/grgrzybek
[13]: https://git-man-page-generator.lokaltog.net/
[14]: https://github.com/myllynen
[15]: https://github.com/jotak

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10725-1.html)
[#]: subject: (Bash vs. Python: Which language should you use?)
[#]: via: (https://opensource.com/article/19/4/bash-vs-python)
[#]: author: (Archit Modi Red Hat https://opensource.com/users/architmodi/users/greg-p/users/oz123)
Bash vs Python你该使用哪个
======
> 两种编程语言都各有优缺点,它们在某些任务方面互有胜负。
![][1]
[Bash][2] 和 [Python][3] 是大多数自动化工程师最喜欢的编程语言。它们都各有优缺点,有时很难选择应该使用哪一个。所以,最诚实的答案是:这取决于任务、范围、背景和任务的复杂性。
让我们来比较一下这两种语言,以便更好地理解它们各自的优点。
### Bash
* 是一种 Linux/Unix shell 命令语言
* 非常适合编写使用命令行界面CLI实用程序的 shell 脚本,利用一个命令的输出传递给另一个命令(管道),以及执行简单的任务(可以多达 100 行代码)
* 可以按原样使用命令行命令和实用程序
* 启动时间比 Python 快,但执行时性能差
* Windows 中默认没有安装。你的脚本可能不会兼容多个操作系统,但是 Bash 是大多数 Linux/Unix 系统的默认 shell
* 与其它 shell如 csh、zsh、fish*并不*完全兼容。
* 通过管道(`|`)传递 CLI 实用程序如 `sed`、`awk`、`grep` 等会降低其性能
* 缺少很多函数、对象、数据结构和多线程支持,这限制了它在复杂脚本或编程中的使用
* 缺少良好的调试工具和实用程序
### Python
* 是一种面向对象的编程语言OOP因此它比 Bash 更加通用
* 几乎可以用于任何任务
* 适用于大多数操作系统,默认情况下它在大多数 Unix/Linux 系统中都有安装
* 与伪代码非常相似
* 具有简单、清晰、易于学习和阅读的语法
* 拥有大量的库、文档以及一个活跃的社区
* 提供比 Bash 更友好的错误处理特性
* 有比 Bash 更好的调试工具和实用程序,这使得它在开发涉及到很多行代码的复杂软件应用程序时是一种很棒的语言
* 应用程序(或脚本)可能包含许多第三方依赖项,这些依赖项必须在执行前安装
* 对于简单任务,需要编写比 Bash 更多的代码
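LCTT 译注:为了直观感受两者的差异,这里补充一个小对比:统计当前目录下 `.txt` 文件的数量(其中 Python 部分以命令行内联方式调用,仅作示意):

```
# Bash利用通配符和管道一行完成
ls *.txt 2>/dev/null | wc -l

# Python需要导入库、代码稍多但逻辑更显式
python3 -c "import glob; print(len(glob.glob('*.txt')))"
```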
我希望这些列表能够让你更好地了解该使用哪种语言以及在何时使用它。
你在日常工作中更多会使用哪种语言Bash 还是 Python请在评论中分享。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/bash-vs-python
作者:[Archit Modi (Red Hat)][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/architmodi/users/greg-p/users/oz123
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_happy_sad_developer_programming.png?itok=72nkfSQ_
[2]: /article/18/7/admin-guide-bash
[3]: /article/17/11/5-approaches-learning-python

View File

@ -0,0 +1,231 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10736-1.html)
[#]: subject: (How To Check The List Of Open Ports In Linux?)
[#]: via: (https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何检查 Linux 中的开放端口列表?
======
最近,我们就同一主题写了两篇文章。这些文章可以帮助你检查远程服务器上指定的端口是否打开。
如果你想 [检查远程 Linux 系统上的端口是否打开][1] 请点击链接浏览。如果你想 [检查多个远程 Linux 系统上的端口是否打开][2] 请点击链接浏览。如果你想 [检查多个远程 Linux 系统上的多个端口状态][2] 请点击链接浏览。
但是本文帮助你检查本地系统上的开放端口列表。
在 Linux 中很少有用于此目的的实用程序。然而,我提供了四个最重要的 Linux 命令来检查这一点。
你可以使用以下四个命令来完成这个工作。这些命令是非常出名的并被 Linux 管理员广泛使用。
* `netstat`netstat“网络统计”是一个显示网络连接传入和传出相关信息的命令行工具例如路由表、伪装连接、多播成员和网络端口。
* `nmap`Nmap“网络映射器”是一个用于网络探索与安全审计的开源工具旨在快速扫描大型网络。
* `ss`ss 用于转储套接字统计信息,用法与 netstat 类似,相比其他工具它可以展示更多的 TCP 状态信息。
* `lsof`lsof 是 List Open Files 的缩写,用于输出被某个进程打开的所有文件。
### 如何使用 Linux 命令 netstat 检查系统中的开放端口列表
`netstat` 是 Network Statistics 的缩写,是一个显示网络连接(进和出)相关信息命令行工具,例如:路由表、伪装连接、多播成员和网络端口。
它可以列出所有的 tcp、udp 连接和所有的 unix 套接字连接。
它用于发现网络问题,确定网络连接数量。
```
# netstat -tplugn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 2038/master
tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN 1396/snmpd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1398/httpd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1388/sshd
tcp6 0 0 :::25 :::* LISTEN 2038/master
tcp6 0 0 :::22 :::* LISTEN 1388/sshd
udp 0 0 0.0.0.0:39136 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:56130 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:40105 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:11584 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:30105 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:50656 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:1632 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:28265 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:40764 0.0.0.0:* 1396/snmpd
udp 0 0 10.90.56.21:123 0.0.0.0:* 895/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 895/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 895/ntpd
udp 0 0 0.0.0.0:53390 0.0.0.0:* 1396/snmpd
udp 0 0 0.0.0.0:161 0.0.0.0:* 1396/snmpd
udp6 0 0 :::123 :::* 895/ntpd
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 224.0.0.1
eth0 1 224.0.0.1
lo 1 ff02::1
lo 1 ff01::1
eth0 1 ff02::1
eth0 1 ff01::1
```
你也可以使用下面的命令检查特定的端口。
```
# netstat -tplugn | grep :22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1388/sshd
tcp6 0 0 :::22 :::* LISTEN 1388/sshd
```
### 如何使用 Linux 命令 ss 检查系统中的开放端口列表?
`ss` 被用于转储套接字统计信息。它也可以显示类似 `netstat` 的信息。相比其他工具它可以展示更多的 TCP 状态信息。
```
# ss -lntu
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 *:39136 *:*
udp UNCONN 0 0 *:56130 *:*
udp UNCONN 0 0 *:40105 *:*
udp UNCONN 0 0 *:11584 *:*
udp UNCONN 0 0 *:30105 *:*
udp UNCONN 0 0 *:50656 *:*
udp UNCONN 0 0 *:1632 *:*
udp UNCONN 0 0 *:28265 *:*
udp UNCONN 0 0 *:40764 *:*
udp UNCONN 0 0 10.90.56.21:123 *:*
udp UNCONN 0 0 127.0.0.1:123 *:*
udp UNCONN 0 0 *:123 *:*
udp UNCONN 0 0 *:53390 *:*
udp UNCONN 0 0 *:161 *:*
udp UNCONN 0 0 :::123 :::*
tcp LISTEN 0 100 *:25 *:*
tcp LISTEN 0 128 127.0.0.1:199 *:*
tcp LISTEN 0 128 *:80 *:*
tcp LISTEN 0 128 *:22 *:*
tcp LISTEN 0 100 :::25 :::*
tcp LISTEN 0 128 :::22 :::*
```
你也可以使用下面的命令检查特定的端口。
```
# ss -lntu | grep ':25'
tcp LISTEN 0 100 *:25 *:*
tcp LISTEN 0 100 :::25 :::*
```
### 如何使用 Linux 命令 nmap 检查系统中的开放端口列表?
Nmap (“Network Mapper”) 是一个网络探索与安全审计的开源工具。它旨在快速扫描大型网络,当然它也可以工作在独立主机上。
Nmap 使用裸 IP 数据包以一种新颖的方式来确定网络上有哪些主机可用,这些主机提供什么服务(应用程序名称和版本),它们运行什么操作系统(版本),使用什么类型的数据包过滤器/防火墙,以及许多其他特征。
虽然 Nmap 通常用于安全审计,但许多系统和网络管理员发现它对于日常工作也非常有用,例如网络资产清点、管理服务升级计划以及监控主机或服务正常运行时间。
```
# nmap -sTU -O localhost
Starting Nmap 6.40 ( http://nmap.org ) at 2019-03-20 09:57 CDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00028s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 1994 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
80/tcp open http
199/tcp open smux
123/udp open ntp
161/udp open snmp
Device type: general purpose
Running: Linux 3.X
OS CPE: cpe:/o:linux:linux_kernel:3
OS details: Linux 3.7 - 3.9
Network Distance: 0 hops
OS detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 1.93 seconds
```
你也可以使用下面的命令检查特定的端口。
```
# nmap -sTU -O localhost | grep 123
123/udp open ntp
```
### 如何使用 Linux 命令 lsof 检查系统中的开放端口列表?
它向你显示系统上打开的文件列表以及打开它们的进程。还会向你显示与文件相关的其他信息。
```
# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ntpd 895 ntp 16u IPv4 18481 0t0 UDP *:ntp
ntpd 895 ntp 17u IPv6 18482 0t0 UDP *:ntp
ntpd 895 ntp 18u IPv4 18487 0t0 UDP localhost:ntp
ntpd 895 ntp 20u IPv4 23020 0t0 UDP CentOS7.2daygeek.com:ntp
sshd 1388 root 3u IPv4 20065 0t0 TCP *:ssh (LISTEN)
sshd 1388 root 4u IPv6 20067 0t0 TCP *:ssh (LISTEN)
snmpd 1396 root 6u IPv4 22739 0t0 UDP *:snmp
snmpd 1396 root 7u IPv4 22729 0t0 UDP *:40105
snmpd 1396 root 8u IPv4 22730 0t0 UDP *:50656
snmpd 1396 root 9u IPv4 22731 0t0 UDP *:pammratc
snmpd 1396 root 10u IPv4 22732 0t0 UDP *:30105
snmpd 1396 root 11u IPv4 22733 0t0 UDP *:40764
snmpd 1396 root 12u IPv4 22734 0t0 UDP *:53390
snmpd 1396 root 13u IPv4 22735 0t0 UDP *:28265
snmpd 1396 root 14u IPv4 22736 0t0 UDP *:11584
snmpd 1396 root 15u IPv4 22737 0t0 UDP *:39136
snmpd 1396 root 16u IPv4 22738 0t0 UDP *:56130
snmpd 1396 root 17u IPv4 22740 0t0 TCP localhost:smux (LISTEN)
httpd 1398 root 3u IPv4 20337 0t0 TCP *:http (LISTEN)
master 2038 root 13u IPv4 21638 0t0 TCP *:smtp (LISTEN)
master 2038 root 14u IPv6 21639 0t0 TCP *:smtp (LISTEN)
sshd 9052 root 3u IPv4 1419955 0t0 TCP CentOS7.2daygeek.com:ssh->Ubuntu18-04.2daygeek.com:11408 (ESTABLISHED)
httpd 13371 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13372 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13373 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13374 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13375 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
```
你也可以使用下面的命令检查特定的端口。
```
# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
httpd 1398 root 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13371 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13372 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13373 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13374 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13375 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10675-1.html
[2]: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/

View File

@ -0,0 +1,443 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10740-1.html)
[#]: subject: (The Fargate Illusion)
[#]: via: (https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html)
[#]: author: (Lee Briggs https://leebriggs.co.uk/)
破除对 AWS Fargate 的幻觉
======
我在 $work 建立了一个基于 Kubernetes 的平台已经快一年了,而且有点像 Kubernetes 的布道者了。真的,我认为这项技术太棒了。然而我并没有对它的运营和维护的困难程度抱过什么幻想。今年早些时候我读了[这样][1]的一篇文章,并对其中某些观点深以为然。如果我在一家规模较小的、有 10 到 15 个工程师的公司,假如有人建议管理和维护一批 Kubernetes 集群,那我会感到害怕的,因为它的运维开销太高了!
尽管我现在对 Kubernetes 的一切都很感兴趣,但我仍然对“<ruby>无服务器<rt>Serverless</rt></ruby>”计算会消灭运维工程师的说法抱有好奇。这种好奇心主要来源于我希望在未来仍然能有一份有报酬的工作 —— 如果我们前景光明的未来不需要运维工程师,那我得明白到底是怎么回事。我已经在 Lambda 和 Google Cloud Functions 上做了一些实验,结果让我印象十分深刻,但我仍然坚信无服务器解决方案只是解决了一部分问题。
我关注 [AWS Fargate][2] 已经有一段时间了,它正是 $work 的开发人员所推崇的“无服务器计算” —— 主要是因为用 Fargate你无需管理底层节点就可以运行你的 Docker 容器。我想看看它到底意味着什么,所以我开始尝试从头开始在 Fargate 上运行一个应用,看看是否可以成功。这里我对成功的定义是一个与“生产级”应用程序相近的东西,我想应该包含以下内容:
* 一个在 Fargate 上运行的容器
* 配置信息以环境变量的形式下推
* “秘密信息” 不能是明文的
* 位于负载均衡器之后
* 有效的 SSL 证书的 TLS 通道
我以“基础设施即代码”的角度来开始整个任务,不遵循默认的 AWS 控制台向导,而是使用 terraform 来定义基础架构。这很可能让整个事情变得复杂,但我想确保任何部署都是可重现的,任何想要遵循此步骤的人都可发现我的结论。
上述所有标准通常都可以通过基于 Kubernetes 的平台使用一些外部的附加组件和插件来实现,所以我确实是以一种比较的心态来处理整个任务的,因为我要将它与我的常用工作流程进行比较。我的主要目标是看看 Fargate 有多容易,特别是与 Kubernetes 相比时。结果让我感到非常惊讶。
### AWS 是有开销的
我有一个干净的 AWS 账户,并决定从零到部署一个 webapp。与 AWS 中的其它基础设施一样,我必须首先使基本的基础设施正常工作起来,因此我需要先定义一个 VPC。
遵循最佳实践,我将这个 VPC 划分为跨可用区AZ的子网一个公共子网和一个私有子网。这时我想到只要这种设置基础设施的需求存在我就能找到一份这种工作。AWS 是“免”运维的这一概念一直让我感到愤怒。开发者社区中的许多人想当然地认为,设置和定义一个设计良好的 AWS 账户和基础设施不需要付出多少工作和努力。而这还是在开始谈论多帐户架构*之前* —— 目前我仍然使用单一帐户,就已经必须定义好基础设施和传统的网络设备了。
这里也值得记住,我已经做了很多次,所以我*很清楚*该做什么。我可以在我的帐户中使用默认的 VPC 以及预先提供的子网,我觉得很多刚开始的人也可以使用它。这大概花了我半个小时才运行起来,但我不禁想到,即使我想运行 lambda 函数,我仍然需要某种连接和网络。定义 NAT 网关和在 VPC 中路由根本不会让你觉得很“Serverless”但要往下进行这就是必须要做的。
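LCTT 译注:为了让这一步更具体,这里补充一个此类 VPC 定义的最小 terraform 草图(基于社区的 terraform-aws-modules/vpc 模块,名称、区域和网段均为假设,并非原文作者的实际配置):

```
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "fargate-demo"    # 假设的名称
  cidr = "10.0.0.0/16"

  # 跨两个可用区,各划分一个公共子网和一个私有子网
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]

  # 让私有子网经由 NAT 网关访问外网
  enable_nat_gateway = true
}
```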
### 运行简单的容器
在我启动运行了基本的基础设施之后,现在我想让我的 Docker 容器运行起来。我开始翻阅 Fargate 文档并浏览 [入门][3] 文档,这些就马上就展现在了我面前:
![][4]
等等,只是让我的容器运行就至少要有**三个**步骤?这完全不像我所想的,不过还是让我们开始吧。
#### 任务定义
“<ruby>任务定义<rt>Task Definition</rt></ruby>”用来定义要运行的实际容器。我在这里遇到的问题是,任务定义这件事非常复杂。这里有很多选项都很简单,比如指定 Docker 镜像和内存限制,但我还必须定义一个网络模型,以及我并不熟悉的其它各种选项。真需要这样吗?如果我完全没有 AWS 方面的知识就进入到这个过程里,那么在这个阶段我会感觉非常的不知所措。可以在 AWS 页面上找到这些 [参数][5] 的完整列表,这个列表很长。我知道我的容器需要一些环境变量,它需要暴露一个端口。所以我首先在一个神奇的 [terraform 模块][6] 的帮助下定义了这一点,这真的让这件事更容易了。如果没有这个模块,我就得手写 JSON 来定义我的容器定义。
首先我定义了一些环境变量:
```
container_environment_variables = [
{
name = "USER"
value = "${var.user}"
},
{
name = "PASSWORD"
value = "${var.password}"
}
]
```
然后我使用上面提及的模块组成了任务定义:
```
module "container_definition_app" {
source = "cloudposse/ecs-container-definition/aws"
version = "v0.7.0"
container_name = "${var.name}"
container_image = "${var.image}"
container_cpu = "${var.ecs_task_cpu}"
container_memory = "${var.ecs_task_memory}"
container_memory_reservation = "${var.container_memory_reservation}"
port_mappings = [
{
containerPort = "${var.app_port}"
hostPort = "${var.app_port}"
protocol = "tcp"
},
]
environment = "${local.container_environment_variables}"
}
```
```
在这一点上我非常困惑,我需要在这里定义很多配置才能运行,而这时什么都没有开始呢,但这是必要的 —— 运行 Docker 容器肯定需要了解一些容器配置的知识。我 [之前写过][7] 关于 Kubernetes 和配置管理的问题的文章,在这里似乎遇到了同样的问题。
接下来,我在上面的模块中定义了任务定义(幸好这个模块帮我抽象出了所需的 JSON —— 如果我不得不手写 JSON我可能已经放弃了
当我定义模块参数时,我突然意识到我漏掉了一些东西。我需要一个 IAM 角色!好吧,让我来定义:
```
resource "aws_iam_role" "ecs_task_execution" {
name = "${var.name}-ecs_task_execution"
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
count = "${length(var.policies_arn)}"
role = "${aws_iam_role.ecs_task_execution.id}"
policy_arn = "${element(var.policies_arn, count.index)}"
}
```
这样做是有意义的,我需要在 Kubernetes 中定义一个 RBAC 策略,所以在这里我还未完全错失或获得任何东西。这时我开始觉得从 Kubernetes 的角度来看,这种感觉非常熟悉。
```
resource "aws_ecs_task_definition" "app" {
family = "${var.name}"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = "${var.ecs_task_cpu}"
memory = "${var.ecs_task_memory}"
execution_role_arn = "${aws_iam_role.ecs_task_execution.arn}"
task_role_arn = "${aws_iam_role.ecs_task_execution.arn}"
container_definitions = "${module.container_definition_app.json}"
}
```
现在,为了让它运行起来,我已经写了很多行代码,我阅读了很多 ECS 文档,而我所做的只是定义了一个任务定义,还没有让这个东西运行起来。在这一点上,我真的很困惑,相比基于 Kubernetes 的平台,这些东西到底增值了什么,但我继续前行。
#### 服务
“服务”一定程度上定义了容器如何暴露给外部,以及它应该拥有多少副本。我的第一个想法是“啊!这就像一个 Kubernetes 服务!”我开始着手映射端口等。这是我写的第一版 terraform
```
resource "aws_ecs_service" "app" {
name = "${var.name}"
cluster = "${module.ecs.this_ecs_cluster_id}"
task_definition = "${data.aws_ecs_task_definition.app.family}:${max(aws_ecs_task_definition.app.revision, data.aws_ecs_task_definition.app.revision)}"
desired_count = "${var.ecs_service_desired_count}"
launch_type = "FARGATE"
deployment_maximum_percent = "${var.ecs_service_deployment_maximum_percent}"
deployment_minimum_healthy_percent = "${var.ecs_service_deployment_minimum_healthy_percent}"
network_configuration {
subnets = ["${values(local.private_subnets)}"]
security_groups = ["${module.app.this_security_group_id}"]
}
}
```
当我必须定义允许访问所需端口的安全组时,我再次感到沮丧,当我这样做了并将其插入到网络配置中后,我就像被扇了一巴掌。
我还需要定义自己的负载均衡器?
什么?
当然不是吗?
##### 负载均衡器从未远离
老实说,我很满意,我甚至不确定为什么。我已经习惯了 Kubernetes 的服务和 Ingress 对象,我一心认为用 Kubernetes 将我的应用程序放到网上是多么容易。当然,我们在 $work 花了几个月的时间建立一个平台,以便更轻松。我是 [external-dns][8] 和 [cert-manager][9] 的重度用户,它们可以自动填充 Ingress 对象上的 DNS 条目并自动化 TLS 证书,我非常了解进行这些设置所需的工作,但老实说,我认为在 Fargate 上做这件事会更容易。我认识到 Fargate 并没有声称自己是“如何运行应用程序”这件事的全部和最终目的,它只是抽象出节点管理,但我一直被告知这比 Kubernetes *更加容易*。我真的很惊讶。定义负载均衡器(即使你不想使用 Ingress 和 Ingress Controller也是向 Kubernetes 部署服务的重要组成部分,我不得不在这里再次做同样的事情。这一切都让人觉得如此熟悉。
我现在意识到我需要:
* 一个负载均衡器
* 一个 TLS 证书
* 一个 DNS 名字
所以我着手做了这些。我使用了一些流行的 terraform 模块,并做了这个:
```
# Define a wildcard cert for my app
module "acm" {
source = "terraform-aws-modules/acm/aws"
version = "v1.1.0"
create_certificate = true
domain_name = "${var.route53_zone_name}"
zone_id = "${data.aws_route53_zone.this.id}"
subject_alternative_names = [
"*.${var.route53_zone_name}",
]
tags = "${local.tags}"
}
# Define my loadbalancer
resource "aws_lb" "main" {
name = "${var.name}"
subnets = [ "${values(local.public_subnets)}" ]
security_groups = ["${module.alb_https_sg.this_security_group_id}", "${module.alb_http_sg.this_security_group_id}"]
}
resource "aws_lb_target_group" "main" {
name = "${var.name}"
port = "${var.app_port}"
protocol = "HTTP"
vpc_id = "${local.vpc_id}"
target_type = "ip"
depends_on = [ "aws_lb.main" ]
}
# Redirect all traffic from the ALB to the target group
resource "aws_lb_listener" "main" {
load_balancer_arn = "${aws_lb.main.id}"
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_lb_target_group.main.id}"
type = "forward"
}
}
resource "aws_lb_listener" "main-tls" {
load_balancer_arn = "${aws_lb.main.id}"
port = "443"
protocol = "HTTPS"
certificate_arn = "${module.acm.this_acm_certificate_arn}"
default_action {
target_group_arn = "${aws_lb_target_group.main.id}"
type = "forward"
}
}
```
我必须承认,我在这里搞砸了好几次。我不得不在 AWS 控制台中四处翻弄以弄清楚我做错了什么。这当然不是一个“轻松”的过程而且我之前已经做过很多次了。老实说在这一点上Kubernetes 看起来对我很有启发性,但我意识到这是因为我对它非常熟悉。幸运的是我能够使用托管的 Kubernetes 平台(预装了 external-dns 和 cert-manager我真的很想知道我漏掉了 Fargate 什么增值的地方。它真的感觉不那么简单。
经过一番折腾,我现在有一个可以工作的 ECS 服务。包括服务在内的最终定义如下所示:
```
data "aws_ecs_task_definition" "app" {
task_definition = "${var.name}"
depends_on = ["aws_ecs_task_definition.app"]
}
resource "aws_ecs_service" "app" {
name = "${var.name}"
cluster = "${module.ecs.this_ecs_cluster_id}"
task_definition = "${data.aws_ecs_task_definition.app.family}:${max(aws_ecs_task_definition.app.revision, data.aws_ecs_task_definition.app.revision)}"
desired_count = "${var.ecs_service_desired_count}"
launch_type = "FARGATE"
deployment_maximum_percent = "${var.ecs_service_deployment_maximum_percent}"
deployment_minimum_healthy_percent = "${var.ecs_service_deployment_minimum_healthy_percent}"
network_configuration {
subnets = ["${values(local.private_subnets)}"]
security_groups = ["${module.app_sg.this_security_group_id}"]
}
load_balancer {
target_group_arn = "${aws_lb_target_group.main.id}"
container_name = "app"
container_port = "${var.app_port}"
}
depends_on = [
"aws_lb_listener.main",
]
}
```
我觉得我已经接近完成了,但后来我记起了我只完成了最初的“入门”文档中所需的 3 个步骤中的 2 个,我仍然需要定义 ECS 群集。
#### 集群
多亏了这个现成的[模块][10],定义运行所有这些东西的集群实际上非常简单。
```
module "ecs" {
source = "terraform-aws-modules/ecs/aws"
version = "v1.1.0"
name = "${var.name}"
}
```
这里让我感到惊讶的是为什么我必须定义一个集群。作为一个相当熟悉 ECS 的人,你会觉得你需要一个集群,但我试图从一个必须经历这个过程的新人的角度来考虑这一点 —— 对我来说Fargate 标榜自己是“Serverless”的但你仍需要定义集群这似乎很令人惊讶。当然这是一个小细节但它确实盘旋在我的脑海里。
### 告诉我你的 Secret
在这个阶段,我很高兴我成功地运行了一些东西。然而,我的原始的成功标准缺少一些东西。如果我们回到任务定义那里,你会记得我的应用程序有一个存放密码的环境变量:
```
container_environment_variables = [
{
name = "USER"
value = "${var.user}"
},
{
name = "PASSWORD"
value = "${var.password}"
}
]
```
如果我在 AWS 控制台中查看我的任务定义,我的密码就在那里,明晃晃的明文。我希望不要这样,所以我开始尝试将其转化为其他东西,类似于 [Kubernetes 的 Secret 管理][11]。
#### AWS SSM
Fargate / ECS 执行<ruby>secret 管理<rt>secret management</rt></ruby>部分的方式是使用 [AWS SSM][12]此服务的全名是“AWS 系统管理器参数存储库AWS Systems Manager Parameter Store”但我不想使用这个名称因为坦率地说这个名字太愚蠢了
AWS 文档很好的[涵盖了这个内容][13],因此我开始将其转换为 terraform。
##### 指定秘密信息
首先,你必须定义一个参数并为其命名。在 terraform 中,它看起来像这样:
```
resource "aws_ssm_parameter" "app_password" {
name = "${var.app_password_param_name}" # The name of the value in AWS SSM
type = "SecureString"
value = "${var.app_password}" # The actual value of the password, like correct-horse-battery-stable
}
```
显然,这里的关键部分是 “SecureString” 类型。这会使用默认的 AWS KMS 密钥来加密数据,这对我来说并不是很直观。这比 Kubernetes 的 Secret 管理具有巨大优势,默认情况下,这些 Secret 在 etcd 中是不加密的。
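LCTT 译注:作为补充,可以用 AWS CLI 验证 SecureString 的行为:默认读取不会返回明文,需要显式加上 `--with-decryption`(下面的参数名 `/app/password` 是假设的):

```
# 默认读取SecureString 的值以加密形式返回
$ aws ssm get-parameter --name "/app/password"

# 显式解密读取(需要相应的 KMS 权限)
$ aws ssm get-parameter --name "/app/password" --with-decryption
```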
然后我为 ECS 指定了另一个本地值映射,并将其作为 Secret 参数传递:
```
container_secrets = [
{
name = "PASSWORD"
valueFrom = "${var.app_password_param_name}"
},
]
module "container_definition_app" {
source = "cloudposse/ecs-container-definition/aws"
version = "v0.7.0"
container_name = "${var.name}"
container_image = "${var.image}"
container_cpu = "${var.ecs_task_cpu}"
container_memory = "${var.ecs_task_memory}"
container_memory_reservation = "${var.container_memory_reservation}"
port_mappings = [
{
containerPort = "${var.app_port}"
hostPort = "${var.app_port}"
protocol = "tcp"
},
]
environment = "${local.container_environment_variables}"
secrets = "${local.container_secrets}"
```
##### 出了个问题
此刻,我重新部署了我的任务定义,并且非常困惑。为什么任务没有正确拉起?当新的任务定义(版本 8可用时我一直在控制台中看到正在运行的应用程序仍在使用先前的任务定义版本 7。解决这件事花费的时间比我预期的要长但是在控制台的事件屏幕上我注意到了 IAM 错误。我错过了一个步骤,容器无法从 AWS SSM 中读取 Secret 信息,因为它没有正确的 IAM 权限。这是我第一次真正对整个这件事情感到沮丧。从用户体验的角度来看,这里的反馈非常*糟糕*。如果我没有发觉的话,我会认为一切都很好,因为仍然有一个任务正在运行,我的应用程序仍然可以通过正确的 URL 访问 —— 只不过是旧的配置而已。
在 Kubernetes 里,我会清楚地看到 pod 定义中的错误。Fargate 可以确保我的应用不会停止,这绝对是太棒了,但作为一名运维,我需要一些关于发生了什么的实际反馈。这真的不够好。我真的希望 Fargate 团队的人能够读到这篇文章,改善这种体验。
### 就这样了
到这里就结束了,我的应用程序正在运行,也符合我的所有标准。我确实意识到我做了一些改进,其中包括:
* 定义一个 cloudwatch 日志组,这样我就可以正确地写日志了
* 添加了一个 route53 托管区域,使整个事情从 DNS 角度更容易自动化
* 修复并重新调整了 IAM 权限,这里太宽泛了
但老实说,现在我想反思一下这段经历。我写了一个关于我的经历的 [推特会话][14],然后花了其余时间思考我在这里的真实感受。
### 代价
经过一夜的反思,我意识到无论你是使用 Fargate 还是 Kubernetes这个过程都大致相同。最让我感到惊讶的是尽管我经常听说 Fargate “更容易”,但我真的没有看到任何超过 Kubernetes 平台的好处。现在,如果你正在构建 Kubernetes 集群,我绝对可以看到这里的价值 —— 管理节点和控制面板只是不必要的开销,问题是 —— 基于 Kubernetes 的平台的大多数消费者都*没有*这样做。如果你很幸运能够使用 GKE你几乎不需要考虑集群的管理你可以使用单个 `gcloud` 命令来运行集群。我经常使用 Digital Ocean 的 Kubernetes 托管服务,我可以肯定地说它就像操作 Fargate 集群一样简单,实际上在某种程度上它更容易。
必须定义一些基础设施来运行你的容器就是此时的代价。谷歌本周可能刚刚使用他们的 [Google Cloud Run][15] 产品改变了游戏规则,但他们在这一领域的领先优势远远领先于其他所有人。
从这整个经历中,我可以肯定的说:*大规模运行容器仍然很难。*它需要思考,需要领域知识,需要运维和开发人员之间的协作。它还需要一个基础来构建 —— 任何基于 AWS 的操作都需要事先定义和运行一些基础架构。我对一些公司似乎渴望的 “NoOps” 概念非常感兴趣。我想如果你正在运行一个无状态应用程序,你可以把它全部放在一个 lambda 函数和一个 API 网关中,这可能不错,但我们是否真的适合在任何一种企业环境中这样做?我真的不这么认为。
#### 公平比较
令我印象深刻的另一个现实是,技术 A 和技术 B 之间的比较通常不太公平,我经常在 AWS 上看到这一点。这种实际情况往往与 Jeff Barr 博客文章截然不同。如果你是一家足够小的公司,你可以使用 AWS 控制台在 AWS 中部署你的应用程序并接受所有默认值,这绝对更容易。但是,我不想使用默认值,因为默认值几乎是不适用于生产环境的。一旦你开始剥离掉云服务商服务的层面,你就会开始意识到最终你仍然是在运行软件 —— 它仍然需要设计良好、部署良好、运行良好。我相信 AWS 和 Kubernetes 以及所有其他云服务商的增值服务使得它更容易运行、设计和操作,但它绝对不是免费的。
#### Kubernetes 的争议
最后就是:如果你将 Kubernetes 纯粹视为一个容器编排工具,你可能会喜欢 Fargate。然而随着我对 Kubernetes 越来越熟悉,我开始意识到它作为一种技术的重要性 —— 不仅因为它是一个伟大的容器编排工具,而且因为它的设计模式 —— 它是声明性的、API 驱动的平台。 在*整个* Fargate 过程期间发生的一个简单的事情是如果我删除这里某个东西Fargate 不一定会为我重新创建它。自动缩放很不错,不需要管理服务器和操作系统的补丁及更新也很棒,但我觉得因为无法使用 Kubernetes 自我修复和 API 驱动模型而失去了很多。当然Kubernetes 有一个学习曲线但从这里的体验来看Fargate 也是如此。
### 总结
尽管我在这个过程中遭遇了困惑,但我确实很喜欢这种体验。我仍然相信 Fargate 是一项出色的技术AWS 团队对 ECS/Fargate 所做的工作确实非常出色。然而,我的观点是,这绝对不比 Kubernetes “更容易”,只是……难点不同。
在生产环境中运行容器时出现的问题大致相同。如果你从这篇文章中有所收获,它应该是:*不管你选择哪种方式,都有运维开销*。不要以为你选择了某样东西,你的世界就会变得更轻松。我个人的意见是:如果你有一个运维团队,而你的公司要为多个应用程序团队部署容器 —— 选择一种技术并围绕它构建流程和工具,以使其更容易。
有一点人们说得肯定没错:借助一些工具和技巧,可以让一种技术更容易使用。在现阶段,谈到 Fargate下面的漫画总结了我的感受
![][16]
--------------------------------------------------------------------------------
via: https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html
作者:[Lee Briggs][a]
选题:[lujun9972][b]
译者:[Bestony](https://github.com/Bestony)
校对:[wxy](https://github.com/wxy), 临石(阿里云智能技术专家)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://leebriggs.co.uk/
[b]: https://github.com/lujun9972
[1]: https://matthias-endler.de/2019/maybe-you-dont-need-kubernetes/
[2]: https://aws.amazon.com/fargate/
[3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html
[4]: https://i.imgur.com/YfMyXBdl.png
[5]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
[6]: https://github.com/cloudposse/terraform-aws-ecs-container-definition
[7]: https://leebriggs.co.uk/blog/2018/05/08/kubernetes-config-mgmt.html
[8]: https://github.com/kubernetes-incubator/external-dns
[9]: https://github.com/jetstack/cert-manager
[10]: https://github.com/terraform-aws-modules/terraform-aws-ecs
[11]: https://kubernetes.io/docs/concepts/configuration/secret/
[12]: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
[13]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
[14]: https://twitter.com/briggsl/status/1116870900719030272
[15]: https://cloud.google.com/run/
[16]: https://i.imgur.com/Bx7Q50Jl.jpg


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Moelf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Look Back at the History of Firefox)
[#]: via: (https://itsfoss.com/history-of-firefox)
[#]: author: (John Paul https://itsfoss.com/author/john/)
A Look Back at the History of Firefox
======
The Firefox browser has been a mainstay of the open-source community for a long time. For many years it was the default web browser on (almost) all Linux distros and the lone obstacle to Microsoft's total dominance of the internet. This browser has roots that go back all the way to the very early days of the internet. Since this week marks the 30th anniversary of the World Wide Web, there is no better time to talk about how Firefox became the browser we all know and love.
### Early Roots
In the early 1990s, a young man named [Marc Andreessen][1] was working on his bachelor's degree in computer science at the University of Illinois. While there, he started working for the [National Center for Supercomputing Applications][2]. During that time [Sir Tim Berners-Lee][3] released an early form of the web standards that we know today. Marc [was introduced][4] to a very primitive web browser named [ViolaWWW][5]. Seeing that the technology had potential, Marc and Eric Bina created an easy-to-install browser for Unix named [NCSA Mosaic][6]. The first alpha was released in June 1993. By September, there were ports to Windows and Macintosh. Mosaic became very popular because it was easier to use than other browsing software.
In 1994, Marc graduated and moved to California. He was approached by Jim Clark, who had made his money selling computer hardware and software. Clark had used Mosaic and saw the financial possibilities of the internet. Clark recruited Marc and Eric to start an internet software company. The company was originally named Mosaic Communications Corporation, however, the University of Illinois did not like [their use of the name Mosaic][7]. As a result, the company name was changed to Netscape Communications Corporation.
The company's first project was an online gaming network for the Nintendo 64, but that fell through. The first product they released was a web browser named Mosaic Netscape 0.9, subsequently renamed Netscape Navigator. Internally, the browser project was codenamed mozilla, which stood for "Mosaic killer". An employee created a cartoon of a [Godzilla-like creature][8]. They wanted to take out the competition.
![Early Firefox Mascot][9]Early Mozilla mascot at Netscape
They succeeded mightily. At the time, one of the biggest advantages that Netscape had was the fact that its browser looked and functioned the same on every operating system. Netscape described this as giving everyone a level playing field.
As usage of Netscape Navigator increased, the market share of NCSA Mosaic cratered.
But that didn't last for long. In the summer of 1995, Microsoft released Internet Explorer 1.0, which was based on Spyglass Mosaic, itself derived from NCSA Mosaic. The [browser wars][11] had begun.
Over the next few years, Netscape and Microsoft competed for dominance of the internet. Each added features to compete with the other. Unfortunately, Internet Explorer had an advantage because it came bundled with Windows. On top of that, Microsoft had more programmers and money to throw at the problem. Toward the end of 1997, Netscape started to run into financial problems.
### Going Open Source
![Mozilla Firefox][12]
In January 1998, Netscape open-sourced the code of the Netscape Communicator 4.0 suite. The [goal][13] was to “harness the creative power of thousands of programmers on the Internet by incorporating their best enhancements into future versions of Netscapes software. This strategy is designed to accelerate development and free distribution by Netscape of future high-quality versions of Netscape Communicator to business customers and individuals.”
The project was to be shepherded by the newly created Mozilla Organization. However, the code from Netscape Communicator 4.0 proved to be very difficult to work with due to its size and complexity. On top of that, several parts could not be open sourced because of licensing agreements with third parties. In the end, it was decided to rewrite the browser from scratch using the new [Gecko][14] rendering engine.
In November 1998, Netscape was acquired by AOL in a [stock swap valued at $4.2 billion][15].
Starting from scratch was a major undertaking. The rewritten suite shipped as Mozilla 1.0 in June 2002, and it ran on multiple operating systems, such as Linux, Mac OS, Microsoft Windows, and Solaris.
The following year, AOL announced that they would be shutting down browser development. The Mozilla Foundation was subsequently created to handle the Mozilla trademarks and handle the financing of the project. Initially, the Mozilla Foundation received $2 million in donations from AOL, IBM, Sun Microsystems, and Red Hat.
In March 2003, Mozilla [announced plans][16] to separate the suite into stand-alone applications because of creeping software bloat. The stand-alone browser was initially named Phoenix. However, the name had to be changed because of a trademark dispute with the BIOS manufacturer Phoenix Technologies, which shipped a BIOS-based browser of its own. Phoenix was renamed Firebird, only to run afoul of the Firebird database project. The browser was renamed once more, to the Firefox that we all know.
At the time, [Mozilla said][17], “Weve learned a lot about choosing names in the past year (more than we would have liked to). We have been very careful in researching the name to ensure that we will not have any problems down the road. We have begun the process of registering our new trademark with the US Patent and Trademark office.”
![Mozilla Firefox 1.0][18]Firefox 1.0: [Picture Credit][19]
The first official release of Firefox was [0.8][20] on February 8, 2004. Version 1.0 followed on November 9, 2004, and versions 2.0 and 3.0 followed in October 2006 and June 2008 respectively. Each major release brought with it many new features and improvements. In many respects, Firefox pulled ahead of Internet Explorer in terms of features and technology, but IE still had more users.
That changed with the release of Googles Chrome browser. In the months before the release of Chrome in September 2008, Firefox accounted for 30% of all [browser usage][21] and IE had over 60%. According to StatCounters [January 2019 report][22], Firefox accounts for less than 10% of all browser usage, while Chrome has over 70%.
**Fun fact:** Contrary to popular belief, the logo of Firefox doesn't feature a fox. It's actually a [red panda][23]. In Chinese, "fire fox" is another name for the red panda.
### The Future
As noted above, Firefox currently has the lowest market share in its recent history. There was a time when a bunch of browsers were based on Firefox, such as the early versions of the [Flock browser][24]. Now most browsers are based on Google technology, such as Opera and Vivaldi. Even Microsoft is giving up on browser development and [joining the Chromium bandwagon][25].
This might seem like quite a downer after the heights of the early Netscape years. But don't forget what Firefox has accomplished. A group of developers from around the world created the second most used browser in the world. They clawed 30% market share away from Microsoft's monopoly once; they can do it again. After all, they have us, the open source community, behind them.
The fight against the monopoly is one of the several reasons [why I use Firefox][26]. Mozilla regained some of its lost market share with the revamped release of [Firefox Quantum][27], and I believe that it will continue on the upward path.
What event from Linux and open source history would you like us to write about next? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][28].
--------------------------------------------------------------------------------
via: https://itsfoss.com/history-of-firefox
Author: [John Paul][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
[5]: http://viola.org/
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
[8]: http://www.davetitus.com/mozilla/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
[11]: https://en.wikipedia.org/wiki/Browser_wars
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
[14]: https://en.wikipedia.org/wiki/Gecko_(software)
[15]: http://news.cnet.com/2100-1023-218360.html
[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
[20]: https://en.wikipedia.org/wiki/Firefox_version_history
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
[23]: https://en.wikipedia.org/wiki/Red_panda
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
[26]: https://itsfoss.com/why-firefox/
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
[28]: http://reddit.com/r/linuxusersgroup
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1


@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Anti-lasers could give us perfect antennas, greater data capacity)
[#]: via: (https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Anti-lasers could give us perfect antennas, greater data capacity
======
Anti-lasers get close to providing a 100% efficient signal channel for data, say engineers.
![Guirong Hao / Valery Brozhinsky / Getty Images][1]
Playing laser light backwards could adjust data transmission signals so that they perfectly match receiving antennas. The fine-tuning of signals like this, not achieved with such detail before, could create more capacity for ever-increasing data demand.
"Imagine, for example, that you could adjust a cell phone signal exactly the right way, so that it is perfectly absorbed by the antenna in your phone," says Stefan Rotter of the Institute for Theoretical Physics of Technische Universität Wien (TU Wien) in a [press release][2].
Rotter is talking about “Random Anti-Laser,” a project he has been a part of. The idea behind it is that if one could time-reverse a laser, then the laser (right now considered the best light source ever built) becomes the best available light absorber. Perfect absorption of a signal wave would mean that all of the data-carrying energy is absorbed by the receiving device, thus it becomes 100% efficient.
“The easiest way to think about this process is in terms of a movie showing a conventional laser sending out laser light, which is played backwards,” the TU Wien article says. The anti-laser is the exact opposite of the laser: instead of sending specific colors perfectly when energy is applied, it receives specific colors perfectly.
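For readers who want the underlying picture, the standard coherent-perfect-absorber argument can be sketched in one line. This is textbook scattering-matrix reasoning, not notation taken from the TU Wien paper itself:

```
% At threshold, a laser corresponds to a pole of the scattering matrix
% S(\omega) reaching a real frequency \omega_0 (output with no input).
% Time reversal (t \to -t) swaps gain for loss and maps that pole to a
% zero of S, so there is an input that produces no output at all:
\exists\, \psi_{\mathrm{in}} \neq 0 :\quad
S(\omega_0)\,\psi_{\mathrm{in}} = 0,
\qquad \psi_{\mathrm{in}} = \text{time-reverse of the lasing output mode.}
```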
Counter-intuitively, it's the random scattering of light in all directions that's behind the engineering. However, the university group in Vienna, Austria, performs precise calculations on that randomly scattered, split signal, and those calculations let the researchers harness the light.
### How the anti-laser technology works
The microwave-based, experimental device the researchers have built in the lab to prove the idea doesnt just potentially apply to cell phones; wireless internet of things (IoT) devices would also get more data throughput. How it works: The device consists of an antenna-containing chamber encompassed by cylinders, all arranged haphazardly, the researchers explain. The cylinders distribute an elaborate, arbitrary wave pattern “similar to [throwing] stones in a puddle of water, at which water waves are deflected.”
Measurements then take place to identify exactly how the signals return. The team involved, which also includes collaborators from the University of Nice, France, then “characterize the random structure and calculate the wave front that is completely swallowed by the central antenna at the right absorption strength.” Ninety-nine point eight percent of the signal is absorbed, making it virtually perfect. Data throughput, range, and other variables thus improve.
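In linear-algebra terms, “calculating the wave front that is completely swallowed” means finding the input that minimizes the scattered output; given a measured scattering matrix, that is its right-singular vector with the smallest singular value. A toy numpy illustration, with a random matrix standing in for real measurements:

```
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a measured scattering matrix S: output = S @ input.
# A real experiment would measure S between the ports of the chamber.
S = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
S *= 0.3  # keep the toy system lossy (sub-unitary)

# The input wave front that is absorbed best is the right-singular
# vector of S with the smallest singular value.
U, sigma, Vh = np.linalg.svd(S)
best_input = Vh[-1].conj()  # unit-norm input vector

absorbed = 1 - np.linalg.norm(S @ best_input) ** 2
print(f"absorbed fraction of the optimal wave front: {absorbed:.3f}")
```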
Achieving perfect antennas has been pretty much only theoretically possible for engineers to date. Reflected energy (RF bounced back into the transmitter by antenna inefficiencies) has always been an issue in general. Reflections from surfaces, too, have always been a problem.
“Think about a mobile phone signal that is reflected several times before it reaches your cell phone,” Rotter says. Its not easy to get the tuning right — as the antennas physical locations move, reflected surfaces become different.
### Scattering lasers
Scattering, similar to that used in this project, is becoming more important in communications overall. “Waves that are being scattered in a complex way are really all around us,” the group says.
An example is random lasers (on which the group's anti-laser is based), which, unlike traditional lasers, do not use reflective surfaces but instead trap scattered light and then “emit a very complicated, system-specific laser field when supplied with energy.” The anti-random-laser developed by Rotter and his group simply reverses that in time:
“Instead of a light source that emits a specific wave depending on its random inner structure, it is also possible to build the perfect absorber”: the anti-random-laser.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386879/anti-lasers-could-give-us-perfect-antennas-greater-data-capacity.html#tk.rss_all
Author: [Patrick Nelson][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/data_cubes_transformation_conversion_by_guirong_hao_gettyimages-1062387214_plus_abstract_binary_by_valerybrozhinsky_gettyimages-865457032_3x2_2400x1600-100790211-large.jpg
[2]: https://www.tuwien.ac.at/en/news/news_detail/article/126574/
[3]: https://www.networkworld.com/article/3203489/lan-wan/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world


@ -0,0 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google partners with Intel, HPE and Lenovo for hybrid cloud)
[#]: via: (https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Google partners with Intel, HPE and Lenovo for hybrid cloud
======
Google boosted its on-premises and cloud connections with Kubernetes and serverless computing.
![Ilze Lucero \(CC0\)][1]
Still struggling to get its Google Cloud business out of single-digit market share, Google this week introduced new partnerships with Lenovo and Intel to help bolster its hybrid cloud offerings, both built on Google's Kubernetes container technology.
At Google's Next 19 show this week, Intel and Google said they will collaborate on Google's Anthos, a new reference design based on the second-generation Xeon Scalable processor introduced last week and an optimized Kubernetes software stack designed to deliver increased workload portability between public and private cloud environments.
As part of the Anthos announcement, Hewlett Packard Enterprise (HPE) said it has validated Anthos on its ProLiant servers, while Lenovo has done the same for its ThinkAgile platform. This solution will enable customers to get a consistent Kubernetes experience between Google Cloud and their on-premises HPE or Lenovo servers. No official word from Dell yet, but they can't be far behind.
Users will be able to manage their Kubernetes clusters and enforce policy consistently across environments either in the public cloud or on-premises. In addition, Anthos delivers a fully integrated stack of hardened components, including OS and container runtimes that are tested and validated by Google, so customers can upgrade their clusters with confidence and minimize downtime.
### What is Google Anthos?
Google formally introduced [Anthos][4] at this years show. Anthos, formerly Cloud Services Platform, is meant to allow users to run their containerized applications without spending time on building, managing, and operating Kubernetes clusters. It runs both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in your data center with GKE On-Prem. Anthos will also let you manage workloads running on third-party clouds such as Amazon Web Services (AWS) and Microsoft Azure.
Google also announced the beta release of Anthos Migrate, which auto-migrates virtual machines (VM) from on-premises or other clouds directly into containers in GKE with minimal effort. This allows enterprises to migrate their infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications.
Intel said it will publish the production design as an Intel Select Solution, as well as a developer platform, making it available to anyone who wants it.
### Serverless environments
Google isnt stopping with Kubernetes containers, its also pushing ahead with serverless environments. [Cloud Run][5] is Googles implementation of serverless computing, which is something of a misnomer. You still run your apps on servers; you just arent using a dedicated physical server. It is stateless, so resources are not allocated until you actually run or use the application.
Cloud Run is a fully serverless offering that takes care of all infrastructure management, including the provisioning, configuring, scaling, and managing of servers. It automatically scales up or down within seconds, even down to zero depending on traffic, ensuring you pay only for the resources you actually use. Cloud Run can be used on GKE, offering the option to run side by side with other workloads deployed in the same cluster.
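Cloud Run's container contract is minimal: the service must listen on the port passed in the `PORT` environment variable (8080 by default). A bare-bones, illustrative Python service that satisfies it might look like this:

```
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a stateless container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Cloud Run injects the port to bind via the PORT environment variable.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```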
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all
Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/03/cubes_blocks_squares_containers_ilze_lucero_cc0_via_unsplash_1200x800-100752172-large.jpg
[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://cloud.google.com/blog/topics/hybrid-cloud/new-platform-for-managing-applications-in-todays-multi-cloud-world
[5]: https://cloud.google.com/blog/products/serverless/announcing-cloud-run-the-newest-member-of-our-serverless-compute-stack
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@ -0,0 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE and Nutanix partner for hyperconverged private cloud systems)
[#]: via: (https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
HPE and Nutanix partner for hyperconverged private cloud systems
======
Both companies will sell HP ProLiant appliances with Nutanix software but to different markets.
![Hewlett Packard Enterprise][1]
Hewlett Packard Enterprise (HPE) has partnered with Nutanix to offer Nutanixs hyperconverged infrastructure (HCI) software available as a managed private cloud service and on HPE-branded appliances.
As part of the deal, the two companies will be competing against each other in hardware sales, sort of. If you want the consumption model you get through HPEs GreenLake, where your usage is metered and you pay for only the time you use it (similar to the cloud), then you would get the ProLiant hardware from HPE.
If you want an appliance model where you buy the hardware outright, like in the traditional sense of server sales, you would get the same ProLiant through Nutanix.
As it is, HPE GreenLake offers multiple cloud offerings to customers, including virtualization courtesy of VMware and Microsoft. With the Nutanix partnership, HPE is adding Nutanixs free Acropolis hypervisor to its offerings.
“Customers get to choose an alternative to VMware with this,” said Pradeep Kumar, senior vice president and general manager of HPEs Pointnext consultancy. “They like the Acropolis license model, since its license-free. Then they have choice points so pricing is competitive. Some like VMware, and I think its our job to offer them both and they can pick and choose.”
Kumar added that the whole Nutanix stack is 15% to 18% less expensive with Acropolis than with a VMware-powered system, since customers save on the hypervisor.
The HPE-Nutanix partnership offers a fully managed hybrid cloud infrastructure delivered as a service and deployed in a customer's data center or co-location facility. The managed private cloud service gives enterprises a hyperconverged environment in-house without having to manage the infrastructure themselves and, more importantly, without the burden of ownership. GreenLake operates more like a lease than ownership.
### HPE GreenLake's private cloud services promise to significantly reduce costs
HPE is pushing hard on GreenLake, which basically mimics cloud platform pricing models of paying for what you use rather than outright ownership. Kumar said HPE projects the consumption model will account for 30% of HPEs business in the next few years.
GreenLake makes some hefty promises. According to Nutanix-commissioned IDC research, customers will achieve a 60% reduction in the five-year cost of operations, while an HPE-commissioned Forrester report found customers benefit from a 30% capex savings, due to the eliminated need for overprovisioning, and a 90% reduction in support and professional services costs.
By shifting to an IT as a Service model, HPE claims to provide a 40% increase in productivity by reducing the support load on IT operations staff and to shorten the time to deploy IT projects by 65%.
The two new offerings from the partnership, HPE GreenLake's private cloud service running Nutanix software and the HPE-branded appliances integrated with Nutanix software, are expected to be available during the third quarter of 2019, the companies said.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all
Author: [Andy Patrizio][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world


@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Govt warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software)
[#]: via: (https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Govt warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software
======
VPN packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies
![Getty Images][1]
The Department of Homeland Security has issued a warning that some [VPN][2] packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies, allowing nefarious actors an opening to invade and take control over an end user's system.
The DHSs Cybersecurity and Infrastructure Security Agency (CISA) [warning][3] comes on the heels of a notice from Carnegie Mellon's CERT that multiple VPN applications store the authentication and/or session cookies insecurely in memory and/or log files.
“If an attacker has persistent access to a VPN user's endpoint or exfiltrates the cookie using other methods, they can replay the session and bypass other authentication methods,” [CERT wrote][6]. “An attacker would then have access to the same applications that the user does through their VPN session.”
According to the CERT warning, the following products and versions store the cookie insecurely in log files:
* Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS ([CVE-2019-1573][7])
* Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
The following products and versions store the cookie insecurely in memory:
* Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS.
* Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
* Cisco AnyConnect 4.7.x and prior.
CERT says that Palo Alto Networks GlobalProtect version 4.1.1 [patches][8] this vulnerability.
In the CERT warning, F5 stated it has been aware of the insecure memory storage since 2013, and it has not yet been patched. More information can be found [here][9]. F5 also stated it has been aware of the insecure log storage since 2017 and fixed it in versions 12.1.3 and 13.1.0 and onward. More information can be found [here][10].
CERT said it is unaware of any patches at the time of publishing for Cisco AnyConnect and Pulse Secure Connect Secure.
CERT credited the [National Defense ISAC Remote Access Working Group][12] for reporting the vulnerability.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all
Author: [Michael Cooney][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/broken-chain_metal_link_breach_security-100777433-large.jpg
[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
[3]: https://www.us-cert.gov/ncas/current-activity/2019/04/12/Vulnerability-Multiple-VPN-Applications
[4]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.kb.cert.org/vuls/id/192371/
[7]: https://nvd.nist.gov/vuln/detail/CVE-2019-1573
[8]: https://securityadvisories.paloaltonetworks.com/Home/Detail/146
[9]: https://support.f5.com/csp/article/K14969
[10]: https://support.f5.com/csp/article/K45432295
[11]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[12]: https://ndisac.org/workinggroups/
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world


@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nyansas Voyance expands to the IoT)
[#]: via: (https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Nyansas Voyance expands to the IoT
======
![Brandon Mowinkel \(CC0\)][1]
Nyansa announced today that their flagship Voyance product can now apply its AI-based secret sauce to [IoT][2] devices, over and above the networking equipment and IT endpoints it could already manage.
Voyance, a network management product that leverages AI to automate the discovery of devices on the network and identify unusual behavior, has been around for two years now, and Nyansa says that it's being used to observe a total of 25 million client devices operating across roughly 200 customer networks.
Its a software-only product (available either via public SaaS or private cloud) that works by scanning a customers network and identifying every device attached to it, then establishing a behavioral baseline that will let it flag suspicious actions (e.g., sending a lot more data than other devices of its kind, connecting to unusual servers) and even perform automated root-cause analysis of network issues.
The process doesnt happen instantaneously, particularly the creation of the baseline, but its designed to be minimally invasive to existing network management frameworks and easy to implement.
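Nyansa hasn't published its models, but the basic shape of a behavioral baseline is easy to sketch: group devices by type, learn each type's normal range for a metric, and flag deviations. A deliberately simplified Python illustration, not Voyance's actual method:

```
from statistics import mean, stdev

# (device_id, device_type, bytes_sent_last_hour) -- toy observations.
observations = [
    ("pump-01", "infusion-pump", 1_200), ("pump-02", "infusion-pump", 1_350),
    ("pump-03", "infusion-pump", 980),   ("pump-04", "infusion-pump", 1_100),
    ("pump-05", "infusion-pump", 95_000),  # suspiciously chatty
]

def flag_outliers(obs, z_threshold=3.0):
    """Flag devices sending far more than peers of the same type.

    The baseline for each device comes from its peers (leave-one-out),
    so a single noisy device cannot hide inside its own baseline.
    """
    for dev_id, dev_type, sent in obs:
        peers = [s for d, t, s in obs if t == dev_type and d != dev_id]
        if len(peers) < 2:
            continue  # not enough data to form a baseline
        mu, sd = mean(peers), stdev(peers)
        if sd and (sent - mu) / sd > z_threshold:
            yield dev_id, sent, mu

for dev_id, sent, baseline in flag_outliers(observations):
    print(f"{dev_id}: sent {sent} bytes vs. peer baseline ~{baseline:.0f}")
```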
Nyansa said that the medical field has been one of the key targets for the newly IoT-enabled iteration of Voyance, and one early customer, Baptist Health (a Florida-based healthcare company that runs four hospitals and several other clinics and practices), said that Voyance IoT has offered a new level of visibility into the business's complex array of connected diagnostic and treatment machines.
“In the past we didnt have the ability to identify security concerns in this way, related to rogue devices on the enterprise network, and now were able to do that,” said CISO Thad Phillips.
While spiraling network complexity isnt an issue confined to the IoT, theres a strong argument that the number and variety of devices connected to an IoT-enabled network represent a new challenge to network management, particularly in light of the fact that many such devices arent particularly secure.
“They're not manufactured by networking vendors or security vendors, so from a performance standpoint, they have a lot of quirks … and on the security side, that's sort of a big problem there as well,” said Anand Srinivas, Nyansa's co-founder and CTO.
Enabling the Voyance platform to identify and manage IoT devices along with traditional endpoints seems to be mostly a matter of adding new device signatures to the system, but Enterprise Management Associates research director Shamus McGillicuddy said that, while the systems designed for automation and ease of use, AIOps products like Voyance do need to be managed to make sure that theyre functioning correctly.
“Anything based on machine learning is going to take a while to make sure it understands your environment and you might have to retrain it,” he said. “Theres always going to be more and more things connecting to IP networks, and its just going to be a question of building up a database.”
Voyance IoT is available now. Pricing starts at $16,000 per year, and goes up with the number of total devices managed. (Current Voyance users can manage up to 100 IoT devices at no additional cost.)
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all
Author: [Jon Gold][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/geometric_architecture_ceiling_structure_lines_connections_networks_perspective_by_brandon_mowinkel_cc0_via_unsplash_2400x1600-100788530-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world


@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Two tools to help visualize and simplify your data-driven operations)
[#]: via: (https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all)
[#]: author: (Kent McNeil, Vice President of Software, Ciena Blue Planet )
Two tools to help visualize and simplify your data-driven operations
======
Amidst the rising complexity of networks, and influx of data, service providers are striving to keep operational complexity under control. Blue Planets Kent McNeil explains how they can turn this challenge into a huge opportunity, and in fact reduce operational effort by exploiting state-of-the-art graph database visualization and delta-based federation technologies.
![danleap][1]
**Build the picture: Visualize your data**
The Internet of Things (IoT), 5G, smart technology, virtual reality: all of these applications guarantee communications service providers (CSPs) one thing, and that is more data. As networks become increasingly overwhelmed by mounds of data, CSPs are on the hunt for ways to make the most of the intelligence collected and are looking for ways to monetize their services, provide more customizable offerings, and enhance their network performance.
Customer analytics has gone some way towards fulfilling this need for greater insight, but with the rise in the volume and variety of consumer and IoT applications, the influx of data will increase at a phenomenal rate. The data includes not only customer-related data but also device and network data, which adds complexity to the picture. CSPs must harness this information to understand the relationships between any two things, to understand the connections within their data and, ultimately, to leverage it for a better customer experience.
**See the upward graphical trend with graph databases**
Traditional relational databases certainly have their use, but graph databases offer a novel perspective. The visual representation of the relationships between component parts enables CSPs to understand and analyze their networks' characteristics, as well as to act in a timely manner when confronted with any discrepancies.
Graph databases can help CSPs tackle this new challenge, ensuring the data is not just stored, but also processed and analyzed. It enables complex network questions to be asked and answered, ensuring that CSPs are not sidelined as “dumb pipes” in the IoT movement.
The use of graph databases has started to become more mainstream as businesses see the benefits. IBM conducted a generic industry study, entitled “The State of Graph Databases Worldwide”, which found that people are moving to graph databases for speed, performance enhancement of applications, and streamlined operations. The ways in which businesses are using, or planning to use, graph technology are led by network and IT operations, followed by master data management. Performance is a key factor for CSPs, as is personalization, which enables support for more tailored service offerings.
Another advantage of graph databases for CSPs is that of unravelling the complexity of network inventory into a clear, visualized picture; this capability gives CSPs a competitive advantage as speed and performance become increasingly paramount. The need for speed and reliability will increase tenfold as IoT continues its impressive global ramp-up, and operational complexity will grow as the influx of data generated by IoT further challenges the scalability of existing operational environments.
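To make the “complex network questions” claim concrete, here is a toy sketch using Python's networkx. The inventory items are invented, and a production system would use a real graph database rather than an in-memory library:

```
import networkx as nx

# Toy network inventory: nodes are inventory items, edges mean "connects to".
G = nx.Graph()
G.add_edges_from([
    ("customer:acme", "port:olt-3/1/7"),
    ("port:olt-3/1/7", "card:olt-3/1"),
    ("card:olt-3/1", "chassis:olt-3"),
    ("chassis:olt-3", "fiber:ring-west"),
    ("fiber:ring-west", "router:core-1"),
])

# "Which customers does this failing card touch?" is a graph traversal.
failing = "card:olt-3/1"
affected = [n for n in nx.node_connected_component(G, failing)
            if n.startswith("customer:")]
print("affected customers:", affected)

# "How does this customer reach the core?" is a path query.
print(nx.shortest_path(G, "customer:acme", "router:core-1"))
```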
**Change the tide of data with delta-based federation**
New data, updated data, corrected data, deleted data: all of it needs to be managed, in line with regulations, and instantaneously. But this capability does not exist in the reality of many CSPs' Operational Support Systems (OSS). Many still battle with updating data and rely on full uploads of network inventory in order to perform key service fulfillment and assurance tasks. This method is time-intensive and risky due to potential conflicts and inaccuracies. With data being accessed from a variety of systems, CSPs must have a way to effectively home in on only what is required.
Integrating network data into one simplified system limits the impact on the legacy OSS systems. This allows each OSS to continue its specific role, yet to feed data into a single interface, hence enabling teams to see the complete picture and gain efficiencies while launching new services or pinpointing and resolving service and network issues.
A delta-based federation model ensures that an accurate picture is presented and that only the essential changes are applied, reliably and quickly. This simplified method filters the delta changes, reducing the time involved in updating and minimizing the system load and risk. A validation process takes place to catch any errors or issues with the data, so CSPs can apply checks and retain control over modifications.
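Stripped to its core, delta-based federation is a diff between the last synchronized snapshot and the new one, validated before anything is applied. A simplified Python sketch of that idea:

```
def delta(previous: dict, current: dict):
    """Compute adds/updates/deletes between two inventory snapshots."""
    added   = {k: v for k, v in current.items() if k not in previous}
    updated = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    deleted = [k for k in previous if k not in current]
    return added, updated, deleted

def apply_delta(inventory: dict, added, updated, deleted, validate):
    """Apply only the validated changes, instead of a full re-upload."""
    for change in {**added, **updated}.items():
        if not validate(change):
            raise ValueError(f"rejected change: {change}")
    inventory.update(added)
    inventory.update(updated)
    for key in deleted:
        inventory.pop(key, None)

previous = {"port-1": "up", "port-2": "down"}
current  = {"port-1": "up", "port-2": "up", "port-3": "up"}
inv = dict(previous)
apply_delta(inv, *delta(previous, current),
            validate=lambda c: c[1] in {"up", "down"})
print(inv)  # {'port-1': 'up', 'port-2': 'up', 'port-3': 'up'}
```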
**Ride the wave**
Gartner predicts 25 billion connected things globally by 2021, and CSPs are already struggling with the current level, which Gartner estimates at 14.2 billion connected things in 2019. Over the last decade, CSPs have faced significant rises in the levels of data consumed as demand for new services and higher-bandwidth applications has taken off. This data wave is set to continue, and CSPs have two important tools at their disposal to help them ride it. Firstly, CSPs have specialist legacy OSS already in place, which they can leverage as a basis for integrating data and implementing optimized systems. Secondly, they can utilize new technologies in database inventory management: graph databases and delta-based federation. The advantages of effectively integrating network data, visualizing it, and creating a clear map of the interconnections enable CSPs to make critical decisions more quickly and accurately, resulting in more optimized and informed service operations.
[Watch this video to learn more about Blue Planet][2]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all
Author: [Kent McNeil, Vice President of Software, Ciena Blue Planet][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-165721901-100793858-large.jpg
[2]: https://www.blueplanet.com/resources/IT-plus-network-now-a-powerhouse-combination.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVideo&utm_medium=sponsoredpost4


@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What SDN is and where its going)
[#]: via: (https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
What SDN is and where its going
======
Software-defined networking (SDN) established a foothold in cloud computing, intent-based networking, and network security, with Cisco, VMware, Juniper and others leading the charge.
![seedkin / Getty Images][1]
Hardware reigned supreme in the networking world until the emergence of software-defined networking (SDN), a category of technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.
SDN's origins can be traced to a research collaboration between Stanford University and the University of California at Berkeley that ultimately yielded the [OpenFlow][2] protocol in the 2008 timeframe.
OpenFlow is only one of the first SDN canons, but it's a key component because it started the networking software revolution. OpenFlow defined a programmable network protocol that could help manage and direct traffic among routers and switches no matter which vendor made the underlying router or switch.
In the years since its inception, SDN has evolved into a reputable networking technology offered by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open Networking Foundation develops myriad open-source SDN technologies as well.
"Datacenter SDN no longer attracts breathless hype and fevered expectations, but the market is growing healthily, and its prospects remain robust," wrote Brad Casemore, IDC research vice president, data center networks, in a recent report, [_Worldwide Datacenter Software-Defined Networking Forecast, 20182022_][5]*. "*Datacenter modernization, driven by the relentless pursuit of digital transformation and characterized by the adoption of cloudlike infrastructure, will help to maintain growth, as will opportunities to extend datacenter SDN overlays and fabrics to multicloud application environments."
SDN will be increasingly perceived as a form of established, conventional networking, Casemore said.
IDC estimates that the worldwide data center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017-2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.
In 2017, the physical network represented the largest segment of the worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or about 42% of the overall total revenue. In 2022, however, the physical network is expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion attributable to network virtualization overlays/SDN controller software but more than the $3.18 billion for SDN applications.
“We're now at a point where SDN is better understood, where its use cases and value propositions are familiar to most datacenter network buyers and where a growing number of enterprises are finding that SDN offerings offer practical benefits,” Casemore said. “With SDN growth and the shift toward software-based network automation, the network is regaining lost ground and moving into better alignment with a wave of new application workloads that are driving meaningful business outcomes.”
### **What is SDN? **
The idea of programmability is the basis for the most precise definition of what SDN is: technology that separates the control plane management of network devices from the underlying data plane that forwards network traffic.
IDC broadens that definition of SDN by stating: “Datacenter SDN architectures feature software-defined overlays or controllers that are abstracted from the underlying network hardware, offering intent-or policy-based management of the network as a whole. This results in a datacenter network that is better aligned with the needs of application workloads through automated (thereby faster) provisioning, programmatic network management, pervasive application-oriented visibility, and where needed, direct integration with cloud orchestration platforms.”
The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network.
Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for [Pluribus][6].
“At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.”
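Capuano's description can be modeled in a few lines: the controller holds the whole topology, computes paths centrally, and hands each switch only its forwarding entry. A toy Python sketch, illustrative only; a real controller would push these rules to the data plane via a protocol such as OpenFlow:

```
import networkx as nx

# The controller's global view of the topology (edge weights = link cost).
topo = nx.Graph()
topo.add_weighted_edges_from([
    ("s1", "s2", 1), ("s2", "s3", 1), ("s1", "s3", 5),
])

def compute_flow_rules(graph, src_switch, dst_switch):
    """Centralized path computation: each switch on the path gets a rule."""
    path = nx.shortest_path(graph, src_switch, dst_switch, weight="weight")
    # One rule per switch: (match destination) -> next hop on the path.
    return {hop: {"match_dst": dst_switch, "forward_to": nxt}
            for hop, nxt in zip(path, path[1:])}

for switch, rule in compute_flow_rules(topo, "s1", "s3").items():
    print(switch, "->", rule)
```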
### How does SDN support edge computing, IoT and remote access?
A variety of networking trends have played into the central idea of SDN. Distributing computing power to remote sites, moving data center functions to the [edge][7], adopting cloud computing, and supporting [Internet of Things][8] environments: each of these efforts can be made easier and more cost-efficient via a properly configured SDN environment.
Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said. So users can more easily segment an IoT application from the production world if they want, for example.
Some SDN controllers have the smarts to see that the network is getting congested and, in response, pump up bandwidth or processing to make sure remote and edge components dont suffer latency.
SDN technologies also help in distributed locations that have few IT personnel on site, such as an enterprise branch office or service provider central office, said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks.
“Naturally these places require remote and centralized delivery of connectivity, visibility and security. SDN solutions that centralize and abstract control and automate workflows across many places in the network, and their devices, improve operational reliability, speed and experience,” Bushong said.
### **How does SDN support intent-based networking?**
Intent-based networking ([IBN][9]) has a variety of components, but basically is about giving network administrators the ability to define what they want the network to do, and having an automated network management platform create the desired state and enforce policies to ensure what the business wants happens.
“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the provisioning paradigm and dynamic control to regulate infrastructure state is necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving away from the minutia of individual device details and imperative and reactive commands.”
IDC says that intent-based networking “represents an evolution of SDN to achieve even greater degrees of operational simplicity, automated intelligence, and closed-loop functionality.”
For that reason, IBN represents a notable milestone on the journey toward autonomous infrastructure that includes a self-driving network, which will function much like the self-driving car, producing desired outcomes based on what network operators and their organizations wish to accomplish, Casemore stated.
“While the self-driving car has been designed to deliver passengers safely to their destination with minimal human intervention, the self-driving network, as part of autonomous datacenter infrastructure, eventually will achieve similar outcomes in areas such as network provisioning, management, and troubleshooting — delivering applications and data, dynamically creating and altering network paths, and providing security enforcement with minimal need for operator intervention,” Casemore stated.
While IBN technologies are relatively young, Gartner says by 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 in the second quarter of 2018.
### **How does SDN help customers with security?**
SDN enables a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low security network that does not touch any sensitive information. Another segment could have much more fine-grained remote access control with software-based [firewall][10] and encryption policies on it, which allow sensitive data to traverse over it.
“For example, if a customer has an IoT group it doesn't feel is all that mature with regards to security, via the SDN controller you can segment that group off away from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll out security policies across the network from the data center to the edge, and if you do all of this on top of white boxes, deployments can be 30 to 60 percent cheaper than traditional gear.”
The ability to look at a set of workloads and see if they match a given security policy is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice president of product management for Ciscos Nexus and ACI product lines.
"The ability to deploy a whitelist security model like we do with ACI [Application Centric Infrastructure] that lets only specific entities access explicit resources across your network fabric is another key security element SDN enables," Scheibe said.
A growing number of SDN platforms now support [microsegmentation][11], according to Casemore.
“In fact, micro-segmentation has developed as a notable use case for SDN. As SDN platforms are extended to support multicloud environments, they will be used to mitigate the inherent complexity of establishing and maintaining consistent network and security policies across hybrid IT landscapes,” Casemore said.
### **What is SDNs role in cloud computing?**
SDNs role in the move toward [private cloud][12] and [hybrid cloud][13] adoption seems a natural. In fact, big SDN players such as Cisco, Juniper and VMware have all made moves to tie together enterprise data center and cloud worlds.
Cisco's ACI Anywhere package would, for example, let policies configured through Cisco's SDN APIC (Application Policy Infrastructure Controller) use native APIs offered by a public-cloud provider to orchestrate changes within both the private and public cloud environments, Cisco said.
“As organizations look to scale their hybrid cloud environments, it will be critical to leverage solutions that help improve productivity and processes,” said [Bob Laliberte][14], a senior analyst with Enterprise Strategy Group, in a recent [Network World article][15]. “The ability to leverage the same solution, like Ciscos ACI, in your own private-cloud environment as well as across multiple public clouds will enable organizations to successfully scale their cloud environments.”
Growth of public and private clouds and enterprises' embrace of distributed multicloud application environments will have an ongoing and significant impact on data center SDN, representing both a challenge and an opportunity for vendors, said IDCs Casemore.
“Agility is a key attribute of digital transformation, and enterprises will adopt architectures, infrastructures, and technologies that provide for agile deployment, provisioning, and ongoing operational management. In a datacenter networking context, the imperative of digital transformation drives adoption of extensive network automation, including SDN,” Casemore said.
### Where does SD-WAN fit in?
The software-defined wide area network ([SD-WAN][16]) is a natural application of SDN that extends the technology over a WAN. While the SDN architecture is typically the underpinning in a data center or campus, SD-WAN takes it a step further.
At its most basic, SD-WAN lets companies aggregate a variety of network connections including MPLS, 4G LTE and DSL into a branch or network edge location and have a software management platform that can turn up new sites, prioritize traffic and set security policies.
SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized for data, voice or video and potentially save money in the process.
[SD-WAN][17] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.
"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Anand Oswal, senior vice president of engineering in Ciscos Enterprise Networking Business, said a Network World [article][18] earlier this year.
It's a profoundly hot market with tons of players including [Cisco][19], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa.
IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/what-is-sdn_2_where-is-it-going_arrows_fork-in-the-road-100793314-large.jpg
[2]: https://www.networkworld.com/article/2202144/data-center-faq-what-is-openflow-and-why-is-it-needed.html
[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.idc.com/getdoc.jsp?containerId=US43862418
[6]: https://www.networkworld.com/article/3192318/pluribus-recharges-expands-software-defined-network-platform.html
[7]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[8]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[9]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[11]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[13]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
[14]: https://www.linkedin.com/in/boblaliberte90/
[15]: https://www.networkworld.com/article/3336075/cisco-serves-up-flexible-data-center-options.html
[16]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[17]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[18]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
[19]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html

View File

@ -1,6 +1,3 @@
ezio is translating
In Device We Trust: Measure Twice, Compute Once with Xen, Linux, TPM 2.0 and TXT
============================================================

View File

@ -1,134 +0,0 @@
FSSlc translating
5 open source fonts ideal for programmers
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn)
What is the best programming font? First, you need to consider that not all fonts are created equally. When choosing a font for casual reading, the reader expects the letters to smoothly flow into one another, giving an easy and enjoyable experience. A single character in a standard font is akin to a puzzle piece designed to carefully mesh with every other part of the overall typeface.
When writing code, however, your font requirements are typically more functional in nature. This is why most programmers prefer to use monospaced fonts with fixed-width letters, when given the option. Selecting a font that has distinguishable numbers and punctuation, is aesthetically pleasing, and has a copyright license that meets your needs is also important.
There are certain features that make a font optimal for programming. First, a detailed definition of what makes a monospaced font is in order. Consider the letter "w" as it compares to the letter "i" for a moment. When you are dealing with a font, it is important to think about the whitespace around the letter, as well as the letter itself. In the world of physical books and newspapers, where efficient use of space is often critical, it makes sense to assign less width to the thin "i" than the wide "w."
Inside a terminal, however, you are blessed with no such restrictions, and it can be very useful for every character to share an identical amount of space. The main functional benefit is that you can effectively "guesstimate" how long your code is by casually glancing at a block of text. Secondary benefits include the ability to align characters and punctuation easily, highlighting is much more visually obvious, and optical character recognition on printed sheets is more effective for monospaced fonts than proportional fonts.
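To see which monospaced fonts are already installed on your own system, fontconfig can list them for you. A minimal check, assuming the fontconfig command-line tools are present:
```
$ fc-list :spacing=mono family | sort -u
```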
In this article we will explore five excellent open source font options that are ideal for programming and writing code.
### 1. Firacode: The best overall programming font
### [firacode.png][1]
![FiraCode example][2]
FiraCode, Andrew Lekashman
### [firacode2.png][3]
![FiraCode compared to Fira Mono][4]
FiraCode compared to Fira Mono, [Nikita Prokopov][5] via GitHub
The first font on our list is [FiraCode][5], a programming font that truly goes above and beyond the call of duty. FiraCode is an extension of Fira, the open source font family commissioned by Mozilla. What makes FiraCode different is that it modifies the common symbol combinations or ligatures used in code to be extraordinarily readable. This font family comes in several styles, notably including a Retina option. You can find examples of how it applies to many programming languages on its [GitHub][5] page.
### 2. Inconsolata: Elegant and created by a brilliant designer
### [inconsolata.png][6]
![Inconsolata example][7]
Inconsolata, Andrew Lekashman
[Inconsolata][8] is one of the most beautiful monospaced fonts. It has been around since 2006 as an open source and freely available option. The creator, Raph Levien, designed Inconsolata with one basic statement in mind: "monospaced fonts do not have to suck." Two things that stand out about Inconsolata are its extremely clear differences between 0 and O and its well-defined punctuation.
### 3. DejaVu Sans Mono: Standard issue with many Linux distros and huge glyph coverage
### [dejavu_sans_mono.png][9]
![DejaVu Sans Mono example][10]
DejaVu Sans Mono, Andrew Lekashman
Inspired by the copyrighted and closed Vera font family used in GNOME, [DejaVu Sans Mono][11] is an extremely popular programming font that comes bundled with nearly every modern Linux distribution. DejaVu comes packed with a whopping 3,310 glyphs under the Book Variant, compared to a standard font, which normally rests easy at around 100 glyphs. You'll have no shortage of characters to work with, it has enormous coverage over Unicode, and it is actively growing all of the time.
### 4. Source Code Pro: Elegant and readable, created by a small, talented team at Adobe
### [source_code_pro.png][12]
![Source Code Pro example][13]
Source Code Pro, Andrew Lekashman
Designed by Paul Hunt and Teo Tuominen, [Source Code Pro][14] was [produced by Adobe][15] to be one of its first open source fonts. Source Code Pro is notable in that it is extremely readable and has excellent differentiation between potentially confusing characters and punctuation. Source Code Pro is also a font family and comes in seven different styles: Extralight, Light, Regular, Medium, Semibold, Bold, and Black, with italic variants of each.
### [source_code_pro2.png][16]
![Differentiating potentially confusable characters][17]
Differentiating potentially confusable characters, [Paul D. Hunt][15] via Adobe Typekit Blog.
### [source_code_pro3.png][18]
![Metacharacters with special meaning in computer languages][19]
Metacharacters with special meaning in computer languages, [Paul D. Hunt][15] via Adobe Typekit Blog
### 5. Noto Mono: Enormous language coverage, created by a large team at Google
### [noto.png][20]
![Noto Mono example][21]
Noto Mono, Andrew Lekashman
The last font on our list is [Noto Mono][22], the monospaced version of the expansive Noto font family by Google. While not specifically designed for programming, Noto Mono is available in 209 languages (including emoji!) and is actively supported and updated. The project is enormous and is an extension of Google's stated mission to organize the world's information. If you want to learn more about it, check out this excellent [video about the font][23].
### Choosing the right font
Whichever typeface you select, you will most likely spend hours each day immersed within it, so make sure it resonates with you on an aesthetic and philosophical level. Choosing the right open source font is an important part of making sure that you have the best possible environment for productivity. Any of these fonts is a fantastic choice, and each option has a powerful feature set that lets it stand out from the rest.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/how-select-open-source-programming-font
作者:[Andrew Lekashman][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:https://opensource.com/file/377151
[2]:https://opensource.com/sites/default/files/u128651/firacode.png (FiraCode example)
[3]:https://opensource.com/file/377156
[4]:https://opensource.com/sites/default/files/u128651/firacode2.png (FiraCode compared to Fira Mono)
[5]:https://github.com/tonsky/FiraCode
[6]:https://opensource.com/file/377161
[7]:https://opensource.com/sites/default/files/u128651/inconsolata.png (Inconsolata example)
[8]:http://www.levien.com/type/myfonts/inconsolata.html
[9]:https://opensource.com/file/377146
[10]:https://opensource.com/sites/default/files/u128651/dejavu_sans_mono.png (DejaVu Sans Mono example)
[11]:https://dejavu-fonts.github.io/
[12]:https://opensource.com/file/377171
[13]:https://opensource.com/sites/default/files/u128651/source_code_pro.png (Source Code Pro example)
[14]:https://github.com/adobe-fonts/source-code-pro
[15]:https://blog.typekit.com/2012/09/24/source-code-pro/
[16]:https://opensource.com/file/377176
[17]:https://opensource.com/sites/default/files/u128651/source_code_pro2.png (Differentiating potentially confusable characters)
[18]:https://opensource.com/file/377181
[19]:https://opensource.com/sites/default/files/u128651/source_code_pro3.png (Metacharacters with special meaning in computer languages)
[20]:https://opensource.com/file/377166
[21]:https://opensource.com/sites/default/files/u128651/noto.png (Noto Mono example)
[22]:https://www.google.com/get/noto/#mono-mono
[23]:https://www.youtube.com/watch?v=AAzvk9HSi84

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,174 +0,0 @@
translating by robsean
12 Best GTK Themes for Ubuntu and other Linux Distributions
======
**Brief: Let's have a look at some of the beautiful GTK themes that you can use not only in Ubuntu but also in other Linux distributions that use GNOME.**
For those of us that use Ubuntu proper, the move from Unity to Gnome as the default desktop environment has made theming and customizing easier than ever. Gnome has a fairly large tweaking community, and there is no shortage of fantastic GTK themes for users to choose from. With that in mind, I went ahead and found some of my favorite themes that I have come across in recent months. These are what I believe offer some of the best experiences that you can find.
### Best themes for Ubuntu and other Linux distributions
This is not an exhaustive list and may exclude some of the themes you already use and love, but hopefully, you find at least one theme that you enjoy that you did not already know about. All themes present should work on any Gnome 3 setup, Ubuntu or not. I lost some screenshots so I have taken images from the official websites.
The themes listed here are in no particular order.
But before you see the best GNOME themes, you should learn [how to install themes in Ubuntu GNOME][1].
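As a small illustration (the linked guide covers installation in detail), once a theme such as Arc-Ambiance has been unpacked into `~/.themes`, you can typically activate it from a terminal; GNOME Tweaks exposes the same setting graphically:
```
$ gsettings set org.gnome.desktop.interface gtk-theme "Arc-Ambiance"
```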
#### 1\. Arc-Ambiance
![][2]
Arc and Arc variant themes have been around for quite some time now, and are widely regarded as some of the best themes you can find. In this example, I have selected Arc-Ambiance because of its modern take on the default Ambiance theme in Ubuntu.
I am a fan of both the Arc theme and the default Ambiance theme, so needless to say, I was pumped when I came across a theme that merged the best of both worlds. If you are a fan of the arc themes but not a fan of this one in particular, Gnome look has plenty of other options that will most certainly suit your taste.
[Arc-Ambiance Theme][3]
#### 2\. Adapta Colorpack
![][4]
The Adapta theme is one of the best flat themes I have ever found. Like Arc, Adapta is widely adopted by many a Linux user. I have selected this color pack because in one download you have several options to choose from. In fact, there are 19 to choose from. Yep. You read that correctly. 19!
So, if you are a fan of the flat/material design language that we see a lot of today, then there is most likely a variant in this theme pack that will satisfy you.
[Adapta Colorpack Theme][5]
#### 3\. Numix Collection
![][6]
Ah, Numix! Oh, the years we have spent together! For those of us that have been theming our DE for the last couple of years, you must have come across the Numix themes or icon packs at some point in time. Numix was probably the first modern theme for Linux that I fell in love with, and I am still in love with it today. And after all these years, it still hasn't lost its charm.
The gray tone throughout the theme, especially with the default pinkish-red highlight color, makes for a genuinely clean and complete experience. You would be hard pressed to find a theme pack as polished as Numix. And in this offering, you have plenty of options to choose from, so go crazy!
[Numix Collection Theme][7]
#### 4\. Hooli
![][8]
Hooli is a theme that has been out for some time now, but only recently crossed my radar. I am a fan of most flat themes but have usually strayed away from themes that come too close to the material design language. Hooli, like Adapta, takes notes from that design language, but does it in a way that I think sets it apart from the rest. The green highlight color is one of my favorite parts of the theme, and it does a good job of not overpowering the entire theme.
[Hooli Theme][9]
#### 5\. Arrongin/Telinkrin
![][10]
Bonus: Two themes in one! And they are relatively new contenders in the theming realm. They both take notes from Ubuntu's soon-to-be-finished “[communitheme][11]” and bring it to your desktop today. The only real difference I can find between the offerings is the colors. Arrongin is centered around an Ubuntu-esq orange color, while Telinkrin uses a slightly more KDE Breeze-esq blue. I personally prefer the blue, but both are great options!
[Arrongin/Telinkrin Themes][12]
#### 6\. Gnome-osx
![][13]
I have to admit, usually, when I see that a theme has “osx” or something similar in the title, I don't expect much. Most Apple-inspired themes seem to have so much in common that I can't really find a reason to use them. There are two themes I can think of that break this mold: the Arc-osc theme and the Gnome-osx theme that we have here.
The reason I like the Gnome-osx theme is that it truly does look at home on the Gnome desktop. It does a great job of blending into the DE without being too flat. So for those of you who enjoy a slightly less flat theme and like the red, yellow, and green button scheme for the close, minimize, and maximize buttons, this theme is perfect for you.
[Gnome-osx Theme][14]
#### 7\. Ultimate Maia
![][15]
There was a time when I used Manjaro Gnome. Since then I have reverted to Ubuntu, but one thing I wish I could have brought with me was the Manjaro theme. If you feel the same about the Manjaro theme as I do, then you are in luck, because you can bring it to ANY distro you want that is running Gnome!
The rich green color, the Breeze-esq close, minimize, and maximize buttons, and the overall polish of the theme make for one compelling option. It even offers some other color variants if you are not a fan of the green. But let's be honest… who isn't a fan of that Manjaro green color?
[Ultimate Maia Theme][16]
#### 8\. Vimix
![][17]
This was a theme I easily got excited about. It is modern, pulls from the macOS red, yellow, green buttons without directly copying them, and tones down the vibrancy of the theme, making for one unique alternative to most other themes. It comes with three dark variants and several colors to choose from so most of us will find something we like.
[Vimix Theme][18]
#### 9\. Ant
![][19]
Like Vimix, Ant pulls inspiration from macOS for the button colors without directly copying the style. Where Vimix tones down the color options, Ant adds a richness to the colors that looks fantastic on my System 76 Galago Pro screen. The variation between the three theme options is pretty dramatic, and though it may not be to everyone's taste, it is most certainly to mine.
[Ant Theme][20]
#### 10\. Flat Remix
![][21]
If you haven't noticed by this point, I am a sucker for someone who pays attention to the details in the close, minimize, and maximize buttons. The color theme that Flat Remix uses is one I have not seen anywhere else, with a red, blue, and orange color way. Add that on top of a theme that looks almost like a mix between Arc and Adapta, and you have Flat Remix.
I am personally a fan of the dark option, but the light alternative is very nice as well. So if you like subtle transparencies, a cohesive dark theme, and a touch of color here and there, Flat Remix is for you.
[Flat Remix Theme][22]
#### 11\. Paper
![][23]
[Paper][24] has been around for some time now. I remember using it for the first time back in 2014. I would say, at this point, Paper is better known for its icon pack than for its GTK theme, but that doesn't mean that the theme isn't a wonderful option in and of itself. Even though I adored the Paper icons from the beginning, I can't say that I was a huge fan of the Paper theme when I first tried it out.
I felt like the bright colors and fun approach to theming made for an “immature” experience. Now, years later, Paper has grown on me, to say the least, and the lighthearted approach that the theme takes is one I greatly appreciate.
[Paper Theme][25]
#### 12\. Pop
![][26]
Pop is one of the newer offerings on this list. Created by the folks over at [System 76][27], the Pop GTK theme is a fork of the Adapta theme listed earlier and comes with a matching icon pack, which is a fork of the previously mentioned Paper icon pack.
The theme was released soon after System 76 announced that they were releasing [their own distribution,][28] Pop!_OS. You can read my [Pop!_OS review][29] to know more about it. Needless to say, I think Pop is a fantastic theme with a superb amount of polish and offers a fresh feel to any Gnome desktop.
[Pop Theme][30]
#### Conclusion
Obviously, there are way more themes to choose from than we could feature in one article, but these are some of the most complete and polished themes I have used in recent months. If you think we missed any that you really like, or you just really dislike one featured above, feel free to let me know in the comment section below and share why you think your favorite themes are better!
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-gtk-themes/
作者:[Phillip Prado][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://itsfoss.com/install-themes-ubuntu/
[2]:https://itsfoss.com/wp-content/uploads/2018/03/arcambaince-300x225.png
[3]:https://www.gnome-look.org/p/1193861/
[4]:https://itsfoss.com/wp-content/uploads/2018/03/adapta-300x169.jpg
[5]:https://www.gnome-look.org/p/1190851/
[6]:https://itsfoss.com/wp-content/uploads/2018/03/numix-300x169.png
[7]:https://www.gnome-look.org/p/1170667/
[8]:https://itsfoss.com/wp-content/uploads/2018/03/hooli2-800x500.jpg
[9]:https://www.gnome-look.org/p/1102901/
[10]:https://itsfoss.com/wp-content/uploads/2018/03/AT-800x590.jpg
[11]:https://itsfoss.com/ubuntu-community-theme/
[12]:https://www.gnome-look.org/p/1215199/
[13]:https://itsfoss.com/wp-content/uploads/2018/03/gosx-800x473.jpg
[14]:https://www.opendesktop.org/s/Gnome/p/1171688/
[15]:https://itsfoss.com/wp-content/uploads/2018/03/ultimatemaia-800x450.jpg
[16]:https://www.opendesktop.org/s/Gnome/p/1193879/
[17]:https://itsfoss.com/wp-content/uploads/2018/03/vimix-800x450.jpg
[18]:https://www.gnome-look.org/p/1013698/
[19]:https://itsfoss.com/wp-content/uploads/2018/03/ant-800x533.png
[20]:https://www.opendesktop.org/p/1099856/
[21]:https://itsfoss.com/wp-content/uploads/2018/03/flatremix-800x450.png
[22]:https://www.opendesktop.org/p/1214931/
[23]:https://itsfoss.com/wp-content/uploads/2018/04/paper-800x450.jpg
[24]:https://itsfoss.com/install-paper-theme-linux/
[25]:https://snwh.org/paper/download
[26]:https://itsfoss.com/wp-content/uploads/2018/04/pop-800x449.jpg
[27]:https://system76.com/
[28]:https://itsfoss.com/system76-popos-linux/
[29]:https://itsfoss.com/pop-os-linux-review/
[30]:https://github.com/pop-os/gtk-theme/blob/master/README.md

View File

@ -1,12 +1,3 @@
[#]: collector: (lujun9972)
[#]: translator: (pityonline)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get Started with Snap Packages in Linux)
[#]: via: (https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages-linux)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
Get Started with Snap Packages in Linux
======
@ -148,7 +139,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pityonline](https://github.com/pityonline)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,290 +0,0 @@
Getting started with Sensu monitoring
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)
Sensu is an open source infrastructure and application monitoring solution that monitors servers, services, and application health, and sends alerts and notifications with third-party integration. Written in Ruby, Sensu can use either [RabbitMQ][1] or [Redis][2] to handle messages. It uses Redis to store data.
If you want to monitor your cloud infrastructure in a simple and efficient manner, Sensu is a good option. It can be integrated with many of the modern DevOps stacks your organization may already be using, such as [Slack][3], [HipChat][4], or [IRC][5], and it can even send mobile/pager alerts with [PagerDuty][6].
Sensu's [modular architecture][7] means every component can be installed on the same server or on completely separate machines.
### Architecture
Sensu's main communication mechanism is the Transport. Every Sensu component must connect to the Transport in order to send messages to each other. Transport can use either RabbitMQ (recommended in production) or Redis.
Sensu Server processes event data and takes action. It registers clients and processes check results and monitoring events using filters, mutators, and handlers. The server publishes check definitions to the clients, and the Sensu API provides RESTful access to monitoring data and core functionality.
[Sensu Client][8] executes checks that are either scheduled by the Sensu Server or defined locally. Sensu uses a data store (Redis) to keep all the persistent data. Finally, [Uchiwa][9] is the web interface that communicates with the Sensu API.
![sensu_system.png][11]
### Installing Sensu
#### Prerequisites
* One Linux installation to act as the server node (I used CentOS 7 for this article)
* One or more Linux machines to monitor (clients)
#### Server side
Sensu requires Redis to be installed. To install Redis, enable the EPEL repository:
```
$ sudo yum install epel-release -y
```
Then install Redis:
```
$ sudo yum install redis -y
```
Modify `/etc/redis.conf` to disable protected mode, listen on every interface, and set a password:
```
$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
```
Enable and start Redis service:
```
$ sudo systemctl enable redis
$ sudo systemctl start redis
```
Redis is now installed and ready to be used by Sensu.
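As a quick sanity check, you can ping Redis locally using the password configured above; a healthy instance replies with PONG:
```
$ redis-cli -a password123 ping
PONG
```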
Now let's install Sensu.
First, configure the Sensu repository and install the packages:
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
$ sudo yum install sensu uchiwa -y
```
Let's create the bare minimum configuration files for Sensu:
```
$ sudo tee /etc/sensu/conf.d/api.json << EOF
{
  "api": {
        "host": "127.0.0.1",
        "port": 4567
  }
}
EOF
```
The file above configures `sensu-api` to listen on localhost, on port 4567. Next, configure the Redis connection and the transport mechanism:
```
$ sudo tee /etc/sensu/conf.d/redis.json << EOF
{
  "redis": {
        "host": "<IP of server>",
        "port": 6379,
        "password": "password123"
  }
}
EOF
$ sudo tee /etc/sensu/conf.d/transport.json << EOF
{
  "transport": {
        "name": "redis"
  }
}
EOF
```
In these two files, we configure Sensu to use Redis as the transport mechanism and the address where Redis will listen. Clients need to connect directly to the transport mechanism. These two files will be required on each client machine.
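A minimal sketch of copying them over, assuming root SSH access to a client machine (the hostname `rhel-client` is just an example):
```
$ scp /etc/sensu/conf.d/redis.json /etc/sensu/conf.d/transport.json root@rhel-client:/etc/sensu/conf.d/
```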
```
$ sudo tee /etc/sensu/uchiwa.json << EOF
{
   "sensu": [
        {
        "name": "sensu",
        "host": "127.0.0.1",
        "port": 4567
        }
   ],
   "uchiwa": {
        "host": "0.0.0.0",
        "port": 3000
   }
}
EOF
```
In this file, we configure Uchiwa to listen on every interface (0.0.0.0) on Port 3000. We also configure Uchiwa to use `sensu-api` (already configured).
For security reasons, change the owner of the configuration files you just created:
```
$ sudo chown -R sensu:sensu /etc/sensu
```
Enable and start the Sensu services:
```
$ sudo systemctl enable sensu-server sensu-api sensu-client
$ sudo systemctl start sensu-server sensu-api sensu-client
$ sudo systemctl enable uchiwa
$ sudo systemctl start uchiwa
```
Try accessing the Uchiwa website: http://<IP of server>:3000
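From the server itself, you can also confirm that the API is responding; `/info` is a standard endpoint of the Sensu Core 1.x API configured above:
```
$ curl -s http://127.0.0.1:4567/info
```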
For production environments, it's recommended to run a cluster of RabbitMQ as the Transport instead of Redis (a Redis cluster can be used in production too), and to run more than one instance of Sensu Server and API for load balancing and high availability.
Sensu is now installed. Now let's configure the clients.
#### Client side
To add a new client, you will need to enable Sensu repository on the client machines by creating the file `/etc/yum.repos.d/sensu.repo`.
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
```
With the repository enabled, install the package Sensu:
```
$ sudo yum install sensu -y
```
To configure `sensu-client`, create the same `redis.json` and `transport.json` created in the server machine, as well as the `client.json` configuration file:
```
$ sudo tee /etc/sensu/conf.d/client.json << EOF
{
  "client": {
        "name": "rhel-client",
        "environment": "development",
        "subscriptions": [
        "frontend"
        ]
  }
}
EOF
```
In the name field, specify a name to identify this client (typically the hostname). The environment field can help you filter, and subscription defines which monitoring checks will be executed by the client.
Finally, enable and start the services and check in Uchiwa, as the new client will register automatically:
```
$ sudo systemctl enable sensu-client
$ sudo systemctl start sensu-client
```
### Sensu checks
Sensu checks have two components: a plugin and a definition.
Sensu is compatible with the [Nagios check plugin specification][12], so any check for Nagios can be used without modification. Checks are executable files and are run by the Sensu client.
Check definitions let Sensu know how, where, and when to run the plugin.
#### Client side
Let's install one check plugin on the client machine. Remember, this plugin will be executed on the clients.
Enable EPEL and install `nagios-plugins-http`:
```
$ sudo yum install -y epel-release
$ sudo yum install -y nagios-plugins-http
```
Now let's explore the plugin by executing it manually. Try checking the status of a web server running on the client machine. It should fail, as we don't have a web server running:
```
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
connect to address 127.0.0.1 and port 80: Connection refused
HTTP CRITICAL - Unable to open TCP socket
```
It failed, as expected. Check the return code of the execution:
```
$ echo $?
2
```
The Nagios check plugin specification defines four return codes for the plugin execution:
| **Plugin return code** | **State** |
|------------------------|-----------|
| 0 | OK |
| 1 | WARNING |
| 2 | CRITICAL |
| 3 | UNKNOWN |
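For comparison, here is a sketch of the OK case: if a web server were running on the client (for example, after `sudo yum install -y httpd && sudo systemctl start httpd`), the same plugin should report OK and exit with code 0. The output shown is illustrative and will vary:
```
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
HTTP OK: HTTP/1.1 200 OK - 4897 bytes in 0.002 second response time
$ echo $?
0
```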
With this information, we can now create the check definition on the server.
#### Server side
On the server machine, create the file `/etc/sensu/conf.d/check_http.json`:
```
{
  "checks": {
    "check_http": {
      "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
      "interval": 10,
      "subscribers": [
        "frontend"
      ]
    }
  }
}
```
In the command field, use the command we tested before. `Interval` will tell Sensu how frequently, in seconds, this check should be executed. Finally, `subscribers` will define the clients where the check will be executed.
Restart both sensu-api and sensu-server and confirm that the new check is available in Uchiwa.
```
$ sudo systemctl restart sensu-api sensu-server
```
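If you prefer the command line to Uchiwa, check results can also be queried from the Sensu API; `/results` is a standard Sensu Core 1.x endpoint (piped through Python here only for pretty-printing):
```
$ curl -s http://127.0.0.1:4567/results | python -m json.tool
```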
### What's next?
Sensu is a powerful tool, and this article offers just a glimpse of what it can do. See the [documentation][13] to learn more, and visit the Sensu site to learn more about the [Sensu community][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution
作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mzamot
[1]:https://www.rabbitmq.com/
[2]:https://redis.io/topics/config
[3]:https://slack.com/
[4]:https://en.wikipedia.org/wiki/HipChat
[5]:http://www.irc.org/
[6]:https://www.pagerduty.com/
[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
[9]:https://uchiwa.io/#/
[10]:/file/406576
[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
[13]:https://docs.sensu.io/
[14]:https://sensu.io/community

View File

@ -1,3 +1,4 @@
liujing97 is translating
Working with data streams on the Linux command line
======
Learn to connect data streams from one utility to another using STDIO.

View File

@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Take to the virtual skies with FlightGear)
[#]: via: (https://opensource.com/article/19/1/flightgear)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Take to the virtual skies with FlightGear
======
Dreaming of piloting a plane? Try open source flight simulator FlightGear.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flightgear_cockpit_plane_sky.jpg?itok=LRy0lpOS)
If you've ever dreamed of piloting a plane, you'll love [FlightGear][1]. It's a full-featured, [open source][2] flight simulator that runs on Linux, MacOS, and Windows.
The FlightGear project began in 1996 due to dissatisfaction with commercial flight simulation programs, which were not scalable. Its goal was to create a sophisticated, robust, extensible, and open flight simulator framework for use in academia and pilot training or by anyone who wants to play with a flight simulation scenario.
### Getting started
FlightGear's hardware requirements are fairly modest, including an accelerated 3D video card that supports OpenGL for smooth framerates. It runs well on my Linux laptop with an i5 processor and only 4GB of RAM. Its documentation includes an [online manual][3]; a [wiki][4] with portals for [users][5] and [developers][6]; and extensive tutorials (such as one for its default aircraft, the [Cessna 172p][7]) to teach you how to operate it.
It's easy to install on both [Fedora][8] and [Ubuntu][9] Linux. Fedora users can consult the [Fedora installation page][10] to get FlightGear running.
On Ubuntu 18.04, I had to install a repository:
```
$ sudo add-apt-repository ppa:saiarcot895/flightgear
$ sudo apt-get update
$ sudo apt-get install flightgear
```
Once the installation finished, I launched it from the GUI, but you can also launch the application from a terminal by entering:
```
$ fgfs
```
### Configuring FlightGear
The menu on the left side of the application window provides configuration options.
![](https://opensource.com/sites/default/files/uploads/flightgear_menu.png)
**Summary** returns you to the application's home screen.
**Aircraft** shows the aircraft you have installed and offers the option to install up to 539 other aircraft available in FlightGear's default "hangar." I installed a Cessna 150L, a Piper J-3 Cub, and a Bombardier CRJ-700. Some of the aircraft (including the CRJ-700) have tutorials to teach you how to fly a commercial jet; I found the tutorials informative and accurate.
![](https://opensource.com/sites/default/files/uploads/flightgear_aircraft.png)
To select an aircraft to pilot, highlight it and click on **Fly!** at the bottom of the menu. I chose the default Cessna 172p and found the cockpit depiction extremely accurate.
![](https://opensource.com/sites/default/files/uploads/flightgear_cockpit-view.png)
The default airport is Honolulu, but you can change it in the **Location** menu by providing your favorite airport's [ICAO airport code][11] identifier. I found some small, local, non-towered airports like Olean and Dunkirk, New York, as well as larger airports including Buffalo, O'Hare, and Raleigh—and could even choose a specific runway.
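The same selections can be scripted instead of made through the GUI; `--aircraft` and `--airport` are standard FlightGear launch options (KBUF is Buffalo's ICAO identifier):
```
$ fgfs --aircraft=c172p --airport=KBUF
```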
Under **Environment** , you can adjust the time of day, the season, and the weather. The simulation includes advance weather modeling and the ability to download current weather from [NOAA][12].
**Settings** provides an option to start the simulation in Paused mode by default. Also in Settings, you can select multi-player mode, which allows you to "fly" with other players on FlightGear supporters' global network of servers that allow for multiple users. You must have a moderately fast internet connection to support this functionality.
The **Add-ons** menu allows you to download aircraft and additional scenery.
### Take flight
To "fly" my Cessna, I used a Logitech joystick that worked well. You can calibrate your joystick using an option in the **File** menu at the top.
Overall, I found the simulation very accurate and think the graphics are great. Try FlightGear yourself — I think you will find it a very fun and complete simulation package.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/flightgear
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: http://home.flightgear.org/
[2]: http://wiki.flightgear.org/GNU_General_Public_License
[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html
[4]: http://wiki.flightgear.org/FlightGear_Wiki
[5]: http://wiki.flightgear.org/Portal:User
[6]: http://wiki.flightgear.org/Portal:Developer
[7]: http://wiki.flightgear.org/Cessna_172P
[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear
[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear
[10]: https://apps.fedoraproject.org/packages/FlightGear/
[11]: https://en.wikipedia.org/wiki/ICAO_airport_code
[12]: https://www.noaa.gov/

View File

@ -1,359 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Understand And Identify File types in Linux)
[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Understand And Identify File types in Linux
======
As we all know, everything in Linux is a file, including hard disks, graphics cards, and so on.
When you navigate the Linux filesystem, most of the files you encounter are regular files and directories.
But Linux has other file types as well, for different purposes, which fall into five categories.
So, it is very important to understand the file types in Linux; it helps you in many ways.
If you don't believe this, just go through the complete article and you will come to know how important it is.
If you don't understand the file types, you can't make changes to them without fear.
If you make the wrong changes, you can damage your system very badly, so be careful when you are doing that.
Files are very important in Linux because all devices and daemons are stored as files in the Linux system.
### How Many Types of Files Are Available in Linux?
As far as I know, there are a total of seven types of files available in Linux, in three major categories. The details are below.
* Regular File
* Directory File
* Special Files (This category having five type of files)
* Link File
* Character Device File
* Socket File
* Named Pipe File
* Block File
Refer to the table below for a better understanding of the file types in Linux.
| Symbol | Meaning |
| ------ | ------- |
| `-` | Regular file. The permission string starts with a dash `-`. |
| `d` | Directory file. The permission string starts with the letter `d`. |
| `l` | Link file. The permission string starts with the letter `l`. |
| `c` | Character device file. The permission string starts with the letter `c`. |
| `s` | Socket file. The permission string starts with the letter `s`. |
| `p` | Named pipe file. The permission string starts with the letter `p`. |
| `b` | Block file. The permission string starts with the letter `b`. |
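As a quick cross-check of the table above, you can tally how many files of each type a directory contains by looking at the first character of each permission string; a small sketch using `/dev`, where most of the special file types live:
```
# ls -la /dev | awk 'NR>1 {print substr($1,1,1)}' | sort | uniq -c
```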
### Method-1: Manual Way to Identify File types in Linux
If you have a good knowledge of Linux, you can easily identify the file types with the help of the table above.
#### How to view the Regular files in Linux?
Use the command below to view the regular files in Linux. Regular files can be found everywhere in the Linux filesystem.
Regular files are shown in `WHITE`:
```
# ls -la | grep ^-
-rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history
-rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout
-rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile
-rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc
-rw-r--r--. 1 root root 26 Dec 27 17:55 liks
-rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat
-rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip
-rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip
-rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt
-rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt
-rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt
```
#### How to view the Directory files in Linux?
Use the command below to view the directory files in Linux. Directory files can be found everywhere in the Linux filesystem. Directory files are shown in `BLUE`:
```
# ls -la | grep ^d
drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/
drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/
drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/
drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/
```
#### How to view the Link files in Linux?
Use the command below to view the link files in Linux. Link files can be found everywhere in the Linux filesystem.
Two types of link files are available: soft links and hard links. Link files are shown in `LIGHT TURQUOISE`:
```
# ls -la | grep ^l
lrwxrwxrwx. 1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link
lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder
```
#### How to view the Character Device files in Linux?
Use the command below to view the character device files in Linux. Character device files exist only in specific locations.
They live under the `/dev` directory. Character device files are shown in `YELLOW`:
```
# ls -la | grep ^c
crw-------. 1 root root 5, 1 Jan 28 14:05 console
crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency
crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash
crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0
crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full
crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse
```
#### How to view the Block files in Linux?
Use the command below to view the block files in Linux. Block files exist only in specific locations.
They live under the `/dev` directory. Block files are shown in `YELLOW`:
```
# ls -la | grep ^b
brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0
brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1
brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2
brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3
brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4
```
#### How to view the Socket files in Linux?
Use the command below to view the socket files in Linux. Socket files exist only in specific locations.
Socket files are shown in `PINK`:
```
# ls -la | grep ^s
srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket
```
#### How to view the Named Pipe files in Linux?
Use the command below to view the named pipe files in Linux. Named pipe files exist only in specific locations. Named pipe files are shown in `YELLOW`:
```
# ls -la | grep ^p
prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo|
prw-------. 1 root root 0 Jan 28 14:06 stats-mail|
```
### Method-2: How to Identify File types in Linux Using file Command?
The file command allows us to determine various file types in Linux. It performs three sets of tests, in this order: filesystem tests, magic tests, and language tests, to identify the file type.
#### How to view the Regular files in Linux Using file Command?
Simply enter the file command in your terminal, followed by a regular file. The file command reads the given file's contents and displays exactly what kind of file it is.
That is why we see different results for each regular file. See the various results for regular files below.
```
# file 2daygeek_access.log
2daygeek_access.log: ASCII text, with very long lines
# file powertop.html
powertop.html: HTML document, ASCII text, with very long lines
# file 2g-test
2g-test: JSON data
# file powertop.txt
powertop.txt: HTML document, UTF-8 Unicode text, with very long lines
# file 2g-test-05-01-2019.tar.gz
2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560
```
#### How to view the Directory files in Linux Using file Command?
Simply enter the file command in your terminal, followed by a directory. See the results below.
```
# file Pictures/
Pictures/: directory
```
#### How to view the Link files in Linux Using file Command?
Simply enter the file command in your terminal, followed by a link file. See the results below.
```
# file log
log: symbolic link to /run/systemd/journal/dev-log
```
#### How to view the Character Device files in Linux Using file Command?
Simply enter the file command in your terminal, followed by a character device file. See the results below.
```
# file vcsu
vcsu: character special (7/64)
```
#### How to view the Block files in Linux Using file Command?
Simply enter the file command in your terminal, followed by a block file. See the results below.
```
# file sda1
sda1: block special (8/1)
```
#### How to view the Socket files in Linux Using file Command?
Simply enter the file command in your terminal, followed by a socket file. See the results below.
```
# file system_bus_socket
system_bus_socket: socket
```
#### How to view the Named Pipe files in Linux Using file Command?
Simply enter the file command in your terminal, followed by a named pipe file. See the results below.
```
# file pipe-test
pipe-test: fifo (named pipe)
```
### Method-3: How to Identify File types in Linux Using stat Command?
The stat command allows us to check file types and file system status. This utility gives more information than the file command. It shows a lot of information about the given file, such as its size, block size, IO block size, inode value, links, file permissions, UID, GID, and the access, modify, and change time details.
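If you only need the type itself, GNU coreutils' stat can print just that field; the `%F` format sequence expands to the file type name:
```
# stat -c '%F' /etc/passwd /tmp /dev/sda1
regular file
directory
block special file
```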
#### How to view the Regular files in Linux Using stat Command?
Simply enter the stat command in your terminal, followed by a regular file.
```
# stat 2daygeek_access.log
File: 2daygeek_access.log
Size: 14406929 Blocks: 28144 IO Block: 4096 regular file
Device: 10301h/66305d Inode: 1727555 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-03 14:05:26.430328867 +0530
Modify: 2019-01-03 14:05:26.460328868 +0530
Change: 2019-01-03 14:05:26.460328868 +0530
Birth: -
```
#### How to view the Directory files in Linux Using stat Command?
Simply enter the stat command in your terminal, followed by a directory. See the results below.
```
# stat Pictures/
File: Pictures/
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 10301h/66305d Inode: 1703982 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2018-11-24 03:22:11.090000828 +0530
Modify: 2019-01-05 18:27:01.546958817 +0530
Change: 2019-01-05 18:27:01.546958817 +0530
Birth: -
```
#### How to view the Link files in Linux Using stat Command?
Simply enter the stat command in your terminal, followed by a link file. See the results below.
```
# stat /dev/log
File: /dev/log -> /run/systemd/journal/dev-log
Size: 28 Blocks: 0 IO Block: 4096 symbolic link
Device: 6h/6d Inode: 278 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.033333447 +0530
Modify: 2019-01-05 16:36:30.766666768 +0530
Change: 2019-01-05 16:36:30.766666768 +0530
Birth: -
```
#### How to view the Character Device files in Linux Using stat Command?
Simply enter the stat command in your terminal, followed by a character device file. See the results below.
```
# stat /dev/vcsu
File: /dev/vcsu
Size: 0 Blocks: 0 IO Block: 4096 character special file
Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40
Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty)
Access: 2019-01-05 16:36:31.056666781 +0530
Modify: 2019-01-05 16:36:31.056666781 +0530
Change: 2019-01-05 16:36:31.056666781 +0530
Birth: -
```
#### How to view the Block files in Linux Using stat Command?
Simply enter the stat command in your terminal, followed by a block file. See the results below.
```
# stat /dev/sda1
File: /dev/sda1
Size: 0 Blocks: 0 IO Block: 4096 block special file
Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1
Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk)
Access: 2019-01-05 16:36:31.596666806 +0530
Modify: 2019-01-05 16:36:31.596666806 +0530
Change: 2019-01-05 16:36:31.596666806 +0530
Birth: -
```
#### How to view the Socket files in Linux Using stat Command?
Simply enter the stat command in your terminal, followed by a socket file. See the results below.
```
# stat /var/run/dbus/system_bus_socket
File: /var/run/dbus/system_bus_socket
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 15h/21d Inode: 576 Links: 1
Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-01-05 16:36:31.823333482 +0530
Modify: 2019-01-05 16:36:31.810000149 +0530
Change: 2019-01-05 16:36:31.810000149 +0530
Birth: -
```
#### How to view the Named Pipe files in Linux Using stat Command?
Simply enter the stat command in your terminal, followed by a named pipe file. See the results below.
```
# stat pipe-test
File: pipe-test
Size: 0 Blocks: 0 IO Block: 4096 fifo
Device: 10301h/66305d Inode: 1705583 Links: 1
Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek)
Access: 2019-01-06 02:00:03.040394731 +0530
Modify: 2019-01-06 02:00:03.040394731 +0530
Change: 2019-01-06 02:00:03.040394731 +0530
Birth: -
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972

View File

@ -1,159 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
7 Methods To Identify Disk Partition/FileSystem UUID On Linux
======
As a Linux administrator, you should be aware of how to check a partition UUID or filesystem UUID.
This is because most Linux systems mount partitions by UUID, as can be verified in the `/etc/fstab` file.
There are many utilities available to check UUIDs. In this article we will show you how to check UUIDs in several ways, so you can choose the one that suits you.
### What Is UUID?
UUID stands for Universally Unique Identifier, which helps a Linux system identify a hard drive partition instead of its block device file.
libuuid has been part of the util-linux-ng package since version 2.15.1 of that package and is installed by default on Linux systems.
The UUIDs generated by this library can reasonably be expected to be unique within a system, and unique across all systems.
It's a 128-bit number used to identify information in computer systems. UUIDs were originally used in the Apollo Network Computing System (NCS) and were later standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE).
UUIDs are represented as 32 hexadecimal (base 16) digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12 for a total of 36 characters (32 alphanumeric characters and four hyphens).
For example: d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
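You can generate a fresh UUID in the same 8-4-4-4-12 format with the `uuidgen` utility, which ships with util-linux; each run prints a new random UUID:
```
# uuidgen
```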
Sample of my /etc/fstab file.
```
# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
#
UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f / ext4 defaults,noatime 0 1
UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
```
We can check this using the following seven commands.
* **`blkid Command:`** locate/print block device attributes.
* **`lsblk Command:`** lsblk lists information about all available or the specified block devices.
* **`hwinfo Command:`** hwinfo, short for hardware information tool, is another great utility used to probe the hardware present in the system.
* **`udevadm Command:`** udev management tool.
* **`tune2fs Command:`** adjust tunable filesystem parameters on ext2/ext3/ext4 filesystems.
* **`dumpe2fs Command:`** dump ext2/ext3/ext4 filesystem information.
* **`Using by-uuid Path:`** The /dev/disk/by-uuid directory contains UUIDs as symlinks pointing to the real block device files.
### How To Check Disk Partition/FileSystem UUID In Linux Using blkid Command?
blkid is a command-line utility to locate/print block device attributes. It uses the libblkid library to get the disk partition UUIDs on a Linux system.
```
# blkid
/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01"
/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01"
/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03"
/dev/sdc5: PARTUUID="8cc8f9e5-05"
```
### How To Check Disk Partition/FileSystem UUID In Linux Using lsblk Command?
lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and udev db to gather information.
If the udev db is not available, or lsblk is compiled without udev support, then it tries to read LABELs, UUIDs and filesystem types from the block device. In this case, root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default.
```
# lsblk -o name,mountpoint,size,uuid
NAME MOUNTPOINT SIZE UUID
sda 30G
└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
sdb 10G
sdc 10G
├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
├─sdc4 1K
└─sdc5 1G
sdd 10G
sde 10G
sr0 1024M
```
### How To Check Disk Partition/FileSystem UUID In Linux Using the by-uuid Path?
The /dev/disk/by-uuid directory contains the UUIDs as symlinks pointing to the real block device files.
```
# ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jan 29 08:34 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
```
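Because these entries are plain symlinks, you can resolve a UUID back to its device node with readlink; a quick sketch with one of the UUIDs listed above:
```
# readlink -f /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
/dev/sda1
```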
### How To Check Disk Partition/FileSystem UUID In Linux Using hwinfo Command?
**[hwinfo][1]**, short for hardware information, is another great utility used to probe for the hardware present in the system and display detailed information about various hardware components in a human-readable format.
```
# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
/dev/sdc1, /dev/disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
/dev/sdc3, /dev/disk/by-uuid/ca307aa4-0866-49b1-8184-004025789e63
/dev/sda1, /dev/disk/by-uuid/d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
```
### How To Check Disk Partition/FileSystem UUID In Linux Using udevadm Command?
udevadm expects a command and command specific options. It controls the runtime behavior of systemd-udevd, requests kernel events, manages the event queue, and provides simple debugging mechanisms.
```
# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
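udevadm can also be asked for just the symlinks of a device, which include the by-uuid path; a small sketch (the output shown is illustrative and trimmed for brevity):
```
# udevadm info -q symlink -n /dev/sdc1
disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7 disk/by-partuuid/8cc8f9e5-01
```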
### How To Check Disk Partition/FileSystem UUID In Linux Using tune2fs Command?
tune2fs allows the system administrator to adjust various tunable filesystem parameters on Linux ext2, ext3, or ext4 filesystems. The current values of these options can be displayed by using the -l option.
```
# tune2fs -l /dev/sdc1 | grep UUID
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
### How To Check Disk Partition/FileSystem UUID In Linux Using dumpe2fs Command?
dumpe2fs prints the superblock and block group information for the filesystem present on the device.
```
# dumpe2fs /dev/sdc1 | grep UUID
dumpe2fs 1.43.5 (04-Aug-2017)
Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/

View File

@ -0,0 +1,147 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VA Linux: The Linux Company That Once Ruled NASDAQ)
[#]: via: (https://itsfoss.com/story-of-va-linux/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
VA Linux: The Linux Company That Once Ruled NASDAQ
======
This is our first article in the Linux and open source history series. We will be covering more trivia, anecdotes and other nostalgic events from the past.
At its time, _VA Linux_ was indeed a crusade to free the world from Microsoft's domination.
In a historic incident in December 1999, the shares of a private firm skyrocketed from just $30 to a whopping $239 within just a day of its [IPO][1]! It was a record-breaking development that day.
The company was _VA Linux_, a firm with only 200 employees built on the idea of deploying Intel hardware with Linux and FOSS, and it had begun a fantastic journey [taking on the likes of Sun and Dell][2].
It traded under the symbol LNUX and gained around 700 percent on its first day of trading. But hardly a year later, the [LNUX stocks were selling below $9 per share][3].
How did a successful Linux-based company become a subsidiary of [Gamestop][4], a gaming company?
Let us look back into the highs and lows of this record-breaking Linux corporation by knowing their history in brief.
### How did it all actually begin?
In the year 1993, a graduate student at Stanford University wanted to own a powerful workstation but could not afford to buy expensive [Sun][5] Workstations, which used to be sold at extremely high prices of $7,000 per system at that time.
So, he decided to build one on his own ([DIY][6] [FTW][7]!). Using an Intel 486 chip running at just 33 megahertz, he installed Linux and finally had a machine that was twice as fast as Sun's but at a much lower price tag: $2,000.
That student was none other than _VA Research_ founder [Larry Augustin][8], whose idea was loved by many at that exciting time in the Stanford campus. People started buying machines with similar configurations from him and his friend and co-founder, James Vera. This is how _VA Research_ was formed.
![VA Linux founder, Larry Augustin][9]
> Once software goes into the GPL, you can't take it back. People can stop contributing, but the code that exists, people can continue to develop on it.
>
> Without a doubt, a futuristic quote from VA Linux founder, Larry Augustin, 10 years ago | Read the whole interview [here][10]
#### Some screenshots of their web domains from the early days
![Linux Powered Machines on sale on varesearch.com | July 15, 1997][11]
![varesearch.com reveals emerging growth | February 16, 1998][12]
![On June 26, 2001, they transitioned from hardware to software | valinux.com as on June 22, 2001][13]
### The spectacular rise and the devastating fall of VA Linux
VA Research had a big year in 1999 and perhaps it was the biggest for them as they acquired many growing companies and competitors at that time, along with starting many innovative initiatives. The next year in 2000, they created a subsidiary in Japan named _VA Linux Systems Japan K.K._ They were at their peak that year.
After they transitioned completely from hardware to software, stock prices started to fall drastically from 2002 onward. It all happened because of slower-than-expected sales growth from new customers in the dot-com sector. In the later years they sold off a few brands, and top employees also resigned in 2010.
Gamestop finally [acquired][14] Geeknet Inc. (the new name of VA Linux) for $140 million on June 2, 2015.
In case you're curious for a detailed chronicle, I have separately created this [timeline][15], highlighting events year-wise.
![Image Credit: Wikipedia][16]
### What happened to VA Linux afterward?
Geeknet, owned by Gamestop, is now an online retailer for the global geek community under the [ThinkGeek][17] brand.
SourceForge and Slashdot were what still kept them linked with Linux and Open Source until _Dice Holdings_ acquired Slashdot, SourceForge, and Freecode.
An [article][18] from 2016 sadly quotes in its final paragraph:
> “Being acquired by a company that caters to gamers and does not have anything in particular to do with open source software may be a lackluster ending for what was once a spectacularly valuable Linux business.”
Did we note Linux and Gamers? Does Linux really not have anything to do with Gaming? Are these two terms really so far apart? What about [Gaming on Linux][19]? What about [Open Source Games][20]?
How could the stalwarts from _VA Linux_, with years and years of experience in the Linux arena, have contributed to the Linux gaming community? What could have happened had [Valve][21] (who are currently so [dedicated][22] towards Linux gaming) acquired _VA Linux_ instead of Gamestop? Can we ponder?
The seeds of ideas that were planted by _VA Research_ will continue to inspire the Linux and FOSS community because of its significant contributions in the world of Open Source. At _Its FOSS,_ our heartfelt salute goes out to those noble ideas!
Want to feel the nostalgia? Use the [timeline][15] dates with the [Way Back Machine][23] to check out previously owned _VA_ domains like _valinux.com_ or _varesearch.com_ in the past three decades! You can even check _linux.com_ that was once owned by _VA Linux Systems_.
But wait, are we really done here? What happened to the subsidiary named _VA Linux Systems Japan K.K._? Well, it's [a different story there][24] and still going strong with the original ideologies of _VA Linux_!
![VA Linux booth circa 2000 | Image Credit: Storem][25]
#### _VA Linux_ Subsidiary Still Operational in Japan!
VA Linux is still operational through its [Japanese subsidiary][26]. It provides the following services:
* Failure Analysis and Support Services: [_VA Quest_][27]
* Entrusted Development Service
* Consulting Service
_VA Quest_, in particular, has continued its services since 2005 as a failure-analysis solution for tracking down and dealing with kernel bugs that might be getting in its customers' way. [Tetsuro Yogo][28] took over as the new President and CEO on April 3, 2017. Check out their timeline [here][29]! They are also [on GitHub][30]!
You can also read about a recent development reported on August 2 last year, on this [translated][31] version of a Japanese IT news page. It's an update about _VA Linux_ providing technical support service for the “[Kubernetes][32]” container management software in Japan.
It's good to know that their 18-year-old subsidiary is still doing well in Japan and the name of _VA Linux_ continues to flourish there even today!
What are your views? Do you want to share anything on _VA Linux_? Please let us know in the comments section below.
I hope you liked this first article in the Linux history series. If you know such interesting facts from the past that you would like us to cover here, please let us know.
--------------------------------------------------------------------------------
via: https://itsfoss.com/story-of-va-linux/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Initial_public_offering
[2]: https://www.forbes.com/1999/05/03/feat.html
[3]: https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux
[4]: https://www.gamestop.com/
[5]: http://www.sun.com/
[6]: https://en.wikipedia.org/wiki/Do_it_yourself
[7]: https://www.urbandictionary.com/define.php?term=FTW
[8]: https://www.linkedin.com/in/larryaugustin/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-Founder-Larry-Augustin.jpg?ssl=1
[10]: https://www.linuxinsider.com/story/SourceForges-Larry-Augustin-A-Better-Way-to-Build-Web-Apps-62155.html
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-July-15-1997.jpg?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-Feb-16-1998.jpg?ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-com-Snapshot-June-22-2001.jpg?ssl=1
[14]: http://geekgirlpenpals.com/geeknet-parent-company-to-thinkgeek-entered-agreement-with-gamestop/
[15]: https://medium.com/@avimanyu786/a-timeline-of-va-linux-through-the-years-6813e2bd4b13
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/LNUX-stock-fall.png?ssl=1
[17]: https://www.thinkgeek.com/
[18]: https://www.channelfutures.com/open-source/open-source-history-spectacular-rise-and-fall-va-linux
[19]: https://itsfoss.com/linux-gaming-distributions/
[20]: https://en.wikipedia.org/wiki/Open-source_video_game
[21]: https://www.valvesoftware.com/
[22]: https://itsfoss.com/steam-play-proton/
[23]: https://archive.org/web/web.php
[24]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fwww.valinux.co.jp%2Fcorp%2Fstatement%2F&edit-text=
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/va-linux-team-booth.jpg?resize=800%2C600&ssl=1
[26]: https://www.valinux.co.jp/english/
[27]: https://www.linux.com/news/va-linux-announces-linux-failure-analysis-service
[28]: https://www.linkedin.com/in/yogo45/
[29]: https://www.valinux.co.jp/english/about/timeline/
[30]: https://github.com/vaj
[31]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fit.impressbm.co.jp%2Farticles%2F-%2F16499
[32]: https://en.wikipedia.org/wiki/Kubernetes

View File

@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CrossCode is an Awesome 16-bit Sci-Fi RPG Game)
[#]: via: (https://itsfoss.com/crosscode-game/)
[#]: author: (Phillip Prado https://itsfoss.com/author/phillip/)
CrossCode is an Awesome 16-bit Sci-Fi RPG Game
======
What starts off as an obvious sci-fi 16-bit 2D action RPG quickly turns into a JRPG inspired pseudo-MMO open-world puzzle platformer. Though at first glance this sounds like a jumbled mess, [CrossCode][1] manages to bundle all of its influences into a seamless gaming experience that feels nothing shy of excellent.
Note: CrossCode is not open source software. We have covered it because it is Linux specific.
![][2]
### Story
You play as Lea, a girl who has forgotten her identity, where she comes from, and how to speak. As you walk through the early parts of the story, you come to find that you are a character in a digital world — a video game. But not just any video game — an MMO. And you, Lea, must venture into the digital world known as CrossWorlds in order to unravel the secrets of your past.
As you progress through the game, you unveil more and more about yourself, learning how you got to this point in the first place. This doesn't sound too crazy of a story, but the gameplay implementation and appropriately paced storyline make for quite a captivating experience.
The story unfolds at a satisfying speed and the characters' development is genuinely gratifying, both fictionally and mechanically. The only critique I had was that the introductory segment felt like it took a little too long, dragging the tutorial into the gameplay for quite some time and keeping the player from getting into the real meat of the game.
All in all, CrossCode's story did not leave me wanting, not even in the slightest. It's deep, fun, heartwarming, and intelligent, all while never sacrificing great character development. Without spoiling anything, I will say that if you are someone that enjoys a good story, you will need to give CrossCode a look.
![][3]
### Gameplay
Yes, the story is great and all, but if there is one place that CrossCode truly shines, it has to be its gameplay. The game's mechanics are fast-paced, challenging, intuitive, and downright fun!
You start off with a dodge, block, melee, and ranged attack, each slowly developing over time as the character tree is unlocked. This all-too-familiar mix of combat elements balances skill and hack-n-slash mechanics in a way that doesn't conflict with one another.
The game utilizes this mix of skills to create some amazing puzzle solving and combat that helps CrossCode's gameplay truly stand out. Whether you are making your way through one of the four main dungeons, or you are taking a boss head on, you can't help but periodically stop and think “wow, this game is great!”
Though this has to be the game's strongest feature, it can also be the game's biggest downfall. Part of the reason that the story and character progression is so satisfying is because the combat and puzzle mechanics can be incredibly challenging, and that's putting it lightly.
There are times in which CrossCode's gameplay feels downright impossible. Bosses take an expert amount of focus, and dungeons require all of the patience you can muster up just to simply finish them.
![][4]
The game requires a type of dexterity I have not quite had to master yet. I mean, sure, there are more challenging puzzle games out there, yes, there are more difficult platformers, and of course there are more grueling RPGs, but adding all of these elements into one game while spurring the player along with an alluring story requires a level of mechanical balance that I haven't found in many other games.
And though there were times I felt the gameplay was flat out punishing, I was constantly reminded that this is simply not the case. Death doesn't cause serious character regression, you can take a break from dungeons when you feel overwhelmed, and there is a plethora of checkpoints throughout the game's most difficult parts to help the player along.
Where other games fall short by giving the player nothing to lose, this reality redeems CrossCode amid its rigorous gameplay. CrossCode may be one of the only games I know that takes two common flaws in games and holds the tension between them so well that it becomes one of the game's best strengths.
![][5]
### Design
One of the things that surprised me most about CrossCode was how well its world and sound design come together. Right off the bat, from the moment you boot the game up, it is clear the developers meant business when designing CrossCode.
Being set in a fictional MMO world, the game's character ensemble is vibrant and distinctive, each member having their own tone and personality. The game's sound and motion graphics are tactile and responsive, giving the player a healthy amount of feedback during gameplay. And the soundtrack behind the game is simply beautiful, ebbing and flowing between intense moments of combat and blissful moments of exploration.
If I had to fault CrossCode in this category it would have to be in the size of the map. Yes, the dungeons are long, and yes, the CrossWorlds map looks gigantic, but I still wanted more to explore outside the crippling dungeons. The game is beautiful and fluid, but akin to RPGs of yore (i.e. Zelda games pre-Breath of the Wild), I wish there was just a little more for me to freely explore.
It is obvious that the developers really cared about this aspect of the game, and you can tell they spent an incredible amount of time developing its design. CrossCode set itself up for success here in its plot and content, and the developers capitalize on the opportunity, knocking another category out of the park.
![][6]
### Conclusion
In the end, it is obvious how I feel about this game. And just in case you haven't caught on yet… I love it. It holds a near perfect balance between being difficult and rewarding, simple and complex, linear and open, making CrossCode one of [the best Linux games][7] out there.
Developed by [Radical Fish Games][8], CrossCode was officially released for Linux on September 21, 2018, seven years after development began. You can pick up the game over on [Steam][9], [GOG][10], or [Humble Bundle][11].
If you play games regularly, you may want to [subscribe to Humble Monthly][12] ([affiliate][13] link). For $12 per month, you'll get games worth over $100 (not all for Linux). Over 450,000 gamers worldwide use Humble Monthly.
--------------------------------------------------------------------------------
via: https://itsfoss.com/crosscode-game/
作者:[Phillip Prado][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/phillip/
[b]: https://github.com/lujun9972
[1]: http://www.cross-code.com/en/home
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Level-up.png?fit=800%2C451&ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Equpiment.png?fit=800%2C451&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-character-development.png?fit=800%2C451&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Environment.png?fit=800%2C451&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-dungeon.png?fit=800%2C451&ssl=1
[7]: https://itsfoss.com/free-linux-games/
[8]: http://www.radicalfishgames.com/
[9]: https://store.steampowered.com/app/368340/CrossCode/
[10]: https://www.gog.com/game/crosscode
[11]: https://www.humblebundle.com/store/crosscode
[12]: https://www.humblebundle.com/monthly?partner=itsfoss
[13]: https://itsfoss.com/affiliate-policy/

View File

@ -1,91 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Enjoy Netflix? You Should Thank FreeBSD
======
Netflix is one of the most popular streaming services in the world.
But you already know that. Don't you?
What you probably did not know is that Netflix uses [FreeBSD][1] to deliver its content to you.
Yes, that's right. Netflix relies on FreeBSD to build its in-house content delivery network (CDN).
A [CDN][2] is a group of servers located in various parts of the world. It is mainly used to deliver heavy content like images and videos to the end user faster than a centralized server could.
Instead of opting for a commercial CDN service, Netflix has built its own in-house CDN called [Open Connect][3].
Open Connect utilizes [custom hardware][4], the Open Connect Appliance. You can see it in the image below. It can handle 40 Gb/s of data and has a storage capacity of 248 TB.
![Netflix's Open Connect Appliance runs FreeBSD][5]
Netflix provides Open Connect Appliance to qualifying Internet Service Providers (ISP) for free. This way, substantial Netflix traffic gets localized and the ISPs deliver the Netflix content more efficiently.
This Open Connect Appliance runs on FreeBSD operating system and [almost exclusively runs open source software][6].
### Open Connect uses FreeBSD “Head”
![][7]
You would expect Netflix to use a stable release of FreeBSD for such a critical infrastructure but Netflix tracks the [FreeBSD head/current version][8]. Netflix says that tracking “head” lets them “stay forward-looking and focused on innovation”.
Here are the benefits Netflix sees of tracking FreeBSD:
* Quicker feature iteration
* Quicker access to new FreeBSD features
* Quicker bug fixes
* Enables collaboration
* Minimizes merge conflicts
* Amortizes merge “cost”
> Running FreeBSD “head” lets us deliver large amounts of data to our users very efficiently, while maintaining a high velocity of feature development.
>
> Netflix
Remember, even [Google uses Debian][9] testing instead of Debian stable. Perhaps these enterprises prefer the cutting edge features more than anything else.
Like Google, Netflix also plans to upstream any code they can. This should help FreeBSD and other BSD distributions based on FreeBSD.
So what does Netflix achieve with FreeBSD? Here are some quick stats:
> Using FreeBSD and commodity parts, we achieve 90 Gb/s serving TLS-encrypted connections with ~55% CPU on a 16-core 2.6-GHz CPU.
>
> Netflix
If you want to know more about Netflix and FreeBSD, you can refer to [this presentation from FOSDEM][10]. You can also watch the video of the presentation [here][11].
These days big enterprises rely mostly on Linux for their server infrastructure but Netflix has put their trust in BSD. This is a good thing for BSD community because if an industry leader like Netflix throws its weight behind BSD, others could follow the lead. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/netflix-freebsd-cdn/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.freebsd.org/
[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
[3]: https://openconnect.netflix.com/en/
[4]: https://openconnect.netflix.com/en/hardware/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
[6]: https://openconnect.netflix.com/en/software/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
[8]: https://www.bsdnow.tv/tutorials/stable-current
[9]: https://itsfoss.com/goobuntu-glinux-google/
[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way)
[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Installing Kali Linux on VirtualBox: Quickest & Safest Way
======
_**This tutorial shows you how to install Kali Linux on VirtualBox in Windows and Linux in the quickest way possible.**_
[Kali Linux][1] is one of the [best Linux distributions for hacking][2] and security enthusiasts.
Since it deals with a sensitive topic like hacking, it's like a double-edged sword. We have discussed it in the detailed Kali Linux review in the past, so I am not going to bore you with the same stuff again.
While you can install Kali Linux by replacing the existing operating system, using it via a virtual machine would be a better and safer option.
With VirtualBox, you can use Kali Linux as a regular application in your Windows/Linux system. It's almost the same as running VLC or a game in your system.
Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your host system (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe.
![][3]
### How to install Kali Linux on VirtualBox
I'll be using [VirtualBox][4] here. It is a wonderful open source virtualization solution for just about anyone (professional or personal use). It's available free of cost.
In this tutorial, we will talk about Kali Linux in particular, but you can install almost any other OS whose ISO file or pre-built virtual machine image is available.
**Note:** _The same steps apply for Windows/Linux running VirtualBox._
As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows 10 installed (don't hate me!), where I try to install Kali Linux in VirtualBox step by step.
And, the best part is even if you happen to use a Linux distro as your primary OS, the same steps will be applicable!
Wondering how? Let's see…
[Subscribe to Our YouTube Channel for More Linux Videos][5]
### Step by Step Guide to install Kali Linux on VirtualBox
_We are going to use a custom Kali Linux image made for VirtualBox specifically. You can also download the ISO file for Kali Linux and create a new virtual machine but why do that when you have an easy alternative?_
#### 1\. Download and install VirtualBox
The first thing you need to do is to download and install VirtualBox from Oracles official website.
[Download VirtualBox][6]
Once you download the installer, just double click on it to install VirtualBox. It's the same for [installing VirtualBox on Ubuntu][7]/Fedora Linux as well.
#### 2\. Download ready-to-use virtual image of Kali Linux
After installing it successfully, head to [Offensive Security's download page][8] to download the VM image for VirtualBox. If you would rather utilize [VMware][9], that is available too.
![][10]
As you can see the file size is well over 3 GB, you should either use the torrent option or download it using a [download manager][11].
[Kali Linux Virtual Image][8]
#### 3\. Install Kali Linux on Virtual Box
Once you have installed VirtualBox and downloaded the Kali Linux image, you just need to import it to VirtualBox in order to make it work.
Here's how to import the VirtualBox image for Kali Linux:
**Step 1:** Launch VirtualBox. You will notice an **Import** button; click on it.
![Click on Import button][12]
**Step 2:** Next, browse to the file you just downloaded and choose it to be imported (as you can see in the image below). The file name should start with “kali-linux” and end with the **.ova** extension.
![Importing Kali Linux image][13]
Once selected, proceed by clicking on **Next**.
**Step 3:** Now, you will be shown the settings for the virtual machine you are about to import. You can customize them or not; that is your choice. It is okay if you go with the default settings.
You need to select a path where you have sufficient storage available. I would never recommend the **C:** drive on Windows.
![Import hard drives as VDI][14]
Here, importing the hard drives as VDI means mounting the hard drives virtually by allocating the storage space that was set.
After you are done with the settings, hit **Import** and wait for a while.
**Step 4:** You will now see it listed. So, just hit **Start** to launch it.
You might get an error at first about USB 2.0 controller support; you can disable USB 2.0 to resolve it, or just follow the on-screen instructions to install an additional package that fixes it. And, you are done!
![Kali Linux running in VirtualBox][15]
The default username in Kali Linux is root and the default password is toor. You should be able to login to the system with it.
Do note that you should [update Kali Linux][16] before trying to install new applications or trying to hack your neighbor's WiFi.
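For reference, updating is just a couple of standard apt commands run inside Kali; a minimal sketch (run as root, which is the default login here, or prefix with sudo):
```
apt update           # refresh the package lists
apt full-upgrade -y  # upgrade all installed packages
```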
I hope this guide helps you easily install Kali Linux on VirtualBox. Of course, Kali Linux has a lot of useful tools in it for penetration testing. Good luck with that!
**Tip** : Both Kali Linux and Ubuntu are Debian-based. If you face any issues or error with Kali Linux, you may follow the tutorials intended for Ubuntu or Debian on the internet.
### Bonus: Free Kali Linux Guide Book
If you are just starting with Kali Linux, it will be a good idea to know how to use Kali Linux.
Offensive Security, the company behind Kali Linux, has created a guide book that explains the basics of Linux, the basics of Kali Linux, configuration, and setup. It also has a few chapters on penetration testing and security tools.
Basically, it has everything you need to get started with Kali Linux. And the best thing is that the book is available to download for free.
[Download Kali Linux Revealed for FREE][17]
Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-kali-linux-virtualbox/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.kali.org/
[2]: https://itsfoss.com/linux-hacking-penetration-testing/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1
[4]: https://www.virtualbox.org/
[5]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[6]: https://www.virtualbox.org/wiki/Downloads
[7]: https://itsfoss.com/install-virtualbox-ubuntu/
[8]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
[9]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
[11]: https://itsfoss.com/4-best-download-managers-for-linux/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
[16]: https://linuxhandbook.com/update-kali-linux/
[17]: https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI)
[#]: via: (https://itsfoss.com/flowblade-video-editor-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI
======
[Flowblade][1] is one of the rare [video editors that are only available for Linux][2]. It is not the feature set but the simplicity, flexibility, and being an open source project that counts.
However, with Flowblade 2.0 released recently, it is now more powerful and useful. A lot of new tools have been added, along with a complete workflow overhaul.
In this article, we shall take a look at what's new in Flowblade 2.0.
### New Features in Flowblade 2.0
Here are some of the major new changes in the latest release of Flowblade.
#### GUI Updates
![Flowblade 2.0][3]
This was a much needed change. I'm always looking for open source solutions that work as expected and have a great GUI.
So, in this update, you will observe a new custom theme set as the default, and it looks good.
Overall, the panel design, the toolbox and related elements have been reworked to look modern. The overhaul includes small touches like the cursor icon changing upon tool selection, and so on.
#### Workflow Overhaul
No matter what features you get to utilize, the workflow matters to people who regularly edit videos. So, it has to be intuitive.
With the recent release, they have made sure that you can configure and set the workflow as per your preference. Well, that is definitely flexible because not everyone has the same requirement.
#### New Tools
![Flowblade Video Editor Interface][4]
**Keyframe tool**: This enables editing and adjusting the Volume and Brightness [keyframes][5] on the timeline.
**Multitrim**: A combination of the trim, roll, and slip tools.
**Cut:** Available now as a tool in addition to the traditional cut at the playhead.
**Ripple trim:** A mode of the Trim tool that was not often used by many; it is now available as a separate tool.
#### More changes?
In addition to these major changes listed above, they have added some keyframe editing updates and compositors ( _AlphaXOR, Alpha Out, and Alpha_ ) to utilize alpha channel data to combine images.
A lot more tiny changes have taken place as well; you can check those out in the official [changelog][6] on GitHub.
### Installing Flowblade 2.0
If you use Debian or Ubuntu based Linux distributions, there are .deb binaries available for easily installing Flowblade 2.0.
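A minimal sketch of installing from one of those .deb binaries; the exact file name is an assumption here, so adjust it to whatever you downloaded from the release page:
```
# the .deb file name below is an assumption; use the name of the file you downloaded
sudo dpkg -i flowblade-2.0.0-1_all.deb
sudo apt-get install -f   # pull in any missing dependencies
```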
For the rest, you'll have to [install it using the source code][7].
All the files are available on its GitHub page. You can download it from the page below.
[Download Flowblade 2.0][8]
### Wrapping Up
If you are interested in video editing, perhaps you would like to follow the development of [Olive][9], a new open source video editor in development.
Now that you know about the latest changes and additions, what do you think about Flowblade 2.0 as a video editor? Is it good enough for you?
Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/flowblade-video-editor-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://github.com/jliljebl/flowblade
[2]: https://itsfoss.com/best-video-editing-software-linux/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2.jpg?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2-1.jpg?resize=800%2C450&ssl=1
[5]: https://en.wikipedia.org/wiki/Key_frame
[6]: https://github.com/jliljebl/flowblade/blob/master/flowblade-trunk/docs/RELEASE_NOTES.md
[7]: https://itsfoss.com/install-software-from-source-code/
[8]: https://github.com/jliljebl/flowblade/releases/tag/v2.0
[9]: https://itsfoss.com/olive-video-editor/

View File

@ -0,0 +1,133 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Review of Debian System Administrators Handbook)
[#]: via: (https://itsfoss.com/debian-administrators-handbook/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
Review of Debian System Administrator's Handbook
======
_**Debian System Administrator's Handbook is a free-to-download book that covers all the essential parts of Debian that a sysadmin might need.**_
This has been on my to-do review list for quite some time. The book was started by two French Debian developers, Raphael Hertzog and Roland Mas, to increase awareness about the Debian project in France. The book was a huge hit among francophone Linux users. The English translation followed soon after that.
### Debian Administrator's Handbook
![][1]
The [Debian Administrator's Handbook][2] is targeted at everyone from a newbie looking to understand what the [Debian project][3] is all about to somebody running Debian on a production server.
The latest version of the book covers Debian 8, while the current stable version is Debian 9. But that doesn't mean the book is outdated or of no use to Debian 9 users. Most of the book is valid for all Debian and Linux users.
Let me give you a quick summary of what this book covers.
#### Section 1 Debian Project
The first section sets the tone of the book where it gives a solid foundation to somebody who might be looking into Debian as to what it actually means. Some of it will probably be updated to match the current scenario.
#### Section 2 Using fictional case studies for different needs
The second section deals with various case scenarios where Debian could be used, the idea being how Debian can be applied in various hierarchical or functional settings. One aspect I felt should have been stressed is the culture mindshift and openness, which at least deserved a mention.
#### Section 3 & 4- Setups and Installation
The third section looks into existing setups. I do think it should have stressed more on documenting existing setups and migrating partial services and users before making a full-fledged transition. While all of the above seem like minor points, I have seen many of them come back to bite me during a transition.
Section Four covers the various ways you could install, how the installation process flows and things to keep in mind before installing a Debian System. Unfortunately, UEFI was not present at that point so it was not talked about.
#### Section 5 & 6 Packaging System and Updates
Section Five starts on how a binary package is structured and then goes on to tell how a source package is structured as well. It does mention several gotchas or tricky ways in which a sys-admin can be caught.
Section Six is perhaps where most sysadmins spend most of their time, apart from troubleshooting, which is another chapter altogether. While it starts with many of the most often used sysadmin commands, the interesting point I liked was on page 156, which is about better solver algorithms.
#### Section 7 Solving Problems and finding Relevant Solutions
Section Seven, on the other hand, speaks of the various problem scenarios and the various ways to proceed when you find yourself with a problem. In Debian and most GNU/Linux distributions, the keyword is patience. If you are patient, then many problems in Debian are resolved or can be resolved after a good night's sleep.
#### Section 8 Basic Configuration, Network, Accounts, Printing
Section Eight introduces you to the basics of networking and having single or multiple user accounts on the workstation. It goes a bit into user and group configuration and practices then gives a brief introduction to the bash shell and gets a brief overview of the [CUPS][4] printing daemon. There is much to explore here.
#### Section 9 Unix Service
Section 9 starts with an introduction to specific Unix services. While it starts with [systemd][5], much controversial, hated and reviled in many quarters, it also covers System V, which is still used by many a sysadmin.
#### Section 10, 11 & 12 Networking and Administration
Section 10 makes you dive into network infrastructure where it goes into the basics of Virtual Private Networks (OpenVPN), OpenSSH, the PKI credentials and some basics of information security. It also gets into basics of DNS, DHCP and IPv6 and ends with some tools which could help in troubleshooting network issues.
Section 11 starts with the basic configuration and workflow of a mail server and postfix. It goes a bit into depth as there is much to play with. It then goes into the popular web server Apache, the FTP file server, NFS, and CIFS with Windows shares via Samba. Again, much to explore therein.
Section 12 starts with advanced administration topics such as RAID and LVM, and when one is better than the other. Then it gets into virtualization, Xen, and gives a brief overview of LXC. Again, there is much more to explore than shared herein.
![Author Raphael Hertzog at a Debian booth circa 2013 | Image Credit][6]
#### Section 13 Workstation
Section 13 shares about having schemas for the X server, display managers, window managers, menu management, and the different desktops, i.e. GNOME, KDE, XFCE and others. It does mention LXDE among the others. The one omission I felt, which probably will be updated in a new release, is [Wayland][7] and [Xwayland][8]; this is rectified in the conclusion. Again, much to explore in this section as well.
#### Section 14 Security
Section 14 is somewhat comprehensive on what constitutes security and bits of threat analysis, but stops short, as the chapter's own introduction shares that it's a vast topic.
#### Section 15 Creating a Debian package
Section 15 explains the tools and processes to _debianize_ an application so it becomes part of the Debian archive and available for distribution on the 10 odd hardware architectures that Debian supports.
### Pros and Cons
Where Raphael and Roland have excelled is at breaking the visual monotony of the book by using a different style and structure wherever possible from the rest of the reading material. This compels the reader to refresh her eyes while at the same time focusing on the important matter at hand. The different visual style also indicates that something is somewhat more important from the authors' point of view.
One of the drawbacks, if I may call it that, is the absolute absence of humor in the book.
### Final Thoughts
I have been [using Debian][9] for a decade, so lots of it was a refresher for me. Some of it is outdated if I look at it from a Buster perspective, but it is invaluable as a historical artifact.
If you are looking to familiarize yourself with Debian, or looking to run Debian 8 or 9 as a production server for your business, I wouldn't be able to recommend a better book than this.
### Download Debian Administrator's Handbook
The Debian Handbook has been available in every Debian release after 2012. The [liberation][10] of the Debian Handbook was done in 2012 using [ulule][11].
You can download an electronic version of the Debian Administrator's Handbook in PDF, ePub or Mobi format from the link below:
[Download Debian Administrators Handbook][12]
You can also buy the paperback edition of the book if you want to support the amazing work of the authors.
[Buy the paperback edition][13]
Lastly, if you want to motivate Raphael, you can reward him by donating to his PayPal [account][14].
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-administrators-handbook/
作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/Debian-Administrators-Handbook-review.png?resize=800%2C450&ssl=1
[2]: https://debian-handbook.info/
[3]: https://www.debian.org/
[4]: https://www.cups.org
[5]: https://itsfoss.com/systemd-features/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/stand-debian-Raphael.jpg?resize=800%2C600&ssl=1
[7]: https://wayland.freedesktop.org/
[8]: https://en.wikipedia.org/wiki/X.Org_Server#XWayland
[9]: https://itsfoss.com/reasons-why-i-love-debian/
[10]: https://debian-handbook.info/liberation/
[11]: https://www.ulule.com/debian-handbook/
[12]: https://debian-handbook.info/get/now/
[13]: https://debian-handbook.info/get/
[14]: https://raphaelhertzog.com/

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries)
[#]: via: (https://itsfoss.com/libreoffice-drops-32-bit-support/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries
======
LibreOffice is my favorite office suite as a free and powerful [alternative to Microsoft Office tools on Linux][1]. Even when I use my Windows machine I prefer to have LibreOffice installed instead of Microsoft Office tools any day.
Now, with the recent [LibreOffice][2] 6.2 update, there's a lot of good stuff to talk about, along with some bad news.
### What's New in LibreOffice 6.2?
Let's have a quick look at the major new features in the [latest release of LibreOffice][3].
If you like Linux videos, don't forget to [subscribe to our YouTube channel][4] as well.
#### The new NotebookBar
![][5]
A new addition to the interface that is optional and not enabled by default. In order to enable it, go to **View -> User Interface -> Tabbed**.
You can either set it as a tabbed layout or a grouped compact layout.
While it is not something mind-blowing, it still counts as a significant user interface update, considering the variety of user preferences.
#### Icon Theme
![][6]
A new set of icons is now available to choose from. I will definitely utilize the new set of icons; they look good!
#### Platform Compatibility
With the new update, the compatibility has been improved across all the platforms (Mac, Windows, and Linux).
#### Performance Improvements
This shouldn't concern you if you didn't have any issues. But, still, the more they improve here, the better it is for everyone.
They have removed unnecessary animations, worked on latency reduction, avoided repeated re-layouts, and more such things to improve the performance.
#### More fixes and improvements
A lot of bugs have been fixed in this new update along with little tweaks here and there for all the tools (Writer, Calc, Draw, Impress).
To get to know all the technical details, you should check out their [release notes][7].
### The Sad News: Dropping the support for 32-bit binaries
Of course, this is not a feature. But it was bound to happen because it was anticipated a few months ago. LibreOffice will no longer provide 32-bit binary releases.
This is inevitable. [Ubuntu has dropped 32-bit support][8]. Many other Linux distributions have also stopped supporting 32-bit processors. The number of [Linux distributions still supporting a 32-bit architecture][9] is fast dwindling.
For the future versions of LibreOffice on 32-bit systems, you'll have to rely on your distribution to provide it to you. You cannot download the binaries anymore.
### Installing LibreOffice 6.2
![][10]
Your Linux distribution should be providing this update sooner or later.
Arch-based Linux users should be getting it already while Ubuntu and Debian users would have to wait a bit longer.
If you cannot wait, you should download it and [install it from the deb file][11]. Do remove the existing LibreOffice install before using the DEB file.
[Download LibreOffice 6.2][12]
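For the deb route, the download is a tarball bundling the individual packages; a hedged sketch of the usual steps follows, where the archive and directory names are assumptions based on the 6.2 download, so adjust them to the file you actually fetched:
```
# remove the existing install first, as noted above
sudo apt remove --purge "libreoffice*"
# unpack the downloaded archive and install the bundled debs
# (the archive and directory names are assumptions; match them to your download)
tar xzf LibreOffice_6.2.0_Linux_x86-64_deb.tar.gz
cd LibreOffice_6.2.0*_Linux_x86-64_deb/DEBS
sudo dpkg -i *.deb
```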
If you don't want to use the deb file, you may use the official PPA, which should provide LibreOffice 6.2 before Ubuntu does (it doesn't have the 6.2 release at the moment). It will update your existing LibreOffice install.
```
sudo add-apt-repository ppa:libreoffice/ppa
sudo apt update
sudo apt install libreoffice
```
### Wrapping Up
LibreOffice 6.2 is definitely a major step up to keep it as a better alternative to Microsoft Office for Linux users.
Do you happen to use LibreOffice? Do these updates matter to you? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/libreoffice-drops-32-bit-support/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[2]: https://www.libreoffice.org/
[3]: https://itsfoss.com/libreoffice-6-0-released/
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/libreoffice-tabbed.png?resize=800%2C434&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/Libreoffice-style-elementary.png?ssl=1
[7]: https://wiki.documentfoundation.org/ReleaseNotes/6.2
[8]: https://itsfoss.com/ubuntu-drops-32-bit-desktop/
[9]: https://itsfoss.com/32-bit-os-list/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/libre-office-6-2-release.png?resize=800%2C450&ssl=1
[11]: https://itsfoss.com/install-deb-files-ubuntu/
[12]: https://www.libreoffice.org/download/download/

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular)
[#]: via: (https://itsfoss.com/earliest-linux-distros/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
The Earliest Linux Distros: Before Mainstream Distros Became So Popular
======
In this throwback history article, we've tried to look back into how some of the earliest Linux distributions evolved and came into being as we know them today.
![][1]
Here we have tried to explore how the idea of popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu and many others came into being after the first Linux kernel became available.
As Linux was initially released in the form of a kernel in 1991, the distros we know today were made possible with the help of numerous collaborators throughout the world who created shells, libraries, compilers and related packages to make it a complete operating system.
### 1\. The first known “distro” by HJ Lu
The way we know Linux distributions today goes back to 1992, when the first known distro-like tools to get access to Linux were released by HJ Lu. They consisted of two 5.25” floppy diskettes:
![Linux 0.12 Boot and Root Disks | Photo Credit][2]
* **LINUX 0.12 BOOT DISK** : The “boot” disk was used to boot the system first.
* **LINUX 0.12 ROOT DISK** : The second “root” disk for getting a command prompt for access to the Linux file system after booting.
To install 0.12 on a hard drive, one had to use a hex editor to edit its master boot record (MBR) and that was quite a complex process, especially during that era.
Feeling too nostalgic?
You can [install cool-retro-term application][3] that gives you a Linux terminal in the vintage looks of the 90s computers.
### 2\. MCC Interim Linux
![MCC Linux 0.99.14, 1993 | Image Credit][4]
Initially released in the same year as “LINUX 0.12” by Owen Le Blanc of Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution for novice users with a menu driven installer and end user/programming tools. Also in the form of a collection of diskettes, it could be installed on a system to provide a basic text-based environment.
MCC Interim Linux was much more user-friendly than 0.12 and the installation process on a hard drive was much easier and similar to modern ways. It did not require using a hex editor to edit the MBR.
Though it was first released in February 1992, it was also available for download through FTP since November that year.
### 3\. TAMU Linux
![TAMU Linux | Image Credit][5]
TAMU Linux was developed by Aggies at Texas A&M with the Texas A&M Unix & Linux Users Group in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text based operating system.
### 4\. Softlanding Linux System (SLS)
![SLS Linux 1.05, 1994 | Image Credit][6]
“Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it.
Many of the popular distros the present community is most familiar with, evolved via SLS. Two of them are:
* **Slackware** : One of the earliest Linux distros, Slackware was created by Patrick Volkerding in 1993. Slackware is based on SLS and was one of the very first Linux distributions.
* **Debian** : An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian.
### 5\. Yggdrasil
![LGX Yggdrasil Fall 1993 | Image Credit][7]
Released in December 1992, Yggdrasil was the first distro to give birth to the idea of Live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. It could automatically configure itself on system hardware as “Plug-and-Play”, which is a very regular and well-known feature today. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux.
![Yggdrasils Plug-and-Play Promo | Image Credit][8]
Their motto was “Free Software For The Rest of Us”.
In the late 90s, one very popular distro was Mandrake Linux, first released in 1998; it later became [Mandriva][9] by unifying the French _Mandrake Linux_ distribution with the Brazilian _Conectiva Linux_ distribution. It had a release lifetime of 18 months for updates related to Linux and system software, and desktop-based updates were released every year. It also had server versions with 5 years of support. Now we have [Open Mandriva][10].
If you have more nostalgic distros to share from the earliest days of Linux release, please share with us in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/earliest-linux-distros/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1
[3]: https://itsfoss.com/cool-retro-term/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1
[9]: https://en.wikipedia.org/wiki/Mandriva_Linux
[10]: https://www.openmandriva.org/

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Decentralized Slack Alternative Riot Releases its First Stable Version)
[#]: via: (https://itsfoss.com/riot-stable-release/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
Decentralized Slack Alternative Riot Releases its First Stable Version
======
Remember [Riot messenger][1]? It's a decentralized, encrypted, open source messaging application based on the [Matrix protocol][2].
I wrote a [detailed tutorial on using Riot on the Linux desktop][3]. The software was in beta back then. The first stable version, Riot 1.0, was released a few days ago. Wonder what's new?
![][4]
### New Features in Riot 1.0
Let's look at some of the changes introduced in the move to Riot 1.0.
#### New Looks and Branding
![][5]
The first thing you see is the welcome screen, which has a nice background and a refreshed sky and dark blue logo that is cleaner and clearer than the previous one.
The welcome screen gives you the option to sign in to an existing Riot account on either matrix.org or any other homeserver, or to create an account. There is also the option to talk with the Riot Bot and to browse the room directory listing.
#### Changing Homeservers and Making your own homeserver
![Make your own homeserver][6]
As you can see, here you can change the homeserver. The idea of Riot, as shared before, is to provide [de-centralized][7] chat services without foregoing the simplicity that centralized services offer. For those who want to run their own homeserver, you need the new [matrix-synapse 0.99.1.1 reference homeserver][8].
You can find an unofficial list of Matrix homeservers [here][9], although it's far from complete.
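If you are curious what setting up a homeserver involves, the sketch below shows one common route: installing Synapse from PyPI into a virtualenv and generating its config. Treat it as a rough outline rather than the official procedure; the domain `matrix.example.com` is a placeholder and the exact steps for your distribution may differ (the Synapse release page linked above is authoritative).

```
# Install the Synapse reference homeserver into a virtualenv
# (assumes python3 and the venv module are available)
python3 -m venv ~/synapse-env
~/synapse-env/bin/pip install matrix-synapse

# Generate a config for a placeholder domain, then start the server
~/synapse-env/bin/python -m synapse.app.homeserver \
    --server-name matrix.example.com \
    --config-path homeserver.yaml \
    --generate-config --report-stats=no
~/synapse-env/bin/synctl start
```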
#### Internationalization and Languages
One of the more interesting changes is that the UI is now i18n-aware and has been translated into Català, Dansk, Deutsch and Spanish, along with English (US), which was the default when I installed. We can hope to see some more improvements in language support going ahead.
#### Favoriting a channel
![Favoriting a channel in Riot][10]
One of the things that has changed since last time is how you favorite a channel. Now, as you can see, you select the channel, click on the three vertical dots in it, and then favorite it or do whatever else you want with it.
#### Making changes to your profile and Settings
![The different settings you can change in Riot][11]
Just click the drop-down box beside your avatar and you get the settings box. Click on it and it presents a wide variety of settings you can change.
As you can see, there are a lot more choices, and the language is easier than before.
#### Encryption and E2E
![Riot encryption screen][12]
One of the most talked-about aspects of Riot is encryption, specifically end-to-end encryption. This is still a work in progress.
The new release brings the focus on two enhancements in encryption: key backup and emoji device verification (still in progress).
With Riot 1.0, you can automatically back up your keys on your server. The backup itself is encrypted with a password so that it is stored securely. With this, you'll never lose your encrypted messages, because you won't lose your encryption key.
You will soon be able to verify your device with emoji, which is easier than matching long strings, isn't it?
**In the end**
Using Riot requires a bit of patience. Once you get the hang of it, there is nothing like it. This decentralized messaging app becomes a valuable addition to the arsenal of privacy-conscious people.
Riot is an important tool in the continuous effort to keep our data secure and our privacy intact. The new major release makes it even more awesome. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/riot-stable-release/
作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://about.riot.im/
[2]: https://matrix.org/blog/home/
[3]: https://itsfoss.com/riot-desktop/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-messenger.jpg?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-im-web-1.0-welcome-screen.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-change-homeservers.jpg?resize=800%2C420&ssl=1
[7]: https://medium.com/s/story/why-decentralization-matters-5e3f79f7638e
[8]: https://github.com/matrix-org/synapse/releases/tag/v0.99.1.1
[9]: https://www.hello-matrix.net/public_servers.php
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-channel-preferences.jpg?resize=800%2C420&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-settings-1-e1550427251686.png?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-encryption.jpg?fit=800%2C572&ssl=1

View File

@@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevOps for Network Engineers: Linux Foundations New Training Course)
[#]: via: (https://itsfoss.com/devops-for-network-engineers/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
DevOps for Network Engineers: Linux Foundations New Training Course
======
_**Linux Foundation has launched a [DevOps course for sysadmins][1] and network engineers. They are also offering a limited-time 40% discount on the launch.**_
DevOps is no longer a buzzword. It has become a necessity for any IT company.
The roles and responsibilities of sysadmins and network engineers have changed as well. They are required to have knowledge of the DevOps tools popular in the IT industry.
If you are a sysadmin or a network engineer, you can no longer laugh off DevOps. It's time to learn new skills to stay relevant in today's rapidly changing IT industry; otherwise the automation trend might cost you your job.
And who knows it better than the Linux Foundation, the official organization behind the Linux project and the employer of Linux creator Linus Torvalds?
[Linux Foundation has a number of courses on Linux and related technologies][2] that help you in getting a job or improving your existing skills at work.
The [latest course offering][1] from Linux Foundation specifically focuses on sysadmins who would like to familiarize themselves with DevOps tools.
### DevOps for Network Engineers Course
![][3]
[This course][1] is intended for existing sysadmins and network engineers. So you need to have some knowledge of Linux system administration, shell scripting and Python.
The course will help you with:
* Integrating into a DevOps/Agile environment
* Familiarizing with commonly used DevOps tools
* Collaborating on projects in a DevOps fashion
* Confidently working with software and configuration files in version control
* Recognizing the roles of SCRUM team members
* Confidently applying Agile principles in an organization
This is the course outline:
* Chapter 1. Course Introduction
* Chapter 2. Modern Project Management
* Chapter 3. The DevOps Process: A Network Engineer's Perspective
* Chapter 4. Network Simulation and Testing with [Mininet][4]
* Chapter 5. [OpenFlow][5] and [ONOS][6]
* Chapter 6. Infrastructure as Code ([Ansible][7] Basics)
* Chapter 7. Version Control ([Git][8])
* Chapter 8. Continuous Integration and Continuous Delivery ([Jenkins][9])
* Chapter 9. Using [Gerrit][10] in DevOps
* Chapter 10. Jenkins, Gerrit and Code Review for DevOps
* Chapter 11. The DevOps Process and Tools (Review)
Altogether, you get 25-30 hours of course material. The online course is self-paced and you can access the material for one year from the date of purchase.
_**Unlike most other courses on Linux Foundation, this is NOT a video course.**_
There is no certification for this course because it is more focused on learning and improving skills.
#### Get the course at a 40% discount (limited time)
The course costs $299, but since it has just launched, they are offering a 40% discount till March 1st, 2019. You can get the discount by using the **DEVOPSNET** coupon code at checkout.
[DevOps for Network Engineers][1]
By the way, if you are interested in Open Source development, you can benefit from the “[Introduction to Open Source Development, Git, and Linux][11]” video course. You can get a limited-time 50% discount using the **OSDEV50** code at checkout.
Staying relevant is absolutely necessary in any industry, not just the IT industry. Learning new skills that are in demand in your field is perhaps the best way to do that.
What do you think? What are your views on the current automation trend? How would you go about it?
_Disclaimer: This post contains affiliate links. Please read our_ [_affiliate policy_][12] _for more details._
--------------------------------------------------------------------------------
via: https://itsfoss.com/devops-for-network-engineers/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: http://shrsl.com/1glcb
[2]: https://shareasale.com/r.cfm?b=1074561&u=747593&m=59485&urllink=&afftrack=
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/DevOps-for-Network-Engineers-800x450.png?resize=800%2C450&ssl=1
[4]: http://mininet.org/
[5]: https://en.wikipedia.org/wiki/OpenFlow
[6]: https://onosproject.org/
[7]: https://www.ansible.com/
[8]: https://itsfoss.com/basic-git-commands-cheat-sheet/
[9]: https://jenkins.io/
[10]: https://www.gerritcodereview.com/
[11]: https://shareasale.com/r.cfm?b=1193750&u=747593&m=59485&urllink=&afftrack=
[12]: https://itsfoss.com/affiliate-policy/

View File

@@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups)
[#]: via: (https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS)
[#]: author: (Ibrahim Haddad https://www.linux.com/USERS/IBRAHIM)
Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups
======
![Open Source][1]
If you want a deeper understanding of defining, implementing, and improving open source compliance programs within your organizations—this ebook is a must read. Download now.
[Creative Commons Zero][2]
Unsplash
In the last few years, we have witnessed the unprecedented growth of open source in all industries—from the increased adoption of open source software in products and services, to the extensive growth in open source contributions and the releasing of proprietary technologies under an open source license. It has been an incredible experience to be a part of.
![Open Source][3]
[The Linux Foundation][4]
As many have stated, Open Source is the New Normal, Open Source is Eating the World, Open Source is Eating Software, etc., all of which are true statements. To that extent, I'd like to add one more maxim: Open Source is Eating the Startup Ecosystem. It is almost impossible to find a technology startup today that does not rely in one shape or form on open source software to boot up its operation and develop its product offering. As a result, we are operating in a space where open source due diligence is now a mandatory exercise in every M&A transaction. These exercises evaluate the open source practices of an organization and scope out all open source software used in product(s)/service(s) and how it interacts with proprietary components—all of which is necessary to assess the value creation of the company in relation to open source software.
Being intimately involved in this space has allowed me to observe, learn, and apply many open source best practices. I decided to chronicle these learnings in an ebook as a contribution to the [OpenChain project][5]: [Assessment of Open Source Practices as part of Due Diligence in Merger and Acquisition Transactions][6]. This ebook addresses the basic question of: How does one evaluate open source practices in a given organization that is an acquisition target? We address this question by offering a path to evaluate these practices along with appropriate checklists for reference. Essentially, it explains how the acquirer and the target company can prepare for this due diligence, offers an explanation of the audit process, and provides general recommended practices for ensuring open source compliance.
It is important to note that not every organization will see a need to implement every practice we recommend. Some organizations will find alternative practices or implementation approaches to achieve the same results. Appropriately, an organization will adapt its open source approach based upon the nature and amount of the open source it uses, the licenses that apply to the open source it uses, the kinds of products it distributes or services it offers, and the design of the products or services themselves.
If you are involved in assessing the open source and compliance practices of organizations, or involved in an M&A transaction focusing on open source due diligence, or simply want to have a deeper level of understanding of defining, implementing, and improving open source compliance programs within your organizations—this ebook is a must read. [Download the Brief][6].
This article originally appeared at the [Linux Foundation.][7]
--------------------------------------------------------------------------------
via: https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS
作者:[Ibrahim Haddad][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/USERS/IBRAHIM
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-alexandre-godreau-510220-unsplash.jpg?itok=2udo1XKo (Open Source)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/assessmentofopensourcepractices_ebook_mockup-768x994.png?itok=qpLKAVGR (Open Source)
[4]: /LICENSES/CATEGORY/LINUX-FOUNDATION
[5]: https://www.openchainproject.org/
[6]: https://www.linuxfoundation.org/open-source-management/2019/03/assessment-open-source-practices/
[7]: https://www.linuxfoundation.org/blog/2019/03/open-source-is-eating-the-startup-ecosystem-a-guide-for-assessing-the-value-creation-of-startups/

View File

@@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mageia Linux Is a Modern Throwback to the Underdog Days)
[#]: via: (https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
Mageia Linux Is a Modern Throwback to the Underdog Days
======
![Welcome to Mageia][1]
The Mageia Welcome App is a boon for new Linux users.
[Used with permission][2]
I've been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.
Well, that didn't happen. In fact, Linux Mandrake didn't even stand the test of time. It was renamed Mandriva and rebranded. Mandriva retained popularity but eventually came to a halt in 2011. The company disbanded, sending all those star developers to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of [OpenMandriva][3], as well as another distribution called [Mageia Linux][4].
Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadows of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS, etc.), but it's never faltered. As of this writing, Mageia is listed as number 26 on the [Distrowatch][5] Page Hit Ranking chart and is enjoying release number 6.1.
### What Sets Mageia Apart?
This question has become quite important when looking at Linux distributions, considering just how many distros there are, many of which are hard to tell apart. If you've seen one KDE, GNOME, or Xfce distribution, you've seen them all, right? Anyone who's used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. It's not about what you do with the desktop; it's how you put everything together to improve the user experience.
Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution at presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but it's slightly askew from what you might expect. In similar fashion to most modern distributions, you boot up the live instance and click on the Install icon (Figure 1).
![Installing Mageia][6]
Figure 1: Installing Mageia from the Live instance.
[Used with permission][2]
Once you've launched the installation app, it's fairly straightforward, although not quite as simple as some other versions of Linux. New users might hesitate when they are presented with the partition choice between Use free space or Custom disk partition (remember, I'm talking about new users here). This type of user might prefer a bit simpler verbiage. Consider this: What if you were presented (at the partition section) with two choices:
* Basic Install
* Custom Install
The Basic install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom install would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader would go and what options to apply.
The next possible confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.
![bootloader][7]
Figure 2: Configuring the Mageia bootloader.
[Used with permission][2]
The bootloader configuration screen also allows you to password protect GRUB2. Because of the layout of this screen, it could be confused with the root user password. It's not. If you don't want to password protect GRUB2, leave this blank. In the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could cause confusion for new users.
![bootloader options][8]
Figure 3: Advanced bootloader options can be configured here.
[Used with permission][2]
Click Finish and the installation will complete. You might have noticed the absence of user configuration or root user password options. With the first stage of the installation complete, you reboot the machine, remove the installer media, and (when the machine reboots) you'll be prompted to configure both the root user password and a standard user account (Figure 4).
![Configuring your users][9]
Figure 4: Configuring your users.
[Used with permission][2]
And that's all there is to the Mageia installation.
### Welcome to Mageia
Once you log into Mageia, you'll be greeted by something every Linux distribution should use—a welcome app (Figure 5).
![welcome app][10]
Figure 5: The Mageia welcome app is a new user's best friend.
[Used with permission][2]
From this welcome app, you can get information about the distribution, get help, and join communities. The importance of having such an approach to greet users at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available, should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but it's important information for users to have at the ready.
Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop shop is where users can take care of installing/updating software, configuring media sources for installation, configuring update frequency, managing/configuring hardware, configuring network devices (e.g., VPNs, proxies, and more), configuring system services, viewing logs, opening an administrator console, creating network shares, and so much more. This is as close to the openSUSE YaST tool as you'll find (without using either SUSE or openSUSE).
![Control Center][11]
Figure 6: The Mageia Control Center is an outstanding system management tool.
[Used with permission][2]
Beyond those two tools, you'll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, you'd be hard-pressed to find another tool you need to install to get your work done. It's that complete a distribution.
### Target Audience
Figuring out the Mageia Linux target audience is a tough question to answer. If new users can get past the somewhat confusing installation (which isn't really that challenging, just slightly misleading), using Mageia Linux is a dream.
The slick, barely modified KDE desktop, combined with the welcome app and control center, makes for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the verbiage of the installation, Mageia Linux could be one of the greatest new-user Linux experiences available. Until then, new users should make sure they understand what they're getting into with the installation portion of this take on the Linux platform.
--------------------------------------------------------------------------------
via: https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS
作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia-main.jpg?itok=ZmkbMxfM (Welcome to Mageia)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://www.openmandriva.org/
[4]: https://www.mageia.org/en/
[5]: https://distrowatch.com/
[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_1.jpg?itok=RYXPU70j (Installing Mageia)
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_2.jpg?itok=m2IPxgA4 (bootloader)
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_3.jpg?itok=Bs2PPrMF (bootloader options)
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_4.jpg?itok=YZBIZ0Ua (Configuring your users)
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_5.jpg?itok=gYcTfUKv (welcome app)
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_6.jpg?itok=eSl2qpPp (Control Center)

View File

@@ -1,301 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Configure sudo Access In Linux?)
[#]: via: (https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Configure sudo Access In Linux?
======
The root user has full control of a Linux system.
The root user is the most powerful user in the system and can perform any action on it.
Even so, you should not hand out root access to other users: if they do something wrong, there may be no way to rectify it.
So what is the solution?
We can grant sudo permission to the corresponding user to handle this situation.
The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user.
They can perform most administrative operations, but not everything root can do.
### What Is sudo?
sudo is a program which can be used by a normal user to execute a command as the super user or another user, as specified by the security policy.
sudo users' access is controlled by the `/etc/sudoers` file.
### What Is An Advantage Of sudo Users?
sudo is a safe way to run a command in a Linux system if you are not familiar with it.
* The Linux system keeps logs in the `/var/log/secure` and `/var/log/auth.log` files, where you can verify what actions were performed by a sudo user.
* Every time, it prompts for a password before performing the current action. So you get a moment to verify the action you are about to perform; if you feel it's not the correct action, you can safely exit right there without performing it.
The procedure differs between RHEL-based systems such as Red Hat (RHEL), CentOS and Oracle Enterprise Linux (OEL), and Debian-based systems such as Debian, Ubuntu and Linux Mint.
We will teach you how to perform this on both families of distributions in this article.
It can be done in three ways on both families.
* Add the user into the corresponding group. For RHEL-based systems, we need to add the user into the `wheel` group. For Debian-based systems, we need to add the user into the `sudo` or `admin` group.
* Add the user into the `/etc/group` file manually.
* Add the user into the `/etc/sudoers` file using visudo.
### How To Configure sudo Access In RHEL/CentOS/OEL Systems?
It can be done on RHEL-based systems such as Red Hat (RHEL), CentOS and Oracle Enterprise Linux (OEL) using the following three methods.
### Method-1: How To Grant The Super User Access To A Normal User In Linux Using wheel Group?
wheel is a special group in RHEL-based systems that provides additional privileges, empowering a user to execute restricted commands as the super user.
Note that the `wheel` group must be enabled in the `/etc/sudoers` file to gain this access.
```
# grep -i wheel /etc/sudoers
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
# %wheel ALL=(ALL) NOPASSWD: ALL
```
I assume that we have already created a user account for this purpose. In my case, I'm going to use the `daygeek` user account.
Run the following command to add the user into the wheel group.
```
# usermod -aG wheel daygeek
```
We can double-check this by running the following command.
```
# getent group wheel
wheel:x:10:daygeek
```
I'm going to check whether the `daygeek` user can access a file which is owned by the root user.
```
$ tail -5 /var/log/secure
tail: cannot open '/var/log/secure' for reading: Permission denied
```
I got an error when I tried to access the `/var/log/secure` file as a normal user. Now I'm going to access the same file with sudo; let's see the magic.
```
$ sudo tail -5 /var/log/secure
[sudo] password for daygeek:
Mar 17 07:01:56 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session closed for user root
Mar 17 07:05:10 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
Mar 17 07:05:10 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
```
### Method-2: How To Grant The Super User Access To A Normal User In RHEL/CentOS/OEL using /etc/group file?
We can manually add a user into the wheel group by editing the `/etc/group` file.
Just open the file and append the corresponding user to the appropriate group line to achieve this.
```
$ grep -i wheel /etc/group
wheel:x:10:daygeek,user1
```
In this example, I'm going to use the `user1` user account.
I'm going to check whether the `user1` user has sudo access by restarting the `Apache` service on the system. Let's see the magic.
```
$ sudo systemctl restart httpd
[sudo] password for user1:
$ sudo grep -i user1 /var/log/secure
[sudo] password for user1:
Mar 17 07:09:47 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
Mar 17 07:10:40 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
Mar 17 07:12:35 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/grep -i httpd /var/log/secure
```
### Method-3: How To Grant The Super User Access To A Normal User In Linux Using /etc/sudoers file?
sudo users' access is controlled by the `/etc/sudoers` file. So, simply add the user into the sudoers file.
Just append the desired user to the `/etc/sudoers` file by using the visudo command.
```
# grep -i user2 /etc/sudoers
user2 ALL=(ALL) ALL
```
In this example, I'm going to use the `user2` user account.
I'm going to check whether the `user2` user has sudo access by restarting the `MariaDB` service on the system. Let's see the magic.
```
$ sudo systemctl restart mariadb
[sudo] password for user2:
$ sudo grep -i mariadb /var/log/secure
[sudo] password for user2:
Mar 17 07:23:10 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
Mar 17 07:26:52 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/grep -i mariadb /var/log/secure
```
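The `user2 ALL=(ALL) ALL` entry above grants unrestricted sudo access. sudoers rules can also be narrowed down to specific commands. As a minimal sketch (the username and command path are examples, not part of the original setup), the following entry, added via visudo, would let a user restart only the Apache service, with no password prompt:

```
# Example sudoers entry (always edit with visudo):
# user2 may run only this one command, as root, without a password
user2 ALL=(root) NOPASSWD: /bin/systemctl restart httpd
```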
### How To Configure sudo Access In Debian/Ubuntu Systems?
It can be done on Debian-based systems such as Debian, Ubuntu and Linux Mint using the following three methods.
### Method-1: How To Grant The Super User Access To A Normal User In Linux Using sudo or admin Groups?
sudo and admin are special groups in Debian-based systems that provide additional privileges, empowering a user to execute restricted commands as the super user.
Note that the `sudo` or `admin` group must be enabled in the `/etc/sudoers` file to gain this access.
```
# grep -i 'sudo\|admin' /etc/sudoers
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
```
I assume that we have already created a user account for this purpose. In my case, I'm going to use the `2gadmin` user account.
Run the following command to add the user into the sudo group.
```
# usermod -aG sudo 2gadmin
```
We can double-check this by running the following command.
```
# getent group sudo
sudo:x:27:2gadmin
```
I'm going to check whether the `2gadmin` user can access a file which is owned by the root user.
```
$ less /var/log/auth.log
/var/log/auth.log: Permission denied
```
I got an error when I tried to access the `/var/log/auth.log` file as a normal user. Now I'm going to access the same file with sudo; let's see the magic.
```
$ sudo tail -5 /var/log/auth.log
[sudo] password for 2gadmin:
Mar 17 20:39:47 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/bin/bash
Mar 17 20:39:47 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
Mar 17 20:40:23 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
Mar 17 20:40:48 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/usr/bin/tail -5 /var/log/auth.log
Mar 17 20:40:48 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
```
Alternatively, we can achieve the same result by adding the user to the `admin` group.
Run the following command to add the user into the admin group.
```
# usermod -aG admin user1
```
We can double-check this by running the following command.
```
# getent group admin
admin:x:1011:user1
```
Let's see the output.
```
$ sudo tail -2 /var/log/auth.log
[sudo] password for user1:
Mar 17 20:53:36 Ubuntu18 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/usr/bin/tail -2 /var/log/auth.log
Mar 17 20:53:36 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user1(uid=0)
```
### Method-2: How To Grant The Super User Access To A Normal User In Debian/Ubuntu using /etc/group file?
We can manually add a user into the sudo or admin group by editing the `/etc/group` file.
Just open the file and append the corresponding user to the appropriate group line to achieve this.
```
$ grep -i sudo /etc/group
sudo:x:27:2gadmin,user2
```
In this example, I'm going to use the `user2` user account.
I'm going to check whether the `user2` user has sudo access by restarting the `Apache` service on the system. Let's see the magic.
```
$ sudo systemctl restart apache2
[sudo] password for user2:
$ sudo tail -f /var/log/auth.log
[sudo] password for user2:
Mar 17 21:01:04 Ubuntu18 systemd-logind[559]: New session 22 of user user2.
Mar 17 21:01:04 Ubuntu18 systemd: pam_unix(systemd-user:session): session opened for user user2 by (uid=0)
Mar 17 21:01:33 Ubuntu18 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart apache2
```
### Method-3: How To Grant The Super User Access To A Normal User In Linux Using /etc/sudoers file?
sudo users' access is controlled by the `/etc/sudoers` file. So, simply add the user into the sudoers file.
Just append the desired user to the `/etc/sudoers` file by using the visudo command.
```
# grep -i user3 /etc/sudoers
user3 ALL=(ALL:ALL) ALL
```
In this example, I'm going to use the `user3` user account.
I'm going to check whether the `user3` user has sudo access by restarting the `MariaDB` service on the system. Let's see the magic.
```
$ sudo systemctl restart mariadb
[sudo] password for user3:
$ sudo tail -f /var/log/auth.log
[sudo] password for user3:
Mar 17 21:12:32 Ubuntu18 systemd-logind[559]: New session 24 of user user3.
Mar 17 21:12:49 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
Mar 17 21:12:49 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
Mar 17 21:12:53 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
Mar 17 21:13:08 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log
Mar 17 21:13:08 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972

View File

@@ -0,0 +1,150 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lets try dwm — dynamic window manager)
[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
Let's try dwm — dynamic window manager
======
![][1]
If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — the dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.
You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through them using keyboard shortcuts. This article helps you get started with dwm.
## **Installation**
To install dwm on Fedora, run:
```
$ sudo dnf install dwm dwm-user
```
The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration, as will be explained later in this article.
Additionally, to be able to lock the screen when needed, we'll also install _slock_ — a simple X display locker.
```
$ sudo dnf install slock
```
However, you can use a different one based on your personal preference.
## **Quick start**
To start dwm, choose the _dwm-user_ option on the login screen.
![][2]
After you log in, you'll see a very simple desktop. In fact, the only thing there will be a bar at the top listing the nine tags that represent workspaces and a _[]=_ symbol that represents the layout of your windows.
### Launching applications
Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There's also a shortcut, _Alt+Shift+Enter_, for opening a terminal.
Now that some apps are running, have a look at the layouts.
### Layouts
There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._
![][3]
The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area.
![][4]
The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_.
Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _><>_.
### Workspaces and tags
Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9._ You can even view multiple tags at once by clicking on their number using the secondary mouse button.
Windows can be moved between different tags by highlighting them using your mouse, and pressing _Alt+Shift+1..9._
## **Configuration**
To make dwm as minimalistic as possible, it doesn't use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don't worry: in Fedora it's as simple as editing one file in your home directory, and everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora.
First, you need to copy the file into your home directory using a command similar to the following:
```
$ mkdir ~/.dwm
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
```
You can get the exact path by running _man dwm-start._
Second, just edit the _~/.dwm/config.h_ file. As an example, let's configure a new shortcut to lock the screen by pressing _Alt+Shift+L_.
Since we've installed the _slock_ package mentioned earlier in this post, we need to add the following two lines to the file to make it work:
Under the _/* commands */_ comment, add:
```
static const char *slockcmd[] = { "slock", NULL };
```
And the following line into _static Key keys[]_ :
```
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
```
In the end, it should look as follows (added lines are highlighted):
```
...
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[] = { "st", NULL };
static const char *slockcmd[] = { "slock", NULL };
static Key keys[] = {
/* modifier key function argument */
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
...
```
Save the file.
Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and will recompile dwm on login. And because dwm is so tiny, it's fast enough that you won't even notice it.
You can try locking your screen now by pressing _Alt+Shift+L_, and then logging back in by typing your password and pressing Enter.
## **Conclusion**
If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you've been looking for. However, it probably isn't for beginners. There might be a lot of additional configuration you'll need to do in order to make it just the way you like it.
To learn more about dwm, see the project's homepage at <https://dwm.suckless.org/>.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png

View File

@@ -1,188 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?)
[#]: via: (https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?
======
We recently wrote an article about checking whether a port is open on a remote Linux server. It helps you check a single server.
If you want to check five servers, no problem: you can use any one of the following commands, such as nc (netcat), nmap or telnet.
But what if you would like to check 50+ servers?
It's not practical to check them all by hand; doing so wastes a lot of time unnecessarily.
To overcome this situation, I wrote a small shell script using the nc command that allows us to scan any number of servers for a given port.
If you are looking for a single-server scan, you have multiple options. To learn more about them, simply navigate to the following URL: **[Check Whether A Port Is Open On The Remote Linux System?][1]**
There are two scripts available in this tutorial, and both scripts are useful.
The scripts serve different purposes, which you can easily understand by reading their headings.
Before you read on, ask yourself the following questions; if you do not already know the answers, you will find them in this article.
How do you check if a port is open on a remote Linux server?
How do you check if a port is open on multiple remote Linux servers?
How do you check if multiple ports are open on multiple remote Linux servers?
### What Is nc (netcat) Command?
nc stands for netcat. Netcat is a simple Unix utility which reads and writes data across network connections, using the TCP or UDP protocol.
It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts.
At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.
Netcat has three main modes of functionality. These are the connect mode, the listen mode, and the tunnel mode.
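As a quick illustration of the connect and listen modes, the sketch below starts a listener on one machine and connects to it from another; whatever you type on one side appears on the other. The addresses and port are placeholders, and option syntax varies slightly between netcat implementations (the openbsd variant is shown here; netcat-traditional expects `-l -p 1234`).

```
# On the listening machine: listen on TCP port 1234
$ nc -l 1234

# On the other machine: connect to the listener
$ nc 192.168.1.2 1234
```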
**Common Syntax for nc (netcat):**
```
$ nc [-options] [HostName or IP] [PortNumber]
```
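For a one-off check, the same flags used by the scripts below can be run by hand: `-z` reports the connection status without sending any data, `-v` enables verbose output, and `-w3` sets a 3-second timeout. The address is a placeholder for your own server; a successful check looks like this:

```
$ nc -zvw3 192.168.1.2 22
Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
```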
### How To Check If A Port Is Open On Multiple Remote Linux Servers?
Use the following shell script if you would like to check whether a given port is open on multiple remote Linux servers.
In my case, we are going to check whether port 22 is open on the following remote servers. Make sure to replace the server list with your own.
Make sure to put the server list into the `server-list.txt` file, one server per line.
```
# cat server-list.txt
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
192.168.1.6
192.168.1.7
```
Use the following script to achieve this.
```
# vi port_scan.sh
#!/bin/sh
# Check whether port 22 is open on every server in server-list.txt
for server in `cat server-list.txt`
do
#echo $server
nc -zvw3 $server 22
done
```
Set executable permission on the `port_scan.sh` file.
```
$ chmod +x port_scan.sh
```
Finally, run the script.
```
# sh port_scan.sh
Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
```
### How To Check If Multiple Ports Are Open On Multiple Remote Linux Servers?
Use the following script if you want to check multiple ports on multiple servers.
In my case, we are going to check whether ports 22 and 80 are open on the given servers. Make sure to replace the ports and server names with your own.
Make sure to put the port list into the `port-list.txt` file, one port per line.
```
# cat port-list.txt
22
80
```
Make sure to put the server list into the `server-list.txt` file, one server per line.
```
# cat server-list.txt
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
192.168.1.6
192.168.1.7
```
Use the following script to achieve this.
```
# vi multiple_port_scan.sh
#!/bin/sh
# Check every port in port-list.txt on every server in server-list.txt
for server in `cat server-list.txt`
do
for port in `cat port-list.txt`
do
#echo $server
nc -zvw3 $server $port
echo ""
done
done
```
Set executable permission on the `multiple_port_scan.sh` file.
```
$ chmod +x multiple_port_scan.sh
```
Finally, run the script.
```
# sh multiple_port_scan.sh
Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.2 80 port [tcp/http] succeeded!
Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.3 80 port [tcp/http] succeeded!
Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.4 80 port [tcp/http] succeeded!
Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.5 80 port [tcp/http] succeeded!
Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.6 80 port [tcp/http] succeeded!
Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.7 80 port [tcp/http] succeeded!
```
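Sequential checks can be slow when a host is down, because each probe waits for the full timeout. Below is a minimal sketch of a parallel variant, assuming the same `server-list.txt` and `port-list.txt` files: it backgrounds each check and then waits for all of them, so the total runtime is roughly one timeout instead of one per check. Output lines may interleave.

```
#!/bin/sh
# Run all port checks in parallel, then wait for them to finish
while read server
do
    while read port
    do
        nc -zvw3 "$server" "$port" &
    done < port-list.txt
done < server-list.txt
wait
```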
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/

View File

@@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development)
[#]: via: (https://itsfoss.com/nvidia-jetson-nano/)
[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development
======
At the [GPU Technology Conference][1], NVIDIA announced the [Jetson Nano Module][2] and the [Jetson Nano Developer Kit][3]. Compared to other Jetson boards, which cost between $299 and $1099, the Jetson Nano bears a low cost of $99. This puts it within the reach of many developers, educators, and researchers who could not spend hundreds of dollars to get such a product.
![The Jetson Nano Development Kit \(left\) and the Jetson Nano Module \(right\)][4]
### Bringing back AI development from cloud
In the last few years, we have seen a lot of [advances in AI research][5]. Traditionally, AI computing was always done in the cloud, where plenty of processing power was available.
Recently, there's been a trend toward shifting this computation away from the cloud and doing it locally. This is called [Edge Computing][6]. At the embedded level, products which could do the complex calculations required for AI and machine learning used to be sparse, but we're seeing a great explosion these days in this product segment.
Products like the [SparkFun Edge][7] and [OpenMV Board][8] are good examples. The Jetson Nano is NVIDIA's latest offering in this market. When connected to your system, it will be able to supply the processing power needed for machine learning and AI tasks without having to rely on the cloud.
This is great for privacy as well as for saving internet bandwidth. It is also more secure, since your data always stays on the device itself.
### Jetson Nano focuses on smaller AI projects
![Jetson Nano powered JetBot][9]
While previously released Jetson boards like the [TX2][10] and [AGX Xavier][11] were used in products like drones and cars, the Jetson Nano targets smaller projects, projects that need processing power which boards like the [Raspberry Pi][12] cannot provide.
Did you know?
NVIDIA's JetPack SDK provides a complete desktop Linux environment based on Ubuntu 18.04 LTS. In other words, the Jetson Nano is powered by Ubuntu Linux.
### NVIDIA Jetson Nano Specifications
For $99, you get 472 GFLOPS of processing power thanks to 128 NVIDIA Maxwell architecture CUDA cores, a quad-core ARM A57 processor, 4GB of LPDDR4 RAM, 16GB of on-board storage, and 4K video encode/decode capabilities. The port selection is also pretty decent, with the Nano offering Gigabit Ethernet, a MIPI camera connector, display outputs, and a handful of USB ports (1x 3.0, 3x 2.0). The full range of specifications can be found [here][13].
| Component | Specification |
| --- | --- |
| CPU | Quad-core ARM® Cortex®-A57 MPCore processor |
| GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores |
| RAM | 4 GB 64-bit LPDDR4 |
| Storage | 16 GB eMMC 5.1 Flash |
| Camera | 12 lanes (3x4 or 4x2) MIPI CSI-2 DPHY 1.1 (1.5 Gbps) |
| Connectivity | Gigabit Ethernet |
| Display Ports | HDMI 2.0 and DP 1.2 |
| USB Ports | 1x USB 3.0 and 3x USB 2.0 |
| Other | 1x PCIe (x1/x2/x4), 1x SDIO / 2x SPI / 6x I2C / 2x I2S / GPIOs |
| Size | 69.6 mm x 45 mm |
Along with good hardware, you get support for the majority of popular AI frameworks like TensorFlow, PyTorch, Keras, etc. It also has support for NVIDIA's [JetPack][14] and [DeepStream][15] SDKs, same as the more expensive TX2 and AGX boards.
“Jetson Nano makes AI more accessible to everyone — and is supported by the same underlying architecture and software that powers our nation's supercomputer. Bringing AI to the maker movement opens up a whole new world of innovation, inspiring people to create the next big thing,” said Deepu Talla, VP and GM of Autonomous Machines at NVIDIA.
[Subscribe to It's FOSS YouTube Channel][16]
**What do you think of Jetson Nano?**
The availability of Jetson Nano differs from country to country.
The [Intel Neural Stick][17] is also one such accelerator, competitively priced at $79. It's good to see competition stirring at these lower price points from the big manufacturers.
I'm looking forward to getting my hands on the product if possible.
What do you guys think about a product like this? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/nvidia-jetson-nano/
作者:[Atharva Lele][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/atharva/
[b]: https://github.com/lujun9972
[1]: https://www.nvidia.com/en-us/gtc/
[2]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/
[3]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/jetson-nano-family-press-image-hd.jpg?ssl=1
[5]: https://itsfoss.com/nanotechnology-open-science-ai/
[6]: https://en.wikipedia.org/wiki/Edge_computing
[7]: https://www.sparkfun.com/news/2886
[8]: https://openmv.io/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nvidia_jetson_bot.jpg?ssl=1
[10]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/
[11]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/
[12]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[13]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/#specifications
[14]: https://developer.nvidia.com/embedded/jetpack
[15]: https://developer.nvidia.com/deepstream-sdk
[16]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[17]: https://software.intel.com/en-us/movidius-ncs-get-started

View File

@@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 10 New Linux SBCs to Watch in 2019)
[#]: via: (https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019)
[#]: author: (Eric Brown https://www.linux.com/users/ericstephenbrown)
Top 10 New Linux SBCs to Watch in 2019
======
![UP Xtreme][1]
Aaeon's Linux-ready UP Xtreme SBC.
[Used with permission][2]
A recent [Global Market Insights report][3] projects that the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don't need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them [tailored for highly specific applications][4].
Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of [community-backed, open-spec SBCs][5].
Here we examine 10 of the most intriguing Linux-driven SBCs among the many products announced in the four weeks that bookended the recent [Embedded World show][6] in Nuremberg. (There was also some [interesting Linux software news][7] at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and the Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week.
Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. Mid-range models include Google's i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards.
The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages.
**[UP Xtreme][8]** —The latest in Aaeon's line of community-backed SBCs taps Intel's 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual-threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around -- and possibly the most expensive.
The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeon's new AI Core X modules, which offer Intel's latest Movidius Myriad X VPUs for 1TOPS of neural processing acceleration.
**[Jetson Nano Dev Kit][9]** —Nvidia just announced a low-end Jetson Nano compute module that's sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video and the GPU offers similar CUDA-X deep learning libraries.
Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and there's a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports.
**[Coral Dev Board][10]** —Googles very first Linux maker board arrived earlier this month featuring an NXP i.MX8M and Googles Edge TPU AI chip—a stripped-down version of Googles TPU Unit is designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources.
The Coral Dev Board combines the Edge TPU chip with NXPs quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidias Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports.
**[SBC-C43][11]** —Secos commercial, industrial temperature SBC-C43 board is the first SBC based on NXPs high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available.
The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more.
**[Nitrogen8M_Mini][12]** —This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXP's new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that you're limited to HD video resolution.
Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works.
**[Pine H64 Model B][13]** —Pine64's latest hacker board was teased in late January as part of an [ambitious roll-out][14] of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end, but low-cost Allwinner H64. The quad Cortex-A53 SoC is notable for its 4K video with HDR support.
The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch.
**[AI-ML Board][15]** —Arrow unveiled this i.MX8X based SBC early this month along with a similarly 96Boards CE Extended format, i.MX8M based Thor96 SBC. While there are plenty of i.MX8M boards these days, we're more intrigued with the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC we've seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP.
The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1.
**[BeagleBone AI][16]** —The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TI's dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoC's dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x the performance per watt compared to the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools.
Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port.
**[Robotics RB3 Platform (DragonBoard 845c)][17]** —Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based [DragonBoard 820c][18] SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a [DragonBoard 845c product page][17], and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works.
The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the board's expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449 and up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM depth cameras. The SBC runs Linux with ROS (Robot Operating System).
**[Avenger96][19]** —Like Arrow's AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC: ST's recently announced [STM32MP153][20]. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual, 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU.
This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB SPI flash, and a power management IC. It's unclear if the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module are on the module or carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. There's also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019
作者:[Eric Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/ericstephenbrown
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/aaeon_upxtreme.jpg?itok=QnwAt3mp (UP Xtreme)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://www.globenewswire.com/news-release/2019/02/13/1724445/0/en/Single-Board-Computer-Market-to-surpass-1bn-by-2025-Global-Market-Insights-Inc.html
[4]: https://www.linux.com/blog/2019/1/linux-hacker-board-trends-2018-and-beyond
[5]: http://linuxgizmos.com/catalog-of-122-open-spec-linux-hacker-boards/
[6]: https://www.embedded-world.de/en
[7]: https://www.linux.com/news/2019/2/embedded-linux-software-highlights-embedded-world
[8]: http://linuxgizmos.com/latest-up-board-combines-whiskey-lake-with-ai-core-x-modules/
[9]: http://linuxgizmos.com/trimmed-down-jetson-nano-modules-ships-on-99-linux-dev-kit/
[10]: http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with-edge-tpu-ai-chip/
[11]: http://linuxgizmos.com/first-i-mx8-quadmax-sbc-breaks-cover/
[12]: http://linuxgizmos.com/open-spec-nitrogen8m_mini-sbc-ships-along-with-new-mini-based-som/
[13]: http://linuxgizmos.com/revised-allwiner-h64-based-pine-h64-sbc-has-rpi-size-and-gpio/
[14]: https://www.linux.com/blog/2019/2/pine64-launch-open-source-phone-laptop-tablet-and-camera
[15]: http://linuxgizmos.com/arrows-latest-96boards-sbcs-tap-i-mx8x-and-i-mx8m/
[16]: http://linuxgizmos.com/beaglebone-ai-sbc-features-dual-a15-soc-with-eve-ai-cores/
[17]: http://linuxgizmos.com/robotics-kit-runs-linux-on-new-dragonboard-845c-96boards-sbc/
[18]: http://linuxgizmos.com/debian-driven-dragonboard-expands-to-96boards-extended-spec/
[19]: http://linuxgizmos.com/sandwich-style-96boards-sbc-runs-linux-on-sts-new-cortex-a7-m4-soc/
[20]: https://www.linux.com/news/2019/2/st-spins-its-first-linux-powered-cortex-soc

View File

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up Fedora Silverblue as a gaming station)
[#]: via: (https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/)
[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
How to set up Fedora Silverblue as a gaming station
======
![][1]
This article gives you a step-by-step guide to turning your Fedora Silverblue into an awesome gaming station with the help of Flatpak and Steam.
Note: Do you need the NVIDIA proprietary driver on Fedora 29 Silverblue for a complete experience? Check out [this blog post][2] for pointers.
### Add the Flathub repository
This process starts with a clean Fedora 29 Silverblue installation with a user already created for you.
First, go to <https://flathub.org/home> and enable the Flathub repository on your system. To do this, click the _Quick setup_ button on the main page.
![Quick setup button on flathub.org/home][3]
This redirects you to <https://flatpak.org/setup/> where you should click on the Fedora icon.
![Fedora icon on flatpak.org/setup][4]
Now you just need to click on _Flathub repository file._ Open the downloaded file with the _Software Install_ application.
![Flathub repository file button on flatpak.org/setup/Fedora][5]
The GNOME Software application opens. Next, click on the _Install_ button. This action needs _sudo_ permissions, because it installs the Flathub repository for use by the whole system.
![Install button in GNOME Software][6]
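If you prefer the terminal, the same repository can be added with a single command instead of the GUI steps above. This is the standard Flathub setup command, and it likewise needs administrative privileges:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```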
### Install the Steam flatpak
You can now search for the _Steam_ flatpak in _GNOME Software_. If you can't find it, try rebooting — or logging out and back in — in case _GNOME Software_ didn't read the metadata. That happens automatically when you next log in.
![Searching for Steam][7]
Click on the _Steam_ row and the _Steam_ page opens in _GNOME Software._ Next, click on _Install_.
![Steam page in GNOME Software][8]
And now you have the _Steam_ flatpak installed on your system.
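If you set up the Flathub remote earlier, the same installation also works from a terminal; _com.valvesoftware.Steam_ is Steam's application ID on Flathub:
```
flatpak install flathub com.valvesoftware.Steam
flatpak run com.valvesoftware.Steam
```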
### Enable Steam Play in Steam
Now that you have _Steam_ installed, launch it and log in. To play Windows games too, you need to enable _Steam Play_ in _Steam._ To enable it, choose _Steam > Settings_ from the menu in the main window.
![Settings button in Steam][9]
Navigate to the _Steam Play_ section. You should see the option _Enable Steam Play for supported titles_ is already ticked, but it's recommended you also tick the _Enable Steam Play for all other titles_ option. There are plenty of games that are actually playable, but not yet whitelisted on _Steam_. To see which games are playable, visit [ProtonDB][10] and search for your favorite game. Or just look for the games with the most platinum reports.
![Steam Play settings menu on Steam][11]
If you want to know more about Steam Play, you can read the [article][12] about it here on Fedora Magazine:
> [Play Windows games on Fedora with Steam Play and Proton][12]
### Appendix
You're now ready to play plenty of games on Linux. Please remember to share your experience with others using the _Contribute_ button on [ProtonDB][10] and report bugs you find on [GitHub][13], because sharing is nice. 🙂
* * *
_Photo by _[ _Hardik Sharma_][14]_ on _[_Unsplash_][15]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/
作者:[Michal Konečný][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zlopez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-gaming-816x345.jpg
[2]: https://blogs.gnome.org/alexl/2019/03/06/nvidia-drivers-in-fedora-silverblue/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-29-00.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-36-35-1024x713.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-45-12.png
[6]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-57-37.png
[7]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-08-21.png
[8]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-13-59-1024x769.png
[9]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-30-20.png
[10]: https://www.protondb.com/
[11]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-41-53.png
[12]: https://fedoramagazine.org/play-windows-games-steam-play-proton/
[13]: https://github.com/ValveSoftware/Proton
[14]: https://unsplash.com/photos/I7rXyzBNVQM?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[15]: https://unsplash.com/search/photos/video-game-laptop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,50 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Contribute at the Fedora Test Day for Fedora Modularity)
[#]: via: (https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/)
[#]: author: (Sumantro Mukherjee https://fedoramagazine.org/author/sumantrom/)
Contribute at the Fedora Test Day for Fedora Modularity
======
![][1]
Modularity lets you keep the right version of an application, language runtime, or other software on your Fedora system even as the operating system is updated. You can read more about Modularity in general on the [Fedora documentation site][2].
The Modularity folks have been working on Modules for everyone. As a result, the Fedora Modularity and QA teams have organized a test day for **Tuesday, March 26, 2019**. Refer to the [wiki page][3] for links to the test images you'll need to participate. Read on for more information on the test day.
### How do test days work?
A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you've never contributed before, this is a perfect way to get started.
To contribute, you only need to be able to do the following things:
* Download test materials, which include some large files
* Read and follow directions step by step
The [wiki page][3] for the modularity test day has a lot of good information on what and how to test. After you've done some testing, you can log your results in the test day [web application][4]. If you're available on or around the day of the event, please do some testing and report your results.
Happy testing, and we hope to see you on test day.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/
作者:[Sumantro Mukherjee][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/sumantrom/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2015/03/test-days-945x400.png
[2]: https://docs.fedoraproject.org/en-US/modularity/
[3]: https://fedoraproject.org/wiki/Test_Day:2019-03-26_Modularity_Test_Day
[4]: http://testdays.fedorainfracloud.org/events/61

View File

@ -1,222 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Vim: The basics)
[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
Getting started with Vim: The basics
======
Learn to use Vim enough to get by at work or for a new project.
![Person standing in front of a giant computer screen with numbers, data][1]
I remember the very first time I encountered Vim. I was a university student, and the computers in the computer science department's lab were installed with Ubuntu Linux. While I had been exposed to different Linux variations (like RHEL) even before my college years (Red Hat sold its CDs at Best Buy!), this was the first time I needed to use the Linux operating system regularly, because my classes required me to do so. Once I started using Linux, like many others before and after me, I began to feel like a "real programmer."
![Real Programmers comic][2]
Real Programmers, by [xkcd][3]
Students could use a graphical text editor like [Kate][4], which was installed on the lab computers by default. For students who could use the shell but weren't used to the console-based editor, the popular choice was [Nano][5], which provided good interactive menus and an experience similar to Windows' graphical text editor.
I used Nano sometimes, but I heard awesome things about [Vi/Vim][6] and [Emacs][7] and really wanted to give them a try (mainly because they looked cool, and I was also curious to see what was so great about them). Using Vim for the first time scared me—I did not want to mess anything up! But once I got the hang of it, things became much easier and I could appreciate the editor's powerful capabilities. As for Emacs, well, I sort of gave up, but I'm happy I stuck with Vim.
In this article, I will walk through Vim (based on my personal experience) just enough so you can get by with it as an editor on a Linux system. This will neither make you an expert nor even scratch the surface of many of Vim's powerful capabilities. But the starting point always matters, and I want to make the beginning experience as easy as possible, and you can explore the rest on your own.
### Step 0: Open a console window
Before jumping into Vim, you need to do a little preparation. Open a console terminal from your Linux operating system. (Since Vim is also available on MacOS, Mac users can use these instructions as well.)
Once a terminal window is up, type the **ls** command to list the current directory. Then, type **mkdir Tutorial** to create a new directory called **Tutorial**. Go inside the directory by typing **cd Tutorial**.
![Create a folder][8]
That's it for preparation. Now it's time to move on to the fun part—starting to use Vim.
### Step 1: Create and close a Vim file without saving
Remember when I said I was scared to use Vim at first? Well, the scary part was thinking, "what if I change an existing file and mess things up?" After all, several computer science assignments required me to work on existing files by modifying them. I wanted to know: _How can I open and close a file without saving my changes?_
The good news is you can use the same command to create or open a file in Vim: **vim <FILE_NAME>**, where **<FILE_NAME>** represents the target file name you want to create or modify. Let's create a file named **HelloWorld.java** by typing **vim HelloWorld.java**.
Hello, Vim! Now, here is a very important concept in Vim, possibly the most important to remember: Vim has multiple modes. Here are three you need to know to do Vim basics:
Mode | Description
---|---
Normal | Default; for navigation and simple editing
Insert | For explicitly inserting and modifying text
Command Line | For operations like saving, exiting, etc.
Vim has other modes, like Visual, Select, and Ex-Mode, but Normal, Insert, and Command Line modes are good enough for us.
You are now in Normal mode. If you have text, you can move around with your arrow keys or other navigation keystrokes (which you will see later). To make sure you are in Normal mode, simply hit the **Esc** (Escape) key.
> **Tip:** **Esc** switches to Normal mode. Even though you are already in Normal mode, hit **Esc** just for practice's sake.
Now, this will be interesting. Press **:** (the colon key) followed by **q!** (i.e., **:q!**). Your screen will look like this:
![Editing Vim][9]
Pressing the colon in Normal mode switches Vim to Command Line mode, and the **:q!** command quits the Vim editor without saving. In other words, you are abandoning all changes. You can also use **ZQ**; choose whichever option is more convenient.
Once you hit **Enter**, you should no longer be in Vim. Repeat the exercise a few times, just to get the hang of it. Once you've done that, move on to the next section to learn how to make a change to this file.
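To recap, here is the whole round trip for this step (the first command runs in your shell, the second is typed inside Vim, and the text after each command is commentary, not something you type):
```
vim HelloWorld.java    create or open the file
:q!                    quit without saving (ZQ also works)
```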
### Step 2: Make and save modifications in Vim
Reopen the file by typing **vim HelloWorld.java** and pressing the **Enter** key. Insert mode is where you can make changes to a file. First, hit **Esc** to make sure you are in Normal mode, then press **i** to go into Insert mode. (Yes, that is the letter **i**.)
In the lower-left, you should see **-- INSERT --**. This means you are in Insert mode.
![Vim insert mode][10]
Type some Java code. You can type anything you want, but here is an example for you to follow. Your screen will look like this:
```
public class HelloWorld {
public static void main(String[] args) {
}
}
```
Very pretty! Notice how the text is highlighted with Java syntax colors. Because the file name ends in .java, Vim detects the filetype and applies the matching syntax highlighting.
Save the file. Hit **Esc** to leave Insert mode and enter Command Line mode. Type **:** and follow that with **x!** (i.e., a colon followed by x and !). Hit **Enter** to save the file. You can also type **wq** to perform the same operation.
Now you know how to enter text using Insert mode and save the file using **:x!** or **:wq**.
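Here is the edit-and-save cycle from this step in one place (again, the text after each key is commentary only):
```
i      enter Insert mode and type your text
Esc    return to Normal mode
:x!    save and quit (:wq does the same)
```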
### Step 3: Basic navigation in Vim
While you can always use your friendly Up, Down, Left, and Right arrow buttons to move around a file, that would be very difficult in a large file with almost countless lines. It's also helpful to be able to jump around within a line. Although Vim has a ton of awesome navigation features, the first one I want to show you is how to go to a specific line.
Press the **Esc** key to make sure you are in Normal mode, then type **:set number** and hit **Enter**.
Voila! You see line numbers on the left side of each line.
![Showing Line Numbers][12]
OK, you may say, "that's cool, but how do I jump to a line?" Again, make sure you are in Normal mode, then press **:<LINE_NUMBER>**, where **<LINE_NUMBER>** is the number of the line you want to go to, and press **Enter**. Try moving to line 2.
```
:2
```
Now move to line 3.
![Jump to line 3][13]
But imagine a scenario where you are dealing with a file that is 1,000 lines long and you want to go to the end of the file. How do you get there? Make sure you are in Normal mode, then type **:$** and press **Enter**.
You will be on the last line!
Now that you know how to jump among the lines, as a bonus, let's learn how to move to the end of a line. Make sure you are on a line with some text, like line 3, and type **$**.
![Go to the last character][14]
You're now at the last character on the line. In this example, the open curly brace is highlighted to show where your cursor moved to, and the closing curly brace is highlighted because it is the opening curly brace's matching character.
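Here is a recap of the navigation commands covered so far, plus two more standard Vim motions, **gg** and **G**, which are not covered above but are handy companions:
```
:set number    show line numbers
:2             jump to line 2
:$             jump to the last line of the file
$              jump to the end of the current line
gg             jump to the first line of the file
G              jump to the last line of the file
```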
That's it for basic navigation in Vim. Wait, don't exit the file, though. Let's move on to basic editing in Vim. Feel free to grab a cup of coffee or tea first.
### Step 4: Basic editing in Vim
Now that you know how to navigate around a file by hopping onto the line you want, you can use that skill to do some basic editing in Vim. Switch to Insert mode. (Remember how to do that, by hitting the **i** key?) Sure, you can edit by using the keyboard to delete or insert characters, but Vim offers much quicker ways to edit files.
Move to line 3, where it shows **public static void main(String[] args) {**. Quickly hit the **d** key twice in succession. Yes, that is **dd**. If you did it successfully, you will see a screen like this, where line 3 is gone, and every following line moved up by one (i.e., line 4 became line 3).
![Deleting A Line][15]
That's the _delete_ command. Don't fear! Hit **u** and you will see the deleted line recovered. Whew. This is the _undo_ command.
![Undoing a change in Vim][16]
The next lesson is learning how to copy and paste text, but first, you need to learn how to highlight text in Vim. Press **v** and move your Left and Right arrow buttons to select and deselect text. This feature is also very useful when you are showing code to others and want to identify the code you want them to see.
![Highlighting text in Vim][17]
Move to line 4, where it says **System.out.println("Hello, Opensource");**. Highlight all of line 4. Done? OK, while line 4 is still highlighted, press **y**. This is called _yank_ mode, and it will copy the text to the clipboard. Next, create a new line underneath by entering **o**. Note that this will put you into Insert mode. Get out of Insert mode by pressing **Esc**, then hit **p**, which stands for _paste_. This will paste the copied text onto the new line.
![Pasting in Vim][18]
As an exercise, repeat these steps but also modify the text on your newly created lines. Also, make sure the lines are aligned well.
> **Hint:** You need to switch back and forth between Insert mode and Normal mode to accomplish this task.
Once you are finished, save the file with the **:x!** command. That's all for basic editing in Vim.
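For reference, here are the editing keys from this step, all issued from Normal mode:
```
dd    delete (cut) the current line
u     undo the last change
v     start highlighting (visual selection)
y     yank (copy) the highlighted text
o     open a new line below and switch to Insert mode
p     paste the yanked text
```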
### Step 5: Basic searching in Vim
Imagine your team lead wants you to change a text string in a project. How can you do that quickly? You might want to search for the line using a certain keyword.
Vim's search functionality can be very useful. Make sure you are in Normal mode by pressing the **Esc** key, then search for a keyword by entering **/<SEARCH_KEYWORD>**, where **<SEARCH_KEYWORD>** is the text string you want to find, and pressing **Enter**. Here we are searching for the keyword string "Hello." Note that search starts with a forward slash, not a colon.
![Searching in Vim][19]
However, a keyword can appear more than once, and this may not be the one you want. So, how do you navigate around to find the next match? You simply press the **n** key, which stands for _next_. Make sure that you aren't in Insert mode when you do this!
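A minimal search session therefore looks like this (**N**, the reverse of **n**, is standard Vim but was not covered above):
```
/Hello    search forward for "Hello"
n         jump to the next match
N         jump to the previous match
```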
### Bonus step: Use split mode in Vim
That pretty much covers all the Vim basics. But, as a bonus, I want to show you a cool Vim feature called _split mode_.
Get out of _HelloWorld.java_ and create a new file. In a terminal window, type **vim GoodBye.java** and hit **Enter** to create a new file named _GoodBye.java_.
Enter any text you want; I decided to type "Goodbye." Save the file. (Remember you can use **:x!** or **:wq** in Command Line mode.)
In Command Line mode, type **:split HelloWorld.java**, and see what happens.
![Split mode in Vim][20]
Wow! Look at that! The **split** command created horizontally divided windows with _HelloWorld.java_ above and _GoodBye.java_ below. How can you switch between the windows? Hold **Ctrl** (**Control** on a Mac) and hit **w** twice in succession (i.e., **Ctrl+w w**).
As a final exercise, try to edit _GoodBye.java_ to match the screen below by copying and pasting from _HelloWorld.java_.
![Modify GoodBye.java file in Split Mode][21]
Save both files, and you are done!
> **TIP 1:** If you want to arrange the files vertically, use the command **:vsplit <FILE_NAME>** (instead of **:split <FILE_NAME>**), where **<FILE_NAME>** is the name of the file you want to open in Split mode.
>
> **TIP 2:** You can open more than two files by calling as many additional **split** or **vsplit** commands as you want. Try it and see how it looks.
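Here are the split commands and the window-switching key from this section, collected in one place:
```
:split HelloWorld.java     open HelloWorld.java in a horizontal split
:vsplit HelloWorld.java    open HelloWorld.java in a vertical split
Ctrl+w w                   cycle between split windows
```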
### Vim cheat sheet
In this article, you learned how to use Vim just enough to get by for work or a project. But this is just the beginning of your journey to unlock Vim's powerful capabilities. Be sure to check out other great tutorials and tips on Opensource.com.
To make things a little easier, I've summarized everything you've learned into [a handy cheat sheet][22].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/getting-started-vim
作者:[Bryant Son (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://opensource.com/sites/default/files/uploads/1_xkcdcartoon.jpg (Real Programmers comic)
[3]: https://xkcd.com/378/
[4]: https://kate-editor.org
[5]: https://www.nano-editor.org
[6]: https://www.vim.org
[7]: https://www.gnu.org/software/emacs
[8]: https://opensource.com/sites/default/files/uploads/2_createtestfolder.jpg (Create a folder)
[9]: https://opensource.com/sites/default/files/uploads/4_existingvim.jpg (Editing Vim)
[10]: https://opensource.com/sites/default/files/uploads/6_insertionmode.jpg (Vim insert mode)
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[12]: https://opensource.com/sites/default/files/uploads/10_setnumberresult_0.jpg (Showing Line Numbers)
[13]: https://opensource.com/sites/default/files/uploads/12_jumpintoline3.jpg (Jump to line 3)
[14]: https://opensource.com/sites/default/files/uploads/14_gotolastcharacter.jpg (Go to the last character)
[15]: https://opensource.com/sites/default/files/uploads/15_deletinglines.jpg (Deleting A Line)
[16]: https://opensource.com/sites/default/files/uploads/16_undoingtheline.jpg (Undoing a change in Vim)
[17]: https://opensource.com/sites/default/files/uploads/17_highlighting.jpg (Highlighting text in Vim)
[18]: https://opensource.com/sites/default/files/uploads/19_pasting.jpg (Pasting in Vim)
[19]: https://opensource.com/sites/default/files/uploads/22_searchmode.jpg (Searching in Vim)
[20]: https://opensource.com/sites/default/files/uploads/26_copytonewfiles.jpg (Split mode in Vim)
[21]: https://opensource.com/sites/default/files/uploads/27_exercise.jpg (Modify GoodBye.java file in Split Mode)
[22]: https://opensource.com/downloads/cheat-sheet-vim

View File

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Open Source Is Accelerating NFV Transformation)
[#]: via: (https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation)
[#]: author: (Pam Baker https://www.linux.com/users/pambaker)
How Open Source Is Accelerating NFV Transformation
======
![NFV][1]
In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, about the role of open source in innovation for telecommunications service providers.
[Creative Commons Zero][2]
Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of [open source as the path to innovation][3] resonates on many levels.
In anticipation of the upcoming [Open Networking Summit][4], we talked with [Thomas Nadeau][5], Technical Director NFV at Red Hat, who gave a [keynote address][6] at last year's event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.
One reason for open source's broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.
“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”
Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.
**Linux.com: Why is open source central to innovation in general for telecommunications service providers?**
**Nadeau:** The first reason is that the service providers can be more in control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.
And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that.
**Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.**
**Nadeau:** Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not just to expand, but to exist in today's marketplace. Without open source in that virtualization space, you're stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.
There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.
NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.
You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.
But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.
**Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.**
**Nadeau:** Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, flexible, and scalable, so operators can really use it as a tool now.
**Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?**
**Nadeau:** Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code, which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors' businesses.
These telcos are taking a real “in-depth, roll up your sleeves” approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.
_Learn more at [Open Networking Summit][4], happening April 3-5 at the San Jose McEnery Convention Center._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation
作者:[Pam Baker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/pambaker
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/nfv-443852_1920.jpg?itok=uFbzmEPY (NFV)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linuxfoundation.org/blog/2018/02/open-source-standards-team-red-hat-measures-open-source-success/
[4]: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/
[5]: https://www.linkedin.com/in/tom-nadeau/
[6]: https://onseu18.sched.com/event/Fmpr

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An inside look at an IIoT-powered smart factory)
[#]: via: (https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
An inside look at an IIoT-powered smart factory
======
### Despite housing some 50 robots and 50 people, Tempo Automation's gleaming connected factory relies on industrial IoT and looks more like a high-tech startup office than a manufacturing plant.
![Tempo Automation][1]
As someone who's spent his whole career working in offices, not factories, I had very little idea what a modern “smart factory” powered by the industrial Internet of Things (IIoT) might look like. That's why I was so interested in [Tempo Automation][2]'s new 42,000-square-foot facility in San Francisco's trendy Design District.
Frankly, I pictured the company's facility, which uses IIoT to automatically configure, operate, and monitor the prototyping and low-volume production of printed circuit board assemblies (PCBAs), as a cacophony of robots and conveyor belts attended to by a grizzled band of grease-stained technicians. You know, a 21st-century update of Charlie Chaplin's 1936 classic *Modern Times*, making equipment for customers in the aerospace, medtech, industrial automation, consumer electronics, and automotive industries. (The company just inked a [new contract with Lockheed Martin][3].)
**[ Learn more about the [industrial Internet of Things][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
Not exactly. As you can see from the pictures below, despite housing some 50 robots and 50 people, this gleaming “connected factory” looks more like a high-tech startup office, with just as many computers and a few more hard-to-identify machines, including Solder Jet and Stencil Printers, zone reflow ovens, 3D X-ray devices, and many more.
![Tempo Automation office space][6]
![Tempo Automation factory floor][7]
## How Tempo Automation's 'smart factory' works
On the front end, Tempo's customers upload CAD files with their board designs and Bills of Materials (BOM) listing the required parts to be used. After performing feature extraction on the design and developing a virtual model of the finished product, the Tempo platform (called Tempocom) creates a manufacturing plan and automatically programs the factory's machines. Tempocom also creates work plans for the factory employees, uploading them to the networked IIoT mobile devices they all carry. Updated in real time based on design and process changes, this “digital traveler” tells workers where to go and what to work on next.
While Tempocom is planning and organizing the internal work of production, the system is also connected to supplier databases, seeking and ordering the parts that will be used in assembly, optimizing for speed of delivery to the Tempo factory.
## Connecting the digital thread
“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained [Shashank Samala][8], Tempo's co-founder and vice president of product, in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.”
Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses [Amazon Web Services (AWS) GovCloud][9] to network everything in a bi-directional feedback loop.
“After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. “This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production. This data is then streamed back through the Tempo secure cloud architecture to the customer as a Production Forensics report.”
Samala claimed the system has “streamlined operations, improved collaboration, and simplified remote management and control.”
## Traditional IoT, too
Of course, the Tempo factory isn't all fancy, cutting-edge IIoT implementations. According to Ryan Saul, vice president of manufacturing, the plant also includes an array of IoT sensors that track temperature, humidity, equipment status, job progress, reported defects, and so on to help engineers and executives understand how the facility is operating.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-floor-100791923-large.jpg
[2]: http://www.tempoautomation.com/
[3]: https://www.businesswire.com/news/home/20190325005097/en/Tempo-Automation-Announces-Contract-Lockheed-Martin
[4]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[6]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-2-100791921-large.jpg
[7]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-100791922-large.jpg
[8]: https://www.linkedin.com/in/shashanksamala/
[9]: https://aws.amazon.com/govcloud-us/
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Changes in SD-WAN Purchase Drivers Show Maturity of the Technology)
[#]: via: (https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all)
[#]: author: (Cliff Grossner https://www.networkworld.com/author/Cliff-Grossner/)
Changes in SD-WAN Purchase Drivers Show Maturity of the Technology
======
![istock][1]
[SD-WANs][2] have been available now for the past five years, but adoption has been light compared to that of the overall WAN market. This should be no surprise, as the technology was immature, and customers were dipping their toes in the water first as a test. Recently, however, there are signs that the market is maturing, which also happens to coincide with an acceleration of the market.
Evidence of the maturation of SD-WANs can be seen in the most recent IHS Markit _Campus LAN and WAN SDN Strategies and Leadership North American Enterprise Survey_. Exhibit 1 shows that the top drivers of SD-WAN deployments are the simplification of WAN provisioning, automation capabilities, and direct cloud connectivity—all of which require an architectural change.
This is in stark contrast to the approach of early adopters, who looked for opex and capex savings, achieving them by shifting to cheap broadband and low-cost branch hardware. The survey data finds that opex savings now ranks tied for fifth place among SD-WAN purchase drivers, and that reduced capex ranks last, indicating that cost savings no longer carry the weight they did with early adopters.
The shift in purchase drivers indicates companies are looking for SD-WAN to provide more value than legacy WAN.
With [SD-WAN][3], the “software defined” indicates that the control plane has been separated from the data plane, enabling the control plane to be abstracted away from the hardware and allowing centralized, distributed, and hybrid control architectures, working alongside the centralized management of those architectures. This provides many benefits, the biggest of which is to make WAN provisioning easier.
![Exhibit 1: Simplification and automation are top drivers for SD-WAN.][4]
With SD-WAN, most mainstream buyers now demand Zero Touch Provisioning, where the SD-WAN appliance automatically calls home when it attaches to the network and pulls its configuration down from a centralized location. Also, changes can be made through a centralized console and then immediately pushed out to every device. This can automate many of the mundane and repetitive tasks associated with running a network.
Such a setup carries many benefits—the most important being that highly skilled network engineers can dedicate more time to innovation and less time to working on tasks associated with “keeping the lights on.”
At present, most resources—time and money—associated with running the WAN are allocated to maintaining the status quo. In the cloud era, however, business leaders embracing digital transformation are looking to their IT organization to help drive innovation and leapfrog the competition. SD-WANs can modernize the network, and the technology will tip the IT resource scale back in favor of innovation.
### Mainstream buyers set new expectations for SD-WAN
With early adopters, technology innovation is key because adopters are generally tech-savvy buyers and are always looking to use the latest and greatest to gain an edge. With mainstream buyers, other concerns arise. Exhibit 2 from the IHS Markit survey shows that technological innovation now ranks tied in fourth place in what buyers look for from an SD-WAN provider. While innovation is still important, factors such as security, financial stability, and product service and reliability rank higher. And although businesses need a strong technical solution, it cannot be achieved at the expense of security, vendor stability, or quality without putting operations at risk.
It's not surprising, then, that security turned out to be the overwhelming top evaluation criterion, as SD-WANs enable businesses to implement local internet breakout and cloud on-ramp features. Overall, SD-WANs help make applications perform better, especially as enterprises deploy workloads in off-premises, cloud-service-provider-operated data centers as they build their hybrid and multi-clouds.
Another security capability of SD-WANs is their ability to easily implement segmentation, which enables businesses to establish centrally defined and globally consistent security policies that isolate traffic. For example, a retailer could isolate point-of-sale systems from its guest Wi-Fi network. [SD-WAN vendors][5] can also establish partnerships with well-known security vendors that enable the SD-WAN software to be service chained into application traffic flows, in the process allowing mainstream buyers their choice of security technology.
![Exhibit 2: SD-WAN buyers now want security and financially viable vendors.][6]
### The bottom line
The SD-WAN market is maturing, and the shift from early adopters to mainstream businesses will create a “rising tide” that will benefit all SD-WAN buyers in the WAN ecosystem. As a result, vendors will work to meet calls emphasizing greater simplicity and risk reduction, as well as bring about features that provide an integrated connectivity fabric for enterprise edge, hybrid, and multi-clouds.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all
作者:[Cliff Grossner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Cliff-Grossner/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/istock-998475736-100791932-large.jpg
[2]: https://www.silver-peak.com/sd-wan
[3]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[4]: https://images.idgesg.net/images/article/2019/03/chart-1_post-10-100791930-large.jpg
[5]: https://www.silver-peak.com/sd-wan/choosing-an-sd-wan-vendor
[6]: https://images.idgesg.net/images/article/2019/03/chart-2_post-10-100791931-large.jpg

View File

@ -0,0 +1,52 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Todays Retailer is Turning to the Edge for CX)
[#]: via: (https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all)
[#]: author: (Cindy Waxer https://www.networkworld.com/author/Cindy-Waxer/)
Today's Retailer is Turning to the Edge for CX
======
### Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the U.S. Census.
![iStock][1]
Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the [U.S. Census][2]. That's putting enormous pressure on retailers to meet new consumer expectations around real-time access to merchandise and order information. In fact, 85.3% of shoppers expect retailers to provide associates with handheld or fixed devices to check inventory and price within a store, a nearly 51% increase over 2017, according to a [survey from SOTI][3].
With an eye on transforming the customer experience of spending time in a store, retailers are investing aggressively in compute power located closer to the buyer, also known as [edge computing][4].
So what new and innovative technologies are edge environments supporting? Heres where retail is headed with customer service and how edge computing will help them get there.
**Face forward**: Facial recognition technology is on the rise in retail as brands search for new ways to engage customers. Take CaliBurger, for example. The restaurant chain recently tested self-ordering kiosks that use AI and facial-recognition technology to identify registered customers and pull up their loyalty accounts and order preferences. By automatically displaying a customer's most popular purchases, the system aims to help patrons complete their orders in seconds flat for greater speed and convenience.
**Customer experience on display**: Forget about traditional counter displays. Savvy retailers are experimenting with high-tech, in-store digital signage solutions to attract consumers and gather valuable data. For instance, Glass Media's projection-based, end-to-end digital retail signage combines display technology, a cloud-based IoT platform, and data analytic capabilities. Through projection, the solution can influence customers at the point of decision.
**Backroom access**: Tracking inventory manually requires substantial human resources. IoT-powered backroom technologies such as RFID, real-time point of sale (POS), and smart shelving systems promise to change that by improving the accuracy of inventory tracking throughout the supply chain. These automated solutions can track and reorder items automatically, eliminating the need for humans to take inventory and reducing the risk of product shortages.
**Robots to the rescue**: Hoping to transform the branch experience, HSBC recently unveiled Pepper, a concierge robot whose job is to help customers with simple tasks, from answering commonly asked questions to directing them to available tellers. Pepper also acts as an online banking station where customers can log into their mobile banking account or access information about products. By putting Pepper on the payroll, HSBC hopes to reduce customer wait times and free up its “human” bankers.
These innovative technologies provide retailers with unique opportunities to enhance customer experience, develop new revenue streams, and boost customer loyalty. But many of them require edge computing to work properly. Bandwidth-intensive content and vast volumes of data can lead to latency issues, outages, and other IT headaches. Fortunately, by placing computing power and storage capabilities directly on the edge of the network, edge computing can help retailers deliver the best customer experience possible.
To find out more about how edge computing is transforming the customer experience in retail, visit [APC.com][5].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all
作者:[Cindy Waxer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Cindy-Waxer/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/istock-508154656-100791924-large.jpg
[2]: https://ycharts.com/indicators/ecommerce_sales_as_percent_retail_sales
[3]: https://www.soti.net/resources/newsroom/2019/annual-connected-retailer-survey-new-soti-survey-reveals-us-consumers-prefer-speed-and-convenience-when-shopping-with-limited-human-interaction/
[4]: https://www.hpe.com/us/en/servers/edgeline-iot-systems.html?pp=false&jumpid=ps_83cqske5um_aid-510380402&gclid=CjwKCAjw6djYBRB8EiwAoAF6oWwk-M6LWcfCbbZ331fXhEHShXGbLWoSwTIzue6mxQg4gDvYx59XZxoC_4oQAvD_BwE&gclsrc=aw.ds
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

View File

@ -1,154 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Square Brackets in Bash: Part 1)
[#]: via: (https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
Using Square Brackets in Bash: Part 1
======
![square brackets][1]
This tutorial tackles square brackets and how they are used in different contexts at the command line.
[Creative Commons Zero][2]
After taking a look at [how curly braces (`{}`) work on the command line][3], now its time to tackle brackets (`[]`) and see how they are used in different contexts.
### Globbing
The first and easiest use of square brackets is in _globbing_. You have probably used globbing before without knowing it. Think of all the times you have listed files of a certain type; say you wanted to list JPEGs, but not PNGs:
```
ls *.jpg
```
Using wildcards to get all the results that fit a certain pattern is precisely what we call globbing.
In the example above, the asterisk means "_zero or more characters_". There is another globbing wildcard, `?`, which means "_exactly one character_", so, while
```
ls d*k*
```
will list files called _darkly_ and _ducky_ (and _dark_ and _duck_, since `*` can also match zero characters),
```
ls d*k?
```
will not list _darkly_ (or _dark_ or _duck_), but it will list _ducky_.
Square brackets are used in globbing for sets of characters. To see what this means, make a directory in which to carry out tests, `cd` into it, and create a bunch of files like this:
```
touch file0{0..9}{0..9}
```
(If you don't know why that works, [take a look at the last installment that explains curly braces `{}`][3]).
This will create files _file000_, _file001_, _file002_, etc., through _file097_, _file098_, and _file099_.
Then, to list the files in the 70s and 80s, you can do this:
```
ls file0[78]?
```
To list _file022_, _file027_, _file028_, _file052_, _file057_, _file058_, _file092_, _file097_, and _file098_, you can do this:
```
ls file0[259][278]
```
Of course, you can use globbing (and square brackets for sets) for more than just `ls`. You can use globbing with any other tool for listing, removing, moving, or copying files, although the last two may require a bit of lateral thinking.
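For instance, here is a quick sketch of my own (not from the original article; run it only inside the throwaway test directory created above) that removes the files in the 90s and moves the files in the 30s into a subdirectory:

```
rm file09?
mkdir thirties
mv file03? thirties/
```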
Let's say you want to create duplicates of files _file010_ through _file029_ and call the copies _archive010_, _archive011_, _archive012_, etc.
You can't do:
```
cp file0[12]? archive0[12]?
```
Because globbing is for matching against existing files and directories and the _archive..._ files don't exist yet.
Doing this:
```
cp file0[12]? archive0[1..2][0..9]
```
won't work either, because `cp` doesn't let you copy many files to many other new files. Copying many files only works if you are copying them to a directory, so this:
```
mkdir archive
cp file0[12]? archive
```
would work, but it would copy the files, keeping their original names, into a directory called _archive/_. This is not what you set out to do.
However, if you look back at [the article on curly braces (`{}`)][3], you will remember how you can use `%` to lop off the end of a string contained in a variable.
Of course, there is also a way to lop off the beginning of a string contained in a variable. Instead of `%`, you use `#`.
For practice, you can try this:
```
myvar="Hello World"
echo Goodbye Cruel ${myvar#Hello}
```
It prints "_Goodbye Cruel World_" because `#Hello` gets rid of the _Hello_ part at the beginning of the string stored in `myvar`.
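For symmetry, here is the `%` version of the same trick (my own example, not from the original): it lops off the _end_ of the string instead of the beginning:

```
myvar="Hello World"
echo ${myvar%World}Friends
```

This prints "_Hello Friends_" because `%World` removes _World_ from the end of the value stored in `myvar`.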
You can use this feature alongside your globbing tools to make your _archive_ duplicates:
```
for i in file0[12]?; \
do \
  cp $i archive${i#file}; \
done
```
The first line tells the Bash interpreter that you want to loop through all the files whose names consist of _file0_, followed by the digit _1_ or _2_, and then one other character, which can be anything. The second line, `do`, indicates that what follows is the instruction or list of instructions you want the interpreter to loop through.
Line 3 is where the actual copying happens, and you use the contents of the loop variable _`i`_ twice: first, straight out, as the first parameter of the `cp` command, and then you add _archive_ to its contents while at the same time cutting off _file_. So, if _`i`_ contains, say, _file019_...
```
"archive" + "file019" - "file" = "archive019"
```
the `cp` line is expanded to this:
```
cp file019 archive019
```
Finally, notice how you can use the backslash `\` to split a chain of commands over several lines for clarity. Just make sure a space or semicolon comes before each backslash; otherwise, the fragments will run together when Bash rejoins the lines.
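Once the multi-line version makes sense, the whole loop can also be written on a single line (my restatement, not the article's):

```
for i in file0[12]?; do cp $i archive${i#file}; done
```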
In part two, well look at more ways to use square brackets. Stay tuned.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-gabriele-diwald-475007-unsplash.jpg?itok=cKmysLfd (square brackets)
[2]: https://www.linux.com/LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash

View File

@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco forms VC firm looking to weaponize fledgling technology companies)
[#]: via: (https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco forms VC firm looking to weaponize fledgling technology companies
======
### Decibel, an investment firm focused on early stage funding for enterprise-product startups, will back technologies related to Cisco's core interests.
![BrianaJackson / Getty][1]
Cisco this week stepped deeper into the venture capital world by announcing Decibel, an early-stage investment firm that will focus on bringing enterprise-oriented startups to market.
Veteran VC groundbreaker and former general partner at New Enterprise Associates [Jon Sakoda][2] will lead Decibel. Sakoda had been with NEA since 2006 and focused on startup investments in software and Internet companies.
**[ Now see[7 free network tools you must have][3]. ]**
Of Decibel, Sakoda said: “We want to invest in companies that are helping our customers use innovation as a weapon in the game to transform their respective industries.”
“Decibel combines the speed, agility, and independent risk-taking traditionally found in the best VC firms, while offering differentiated access to the scale, entrepreneurial talent, and deep customer relationships found in one of the largest tech companies in the world,” [Sakoda said][4]. “This approach is an industry first and provides a unique way for entrepreneurs to get access to unparalleled resources at a time and stage when they need it most.”
“As one of the most prolific strategic venture capitalists in the world, Cisco already has a view into future technologies shaping our markets through our rich portfolio of companies,” wrote Rob Salvagno, vice president of Corporate Development and Cisco Investments in a [blog about Decibel][5]. “But we realized we could do even more by engaging with the startup community earlier in its lifecycle.”
Indeed, Cisco already has an investment arm, Cisco Investments, that focuses on later-stage startups, the company says. Cisco said this arm invests $200 million to $300 million annually, and it will continue its charter of investing and partnering with best-in-class companies in core and adjacent markets.
Cisco didnt talk about how much money would be involved in Decibel, but according to a [CNBC report][6], Cisco is setting up Decibel as an independent firm with a separate pool of cash, an unusual model for corporate investors. The fund hasnt closed yet, but a [Securities and Exchange Commission filing][7] from October indicated that Sakoda was setting out to [raise $500 million][8], CNBC wrote.
**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][9] ]**
Decibel plans to invest anywhere from $5 million to $15 million in each startup in its portfolio, Cisco says.
“Cisco has a culture of leveraging both internal and external innovation, accelerating our rich internal development capabilities by our ability to also partner, invest and acquire,” Salvagno said.
He said the company recognizes that significant innovation happens outside the walls of Cisco. Cisco has acquired more than 200 companies, and more than one in eight Cisco employees joined as a result of an acquisition. "We have a deep bench of acquired founders, many of which play leadership roles within the company today, which continues to reinforce this entrepreneurial spirit," Salvagno said.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/money_salary_magnet_flying-money_money-magnet-by-brianajackson-getty-100787974-large.jpg
[2]: https://twitter.com/jonsakoda
[3]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
[4]: https://www.decibel.vc/the-blast/announcingdecibel
[5]: https://blogs.cisco.com/news/cisco-fuels-innovation-engine-with-investment-in-new-early-stage-vc-fund
[6]: https://www.cnbc.com/2019/03/26/cisco-introduces-decibel-an-early-stage-venture-firm-with-jon-sakoda.html
[7]: https://www.sec.gov/Archives/edgar/data/1754260/000175426018000002/xslFormDX01/primary_doc.xml
[8]: https://www.cnbc.com/2018/10/08/cisco-lead-investor-jon-sakoda-catalyst-labs-500-million.html
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE introduces hybrid cloud consulting business)
[#]: via: (https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
HPE introduces hybrid cloud consulting business
======
### HPE's Right Mix Advisor is designed to find a balance between on-premises and cloud systems.
![Hewlett Packard Enterprise][1]
Hybrid cloud is pretty much the de facto way to go, with only a few firms adopting a pure cloud play to replace their data center and only suicidal firms refusing to go to the cloud. But picking the right balance between on-premises and the cloud is tricky, and a mistake can be costly.
Enter Right Mix Advisor from Hewlett Packard Enterprise, a combination of consulting from HPE's Pointnext division and software tools. It draws on quite a few recent acquisitions: British cloud consultancy RedPixie, Amazon Web Services (AWS) specialists Cloud Technology Partners, and automated discovery capabilities from Irish startup iQuate.
Right Mix Advisor gathers data points from the companys entire enterprise, ranging from configuration management database systems (CMDBs), such as ServiceNow, to external sources, such as cloud providers. HPE says that in a recent customer engagement it scanned 9 million IP addresses across six data centers.
**[ Read also:[What is hybrid cloud computing][2]. | Learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
HPE Pointnext consultants then work with the clients IT teams to analyze the data to determine the optimal configuration for workload placement. Pointnext has become HPEs main consulting outfit following its divestiture of EDS, which it acquired in 2008 but spun off in a merger with CSC to form DXC Technology. Pointnext now has 25,000 consultants in 88 countries.
In a typical engagement, HPE claims it can deliver a concrete action plan within weeks, whereas previously businesses may have needed months to come to a conclusion using manual processes. HPE has found that migrating the right workloads to the right mix of hybrid cloud typically results in 40 percent total-cost-of-ownership savings.
Although HPE has thrown its weight behind AWS, that doesnt mean it doesnt support competitors. Erik Vogel, vice president of hybrid IT for HPE Pointnext, notes in the blog post announcing Right Mix Advisor that target environments could be Microsoft Azure or Azure Stack, AWS, Google or Ali Cloud.
“New service providers are popping up every day, and we see the big public cloud providers constantly producing new services and pricing models. As a result, the calculus for determining your right mix is constantly changing. If Azure, for example, offers a new service capability or a 10 percent pricing discount and it makes sense to leverage it, you want to be able to move an application seamlessly into that new environment,” he wrote.
Key to Right Mix Advisor is app migration, and Pointnext follows the 50/30/20 rule: about 50 percent of apps are suitable for migration to the cloud; for about 30 percent, migration is not worth the effort; and the remaining 20 percent should be retired.
“With HPE Right Mix Advisor, you can identify that 50 percent,” he wrote. “Rather than hand you a laundry list of 10,000 apps to start migrating, HPE Right Mix Advisor hones in on whats most impactful right now to meet your business goals the 10 things you can do on Monday morning that you can be confident will really help your business.”
HPE has already done some pilot projects with the Right Mix service and expects to expand it to include channel partners.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,126 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Identifying exceptional user experience (UX) in IoT platforms)
[#]: via: (https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all)
[#]: author: (Steven Hilton https://www.networkworld.com/author/Steven-Hilton/)
Identifying exceptional user experience (UX) in IoT platforms
======
### Examples of excellent IoT platform UX from the perspectives of 5 typical IoT platform personas.
![Leo Wolfert / Getty Images][1]
Enterprises are inundated with information about IoT platforms features and capabilities. But to find a long-lived IoT platform that minimizes ongoing development costs, enterprises must focus on exceptional user experience (UX) for 5 types of IoT platform users.
Marketing and sales literature from IoT platform vendors is filled with information about IoT platform features. And no doubt, enterprises choosing to buy IoT platform services need to understand the actual capabilities of IoT platforms, preferably by [testing a variety of IoT platforms][2], before making a purchase decision.
However, it is a lot harder to gauge the quality of an IoT platform UX than itemizing an IoT platforms features. Having excellent UX leads to lower platform deployment and management costs and higher customer satisfaction and retention. So enterprises should make UX one of their top criteria when selecting an IoT platform.
[RELATED: Storage tank operator turns to IoT for energy savings][3]
One of the ways to determine excellent IoT platform UX is to simulate the tasks conducted by typical IoT platform users. By completing these tasks, it becomes readily apparent when an IoT platform is exceptional or annoyingly bad.
In this blog, I describe excellent IoT platform UX from the perspectives of five typical IoT platform users or personas.
## Persona 1: platform administrator
A platform administrators primary role is to configure, monitor, and maintain the functionality of an IoT platform. A platform administrator is typically an IT employee responsible for maintaining and configuring the various data management, device management, access control, external integration, and monitoring services that comprise an IoT platform.
Typical platform administrator tasks include:
* configuration of the on-platform data visualization and data aggregation tools
* configuration of available device management functionality or execution of in-bulk device management tasks
* configuration and creation of on-platform complex event processing (CEP) workflows
* management and configuration of platform service orchestration
Enterprises should pick IoT platforms that offer superlative access to on-platform configuration functionality, with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred.
## Persona 2: platform operator
A platform operators primary role is to leverage an IoT platform to execute common day-to-day business-centric operations and services. While the responsibilities of a platform operator will vary based on enterprise vertical and use case, all platform operators conduct business rather than IoT domain tasks.
Typical platform operator tasks include:
* visualizing and aggregating on-platform data to view key business KPIs
* using device management functionality on a per-device basis
* creating, managing, and monitoring per-device and per-location event processing rules
* executing self-service administrative tasks, such as enrolling downstream operators
Enterprises should pick IoT platforms centered on excellent ease of use for business users. In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative interface rather than SQL or other programmatic interfaces.
## Persona 3: hardware and systems developer
A hardware and systems developers primary role is the integration and configuration of IoT assets into an IoT platform. The hardware and systems developer possesses very specific, detailed knowledge about IoT hardware (e.g., specific multipoint control units, embedded platforms, or PLC/SCADA control systems), and leverages this knowledge to enable protocol and asset compatibility with northbound platform services.
Typical hardware and systems developer tasks include:
* designing and implementing firmware for IoT assets based on either standardized IoT SDKs or platform-specific SDKs
* updating firmware or software packages over deployment lifecycles
* integrating manufacturer-specific protocols adapters into either IoT assets or the northbound platform
Enterprises should pick IoT platforms that allow hardware and systems developers to most efficiently design and implement low-level device and protocol functionality. An effective developer experience provides well-documented and fully-featured SDKs supporting a variety of languages and device architectures to enable integration with various types of IoT hardware.
## Persona 4: platform and backend developer
A platform and backend developers primary role is to execute customer-specific application logic and integrations within an IoT deployment. Customer-specific logic may include on-platform or on-edge custom applications, such as those used for analytics, data aggregation and normalization, or any type of event processing workflow. In addition, a platform and backend developer is responsible for integrating the IoT platform with external databases, analytic solutions, or business systems such as MES, ERP, or CRM applications.
Typical platform and backend developer tasks include:
* integrating streaming data from the IoT platform into external systems and applications
* configuring inbound and outbound platform actions and interactions with external systems
* configuring complex code-based event processing capabilities beyond the scope of a platform administrators knowledge or ability
* debugging low-level platform functionalities that require coding to detect or resolve
Enterprises should pick IoT platforms that provide access to well-documented, fully featured platform-level SDKs for application or service development. A best-in-class platform UX should provide real-time logging tools, debugging tools, and indexed and searchable access to all platform logs. Finally, a platform and backend developer is particularly dependent upon high-quality, platform-level documentation, especially for platform APIs.
## Persona 5: user interface and experience (UI/UX) developer
A UI/UX developers primary role is to design the various operator interfaces and monitoring views for an IoT platform. In more complex IoT deployments, various operator audiences will need to be addressed, including solution domain experts such as a factory manager; role-specific experts such as an equipment operator or factory technician; and business experts such as a supply-chain analyst or company executive.
Typical UI/UX developer tasks include:
* building and maintaining customer-specific dashboards and monitoring views on either the IoT platform or edge devices
* designing, implementing, and maintaining various operator consoles for a variety of operator audiences and customer-specific use cases
* ensuring good user experience for customers over the lifetime of an IoT implementation
Enterprises should pick IoT platforms that provide an exceptional variety and quality of UI/UX tools, such as dashboarding frameworks for on-platform monitoring solutions that are declaratively or programmatically customizable, as well as various widget and display blocks to help the developer rapidly implement customer-specific views. An IoT platform must also provide a UI/UX developer with appropriate debugging and logging tools for monitoring and operator console frameworks and platform APIs. Finally, a best-in-class platform should provide a sample dashboard, operator console, and on-edge monitoring implementation in order to enable the UI/UX developer to quickly become accustomed with platform paradigms and best practices.
Enterprises should make UX one of their top criteria when selecting an IoT platform. Having excellent UX allows enterprises to minimize platform deployment and management costs. At the same time, excellent UX allows enterprises to more readily launch new solutions to the market thereby increasing customer satisfaction and retention.
**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all
作者:[Steven Hilton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Steven-Hilton/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg
[2]: https://www.machnation.com/2018/09/25/announcing-mit-e-2-0-hands-on-benchmarking-for-iot-cloud-edge-and-analytics-platforms/
[3]: https://www.networkworld.com/article/3169384/internet-of-things/storage-tank-operator-turns-to-iot-for-energy-savings.html#tk.nww-fsb
[4]: /contributor-network/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS)
[#]: via: (https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS
======
### This week's roundup features new tech from MIT, big news in the automotive sector and a handy new level of centralization from a smaller IoT-focused company.
![Getty Images][1]
Much of whats exciting about IoT technology has to do with getting data from a huge variety of sources into one place so it can be mined for insight, but sensors used to gather that data are frequently legacy devices from the early days of industrial automation or cheap, lightweight, SoC-based gadgets without a lot of sophistication of their own.
Researchers at MIT have devised a system that can gather a certain slice of data from unsophisticated devices that are grouped on the same electrical circuit without adding sensors to each device.
**[ Check out our[corporate guide to addressing IoT security][2]. ]**
The technology is called non-intrusive load monitoring (NILM). It sits directly on the electrical circuits of a given building, vehicle, or other piece of infrastructure, identifies devices based on their power usage, and sends alerts when there are irregularities.
It seems likely to make IIoT-related waves once its out of testing and onto the market.
NILM was recently tested, said MITs news service, on a U.S. Coast Guard cutter based in Boston, where it was attached to the outside of an electrical wire “at a single point, without requiring any cutting or splicing of wires.”
Two such connections allowed the scientists to monitor roughly 20 separate devices on an electrical circuit, and the system was able to detect an anomalous amount of energy use from a component of the ships diesel engines known as a jacket water heater.
“[C]rewmembers were skeptical about the reading but went to check it anyway. The heaters are hidden under protective metal covers, but as soon as the cover was removed from the suspect device, smoke came pouring out, and severe corrosion and broken insulation were clearly revealed,” the MIT report stated. Two other important but slightly less critical faults were also detected by the system.
Its easy to see why NILM could prove to be an attractive technology for IIoT use in the future. It sounds as though its very simple to install, can operate without any kind of internet connection (though most implementers will probably want to connect it to a wider monitoring setup for a more holistic picture of their systems), and does all of its computational work locally. It can even be used for general energy audits. What, in short, is not to like?
**Volkswagen teams up with Amazon**
AWS has got a new flagship client for its growing IoT services in the form of the Volkswagen Group, which [announced][3] that AWS is going to design and build the Volkswagen Industrial Cloud, a floor-to-ceiling industrial IoT implementation aimed at improving uptime, flexibility, productivity and vehicle quality.
Real-time data from all 122 of VWs manufacturing plants around the world will be available to the system; everything from part tracking to comparative analysis of efficiency to even deeper forms of analytics will take place in the companys “data lake,” as the announcement calls it. Oh, and machine learning is part of it, too.
The German carmaker clearly believes that AWSs technology can provide a lot of help to its operations across the board, [even in the wake of a partnership with Microsoft for Azure-based cloud services announced last year.][4]
**IoT-in-a-box**
IoT can be very complicated. While individual components of any given implementation are often quite simple, each implementation usually contains a host of technologies that have to work in close concert. That means a lot of orchestration work has to go into making this stuff work.
Enter Digi International, which rolled out an IoT-in-a-box package called Digi Foundations earlier this month. The idea is to take a lot of the logistical legwork out of IoT implementations by integrating cloud-connection software and edge-computing capabilities into the companys core industrial router business. Foundations, which is packaged as a software subscription that adds these capabilities and more to the companys devices, also includes a built-in management layer, allowing for simplified configuration and monitoring.
OK, so its not quite all-in-one, but its still an impressive level of integration, particularly from a company that many might not have heard of before. Its also a potential bellwether for other smaller firms upping their technical sophistication in the IoT sector.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home7-100768495-large.jpg
[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[3]: https://www.volkswagen-newsroom.com/en/press-releases/volkswagen-and-amazon-web-services-to-develop-industrial-cloud-4780
[4]: https://www.volkswagenag.com/en/news/2018/09/volkswagen-and-microsoft-announce-strategic-partnership.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -1,85 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Setting kernel command line arguments with Fedora 30)
[#]: via: (https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/)
[#]: author: (Laura Abbott https://fedoramagazine.org/makes-fedora-kernel/)
Setting kernel command line arguments with Fedora 30
======
![][1]
Adding options to the kernel command line is a common task when debugging or experimenting with the kernel. The upcoming Fedora 30 release includes a change to use the Bootloader Spec ([BLS][2]). Depending on how you are used to modifying kernel command line options, your workflow may now change. Read on for more information.
To determine if your system is running with BLS or the older layout, look in the file
```
/etc/default/grub
```
If you see
```
GRUB_ENABLE_BLSCFG=true
```
in there, you are running with the BLS setup and you may need to change how you set kernel command line arguments.
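One quick way to check is with `grep` (a suggestion of mine, not part of the original article); on a BLS-enabled system it prints the matching line:
```
$ grep GRUB_ENABLE_BLSCFG /etc/default/grub
GRUB_ENABLE_BLSCFG=true
```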
If you only want to modify a single kernel entry (for example, to temporarily work around a display problem), you can use a grubby command:
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --args="amdgpu.dc=0"
```
To remove a kernel argument, you can use the
```
--remove-args
```
argument to grubby
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --remove-args="amdgpu.dc=0"
```
If there is an option that should be added to every kernel command line (for example, you always want to disable the use of the rdrand instruction for random number generation), you can run a grubby command:
```
$ grubby --update-kernel=ALL --args="nordrand"
```
This will update the command line of all kernel entries and save the option to the saved kernel command line for future entries.
If you later want to remove the option from all kernels, you can again use
```
--remove-args
```
with
```
--update-kernel=ALL
```
```
$ grubby --update-kernel=ALL --remove-args="nordrand"
```
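Whichever method you use, you can verify the result afterwards (my own addition, assuming the standard Fedora tooling): `grubby --info=ALL` lists every boot entry along with its argument string, and `/proc/cmdline` shows what the currently running kernel actually booted with:
```
$ grubby --info=ALL | grep ^args
$ cat /proc/cmdline
```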
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/
作者:[Laura Abbott][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/makes-fedora-kernel/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-kernel-1-816x345.jpg
[2]: https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault

View File

@ -0,0 +1,133 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Can Better Task Stealing Make Linux Faster?)
[#]: via: (https://www.linux.com/blog/can-better-task-stealing-make-linux-faster)
[#]: author: (Oracle )
Can Better Task Stealing Make Linux Faster?
======
_Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements._
### Load balancing via scalable task stealing
The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing tens to hundreds of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused.
I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all.
### Results
Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats:
* %find - percent of time spent in old and new functions that search for idle CPUs and tasks to steal and set the overloaded CPUs bitmap.
* steal - number of times a task is stolen from another CPU.

Elapsed time improves by 8 to 36%, costing at most 0.4% more find time.
![load balancing][1]
[Used with permission][2]
CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel:
![][3]
Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate.
### The code
As of this writing, this work is not yet upstream, but the latest patch series is at [https://lkml.org/lkml/2018/12/6/1253][4]. If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using:
```
# grep -q STEAL /sys/kernel/debug/sched_features && echo Yes
Yes
```
If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in [https://lkml.org/lkml/2018/12/6/1250][5]. However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with the kernel parameter sched_steal_node_limit=8 (or larger).
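Assuming the patch follows the kernel's usual `sched_features` convention for CONFIG_SCHED_DEBUG builds (my assumption; the post itself only mentions the boot parameter), you could also toggle stealing at runtime instead of rebooting. Writing `NO_STEAL` to the debugfs file disables the feature, and writing `STEAL` re-enables it:
```
# echo NO_STEAL > /sys/kernel/debug/sched_features
# echo STEAL > /sys/kernel/debug/sched_features
```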
### Future work
After the basic stealing algorithm is pushed upstream, I am considering the following enhancements:
* If stealing within the last-level cache does not find a candidate, steal across LLCs and NUMA nodes.
* Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues.
* Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported.
* Maintain a bitmap to identify idle cores and idle CPUs, for push balancing.
_This article originally appeared at [Oracle Developers Blog][6]._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/can-better-task-stealing-make-linux-faster
作者:[Oracle][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-load-balancing.png?itok=2Uk1yALt (load balancing)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b7a700fe-edc3-4ea0-876a-c91e1850b59b/Image/00c074f4282bcbaf0c10dd153c5dfa76/steal_graph.png
[4]: https://lkml.org/lkml/2018/12/6/1253
[5]: https://lkml.org/lkml/2018/12/6/1250
[6]: https://blogs.oracle.com/linux/can-better-task-stealing-make-linux-faster

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment)
[#]: via: (https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment
======
### Senator and presidential candidate Elizabeth Warren suggests national legislation focused on farm equipment. But thats only a first step. The data collected by that equipment must also be considered.
![Thinkstock][1]
Theres a surprising battle being fought on Americas farms, between farmers and the companies that sell them tractors, combines, and other farm equipment. Remarkably, the outcome of that war could have far-reaching implications for the internet of things (IoT) — and now Massachusetts senator and Democratic presidential candidate Elizabeth Warren has weighed in with a proposal that could shift the balance of power in this largely under-the-radar struggle.
## Right to repair farm equipment
Heres the story: As part of a new plan to support family farms, Warren came out in support of a national right-to-repair law for farm equipment. That might not sound like a big deal, but it raises the stakes in a long-simmering fight between farmers and equipment makers over who really controls access to the equipment — and to the increasingly critical data gathered by the IoT capabilities built into it.
**[ Also read:[Right-to-repair smartphone ruling loosens restrictions on industrial, farm IoT][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
[Warrens proposal reportedly][4] calls for making all diagnostic tools and manuals freely available to equipment owners, as well as to independent repair shops — not just vendors and their authorized agents — and focuses solely on farm equipment.
Thats a great start, and kudos to Warren for being by far the most prominent politician to weigh in on the issue.
## Part of a much bigger IoT data issue
But Warren's proposal merely scratches the surface of the much larger issue of who actually controls the equipment and devices that consumers and businesses buy. Even more important, it doesnt address the critical data gathered by IoT sensors in everything ranging from smartphones, wearables, and smart-home devices to private and commercial vehicles and aircraft to industrial equipment.
And as many farmers can tell you, this isnt some academic argument. That data has real value — not to mention privacy implications. For farmers, its GPS-equipped smart sensors tracking everything — from temperature to moisture to soil acidity — that can determine the most efficient times to plant and harvest crops. For consumers, it might be data that affects their home or auto insurance rates, or even divorce cases. For manufacturers, it might cover everything from which equipment needs maintenance to potential issues with raw materials or finished products.
The solution is simple: IoT users need consistent regulations that ensure free access to what is really their own data, and give them the option to share that data with the equipment vendors — if they so choose and on their own terms.
At the very least, users need clear statements of the rules, so they know exactly what theyre getting — and not getting — when they buy IoT-enhanced devices and equipment. And if theyre being honest, most equipment vendors would likely admit that clear rules would benefit them as well by creating a level playing field, reducing potential liabilities and helping to avoid making customers unhappy.
Sen. Warren made headlines earlier this month by proposing to ["break up" tech giants][5] such as Amazon, Apple, and Facebook. If she really wants to help technology buyers, prioritizing the right-to-repair and the associated right to own your own data seems like a more effective approach.
**[ Now read this:[Big trouble down on the IoT farm][6] ]**
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/ai_agriculture_primary-100715481-large.jpg
[2]: https://www.networkworld.com/article/3317696/the-recent-right-to-repair-smartphone-ruling-will-also-affect-farm-and-industrial-equipment.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://appleinsider.com/articles/19/03/27/presidential-candidate-elizabeth-warren-focusing-right-to-repair-on-farmers-not-tech
[5]: https://www.nytimes.com/2019/03/08/us/politics/elizabeth-warren-amazon.html
[6]: https://www.networkworld.com/article/3262631/big-trouble-down-on-the-iot-farm.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft introduces Azure Stack for HCI)
[#]: via: (https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Microsoft introduces Azure Stack for HCI
======
### Azure Stack is great for your existing hardware, so Microsoft is covering the bases with a turnkey solution.
![Thinkstock/Microsoft][1]
Microsoft has introduced Azure Stack HCI Solutions, a new implementation of its on-premises Azure product specifically for [Hyper Converged Infrastructure][2] (HCI) hardware.
[Azure Stack][3] is an on-premises version of Microsofts Azure cloud service. It gives companies a chance to migrate to an Azure environment within the confines of their own enterprise rather than in Microsofts data centers. Once you have migrated your apps and infrastructure to Azure Stack, moving between your systems and Microsofts cloud service is easy.
HCI is the latest trend in server hardware. It uses scale-out hardware systems and a full software-defined platform to handle [virtualization][4] and management. Its designed to reduce the complexity of deployment and ongoing management, since everything, hardware and software, ships fully integrated.
**[ Read also:[12 most powerful hyperconverged infrasctructure vendors][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
It makes sense for Microsoft to take this step. Azure Stack was ideal for existing enterprise hardware. Now you can deploy a whole new hardware configuration to run Azure in-house, complete with Hyper-V-based software-defined compute, storage, and networking.
The Windows Admin Center is the main management tool for Azure Stack HCI. It connects to other Azure tools, such as Azure Monitor, Azure Security Center, Azure Update Management, Azure Network Adapter, and Azure Site Recovery.
“We are bringing our existing HCI technology into the Azure Stack family for customers to run virtualized applications on-premises with direct access to Azure management services such as backup and disaster recovery,” wrote Julia White, corporate vice president of Microsoft Azure, in a [blog post announcing Azure Stack HCI][7].
Its not so much a new product launch as a rebranding. When Microsoft launched Server 2016, it introduced a version called Windows Server Software-Defined Data Center (SDDC), which was built on the Hyper-V hypervisor; Microsoft says as much in a [FAQ][8] published as part of the announcement.
"Azure Stack HCI is the evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services,” the company said.
Microsoft introduced Azure Stack in 2017, but it was not the first to offer an on-premises cloud option. That distinction goes to [OpenStack][9], a joint project between Rackspace and NASA built on open-source code. Amazon followed with its own product, called [Outposts][10].
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/5_microsoft-azure-100733132-large.jpg
[2]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence.html
[3]: https://www.networkworld.com/article/3207748/microsoft-introduces-azure-stack-its-answer-to-openstack.html
[4]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
[5]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.htmll
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://azure.microsoft.com/en-us/blog/enabling-customers-hybrid-strategy-with-new-microsoft-innovation/
[8]: https://azure.microsoft.com/en-us/blog/announcing-azure-stack-hci-a-new-member-of-the-azure-stack-family/
[9]: https://www.openstack.org/
[10]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Motorola taps freed-up wireless spectrum for enterprise LTE networks)
[#]: via: (https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Motorola taps freed-up wireless spectrum for enterprise LTE networks
======
### Citizens Broadband Radio Service (CBRS) is developing. Out of the gate, Motorola is creating a land mobile radio (LMR) system that includes enterprise-level, voice handheld devices and fast, private data networks.
![Jiraroj Praditcharoenkul / Getty Images][1]
In a move that could upend how workers access data in the enterprise, Motorola has announced a broadband product that it says will deliver data at double the capacity and four times the range of Wi-Fi for end users. The handheld, walkie-talkie-like device, called Mototrbo Nitro, will, importantly, also include a voice channel. “Business-critical voice with private broadband data,” as [Motorola describes it on its website][2].
The company sees the product being implemented in traditional, moving-around, voice communications environments, such as factories and warehouses, that increasingly need data supplementation, too. A shop floor that has an electronically delivered repair manual, with included video demonstration, could be one example. Video could be two-way, even.
**[ Also read:[Wi-Fi 6 is coming to a router near you][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
The product takes advantage of upcoming Citizens Broadband Radio Service (CBRS) spectrum. That's a swath of radio bandwidth that's being released by the Federal Communications Commission (FCC) in the 3.5GHz band. It's a frequency chunk that is also expected to be used heavily for 5G. In this case, though, Motorola is creating a private LTE network for the enterprise.
The CBRS band marks the first time broadband spectrum has been made publicly available, [Motorola explains in a white paper][5] (pdf) — organizations don't have to buy licenses, yet they can get access to useful spectrum: [a tiered sharing system, where auction winners will get priority access licenses but others will have some access too, is proposed][6] by the FCC. The non-prioritized open access could be used by any enterprise for whatever it needs — internet of things (IoT) deployments or private networks.
## Motorola's pitch for using a private broadband network
Why a private broadband network and not simply cell phones? One giveaway line is in Motorola's promotional video: “Without sacrificing control,” it says. What it means is that the firm thinks there's a market for companies that want to run entire business communications systems — data and voice — without involvement from possibly nosy Mobile Network Operator phone companies. [I've written before about how control over security is prompting large industrials to explore private networks][7] more. Motorola manages the network in this case, though, for the enterprise.
Motorola also refers to potentially limited or intermittent onsite coverage and congestion on public, commercial, single-platform voice and data networks. That's particularly the case in factories, [Motorola says in an ebook][8]. Heavy machinery containing radio-unfriendly metal can hinder Wi-Fi and cellular, it claims, and traditional Land Mobile Radios (LMRs), such as walkie-talkies and vehicle-mounted mobile radios, don't handle data natively. In particular, it says that if you want to get into artificial intelligence (AI) and analytics, you need a more capable voice and fast-data communications setup.
## Industrial IoT uses for Motorola's Nitro network
Industrial IoT will be another beneficiary, Motorola says. It says its CBRS Nitro network could include instant notifications of equipment failures that traditional products can't provide. It also suggests merging fixed security cameras with “photos and videos of broken machines and sending real-time video to an expert.”
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][9] ]**
Motorola also suggests that by separating consumer Wi-Fi (as is offered in hospitality and transport verticals, for example) from business-critical systems, one reduces traffic congestion risks.
The highly complicated CBRS band-sharing system is still not through its government testing. “However, we could deploy customer systems under an experimental license,” a Motorola representative told me.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_robotic_arm_gear_engineer_tablet_by_jiraroj_praditcharoenkul_gettyimages-1091790364_2400x1600-100788459-large.jpg
[2]: https://www.motorolasolutions.com/en_us/products/two-way-radios/mototrbo/nitro.html
[3]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.motorolasolutions.com/content/dam/msi/docs/products/mototrbo/nitro/cbrs-white-paper.pdf
[6]: https://www.networkworld.com/article/3300339/private-lte-using-new-spectrum-approaching-market-readiness.html
[7]: https://www.networkworld.com/article/3319176/private-5g-networks-are-coming.html
[8]: https://img04.en25.com/Web/MotorolaSolutionsInc/%7B293ce809-fde0-4619-8507-2b42076215c3%7D_radio_evolution_eBook_Nitro_03.13.19_MS_V3.pdf?elqTrackId=850d56c6d53f4013afa2290a66d6251f&elqaid=2025&elqat=2
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -1,70 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 cool text-based email clients)
[#]: via: (https://fedoramagazine.org/3-cool-text-based-email-clients/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
3 cool text-based email clients
======
![][1]
Writing and receiving email is a big part of everyone's daily routine and choosing an email client is usually a major decision. The Fedora OS provides a large choice of email clients and among these are text-based email applications.
### Mutt
Mutt is probably one of the most popular text-based email clients. It supports all the common features that one would expect from an email client. Color coding, mail threading, POP3, and IMAP are all supported by Mutt. But one of its best features is that it's highly configurable. Indeed, the user can easily change the keybindings and create macros to adapt the tool to a particular workflow.
To give Mutt a try, install it [using sudo][2] and dnf:
```
$ sudo dnf install mutt
```
To help newcomers get started, Mutt has a very comprehensive [wiki][3] full of macro examples and configuration tricks.
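As a small illustration of that configurability, here is a minimal sketch of a few lines appended to ~/.muttrc. The IMAP server, folder names, and key choices below are placeholders, not recommendations:

```
$ cat >> ~/.muttrc <<'EOF'
# Placeholder account details -- replace with your own server and folders
set folder = "imaps://imap.example.com/"
set spoolfile = "+INBOX"
# Rebind a key: Tab jumps to the next unread message
bind index <tab> next-unread
# A macro: pressing "S" in the index saves the message to an Archive folder
macro index S "<save-message>+Archive<enter>" "archive message"
EOF
```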
### Alpine
Alpine is also among the most popular text-based email clients. It's more beginner-friendly than Mutt, and you can configure most of Alpine via the application itself — no need to edit a configuration file. One powerful feature of Alpine is the ability to score emails. This is particularly interesting for users who are subscribed to a high-volume mailing list like Fedora's [devel list][4]. Using scores, Alpine can sort email based on the user's interests, showing emails with a high score first.
Alpine is also available to install from Fedora's repository using dnf.
```
$ sudo dnf install alpine
```
While using Alpine, you can easily access the documentation by pressing the _Ctrl+G_ key combination.
### nmh
nmh (new Mail Handling) follows the UNIX tools philosophy. It provides a collection of single-purpose programs to send, receive, save, retrieve, and manipulate e-mail messages. This lets you swap individual _nmh_ commands for other programs, or create scripts around _nmh_ to build more customized tools. For example, you can use Mutt with nmh.
nmh can be easily installed using dnf.
```
$ sudo dnf install nmh
```
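Once installed, a typical session chains nmh's single-purpose commands together. A short sketch (the sender and sequence names below are made up):

```
$ inc                               # incorporate new mail into the inbox
$ scan                              # list messages in the current folder
$ pick -from alice -sequence todo   # gather alice's messages into a sequence
$ show todo                         # display the messages in that sequence
```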
To learn more about nmh and mail handling in general you can read this GPL-licensed [book][5].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/3-cool-text-based-email-clients/
作者:[Clément Verna][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2018/07/email-clients-816x345.png
[2]: https://fedoramagazine.org/howto-use-sudo/
[3]: https://gitlab.com/muttmua/mutt/wikis/home
[4]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
[5]: https://rand-mh.sourceforge.io/book/

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel's Agilex FPGA family targets data-intensive workloads)
[#]: via: (https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all)
[#]: author: (Marc Ferranti https://www.networkworld.com)
Intel's Agilex FPGA family targets data-intensive workloads
======
Agilex processors are the first Intel FPGAs to use 10nm manufacturing, achieving a performance boost for AI, financial and IoT workloads
![Intel][1]
After teasing out details about the technology for a year and a half under the code name Falcon Mesa, Intel has unveiled the Agilex family of FPGAs, aimed at data-center and network applications that are processing increasing amounts of data for AI, financial, database and IoT workloads.
The Agilex family, expected to start appearing in devices in the third quarter, is part of a new wave of more easily programmable FPGAs that is beginning to take an increasingly central place in computing as data centers are called on to handle an explosion of data.
**Learn about edge networking**
* [How edge networking and IoT will reshape data centers][2]
* [Edge computing best practices][3]
* [How edge computing can help secure the IoT][4]
FPGAs, or field programmable gate arrays, are built around a matrix of configurable logic blocks (CLBs) linked via programmable interconnects that can be programmed after manufacturing and even reprogrammed after being deployed in devices to run algorithms written for specific workloads. They can thus be more efficient on a performance-per-watt basis than general-purpose CPUs, even while driving higher performance.
### Accelerated computing takes center stage
CPUs can be packaged with FPGAs, offloading specific tasks to them and enhancing overall data-center and network efficiency. The concept, known as accelerated computing, is increasingly viewed by data-center and network managers as a cost-efficient way to handle increasing data and network traffic.
"This data is creating what I call an innovation race across from the edge to the network to the cloud," said Dan McNamara, general manager of the Programmable Solutions Group (PSG) at Intel. "We believe that were in the largest adoption phase for FPGAs in our history."
The Agilex family is the first line of FPGAs developed from the ground up in the wake of [Intel's $16.7 billion 2015 acquisition of Altera][5]. It's the first FPGA line to be made with Intel's 10nm manufacturing process, which adds billions of transistors to the FPGAs compared to earlier generations. Along with Intel's second-generation HyperFlex architecture, it helps give Agilex 40 percent higher performance than the company's current high-end FPGA family, the Stratix 10 line, Intel says.
The HyperFlex architecture includes additional registers called Hyper-Registers (places on a processor that temporarily hold data), located throughout the core fabric to enhance bandwidth as well as area and power efficiency.
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
### Memory coherency is key
Agilex FPGAs are also the first processors to support [Compute Express Link (CXL), a high-speed interconnect][7] designed to maintain memory coherency among CPUs like Intel's second-generation Xeon Scalable processors and purpose-built accelerators like FPGAs and GPUs. It ensures that different processors don't clash when trying to write to the same memory space, essentially allowing CPUs and accelerators to share memory.
"By having this CXL bus you can actually write applications that will use all the real memory so what that does is it simplifies the programming model in large memory workloads," said Patrick Moorhead, founder and principal at Moor Insights & Strategy.
The ability to integrate FPGAs, other accelerators and CPUs is key to Intel's accelerated computing strategy for the data center. Intel calls it "any to any" integration.
### 'Any-to-any' integration is crucial for the data center
The Agilex family uses embedded multi-die interconnect bridge (EMIB) packaging technology to integrate, for example, Xeon Scalable CPUs or ASICs (special-function processors that are not reprogrammable) alongside FPGA fabric. Intel last year bought eASIC, a maker of structured ASICs, which the company describes as an intermediary technology between FPGAs and ASICs. The idea is to deliver products that offer a mix of functionality to achieve optimal cost and performance efficiency for data-intensive workloads.
Intel underscored the importance of processor integration for the data center by unveiling Agilex on Tuesday at its Data Centric Innovation Day in San Francisco, when it also discussed plans for its second generation Xeon Scalable line.
Traditionally, FPGAs were mainly used in embedded devices, communications equipment and in hyperscale data centers, and not sold directly to enterprises. But several products based on Intel Stratix 10 and Arria 10 FPGAs are now being sold to enterprises, including in Dell EMC and Fujitsu off-the-shelf servers.
Making FPGAs easier to program is key to making them more mainstream. "What's really, really important is the software story," said Intel's McNamara. "None of this really matters if we can't generate more users and make it easier to program FPGAs."
Intel's Quartus Prime design tool will be available for Agilex hardware developers, but the real breakthrough for FPGA software development will be Intel's OneAPI concept, announced in December.
"OneAPI is is an effort by Intel to be able to have programmers write to OneAPI and OneAPI determines the best piece of silicon to run it on," Moorhead said. "I lovingly refer to it as the magic API; this is the big play I always thought Intel was gonna be working on ever since it bought Altera. The first thing I expect to happen are the big enterprise developers like SAP and Oracle to write to Agilex, then smaller ISVs, then custom enterprise applications."
![][8]
Intel plans three different product lines in the Agilex family from low to high end, the F-, I- and M-series aimed at different applications and processing requirements. The Agilex family, depending on the series, supports PCIe (peripheral component interconnect express) Gen 5, and different types of memory including DDR5 RAM, HBM (high-bandwidth memory) and Optane DC persistent memory. It will offer up to 112G bps transceiver data rates and a greater mix of arithmetic precision for AI, including bfloat16 number format.
In addition to accelerating server-based workloads like AI, genomics, financial and database applications, FPGAs play an important part in networking. Their cost-per-watt efficiency makes them suitable for edge networks and IoT devices, as well as for deep packet inspection. They can also be used in 5G base stations; as 5G standards evolve, they can be reprogrammed. Once 5G standards are hardened, the "any to any" integration will allow processing to be offloaded to special-purpose ASICs for ultimate cost efficiency.
### Agilex will compete with Xilinx's ACAPs
Agilex will likely vie with Xilinx's upcoming [Versal product family][9], due out in devices in the second half of the year. Xilinx competed for years with Altera in the FPGA market, and with Versal has introduced what it says is [a new product category, the Adaptive Compute Acceleration Platform (ACAP)][10]. Versal ACAPs will be made using TSMC's 7nm manufacturing process technology, though because Intel achieves high transistor density, the number of transistors offered by Agilex and Versal chips will likely be equivalent, noted Moorhead.
Though Agilex and Versal differ in details, the essential pitch is similar: the programmable processors offer a wider variety of programming options than prior generations of FPGA, work with CPUs to accelerate data-intensive workloads, and offer memory coherence. Rather than CXL, though, the Versal family uses the cache coherent interconnect for accelerators (CCIX) interconnect fabric.
For the moment, neither Intel nor Xilinx has announced OEM support for Agilex or Versal products that will be sold to the enterprise, but that should change as the year progresses.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all
作者:[Marc Ferranti][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/agilex-100792596-large.jpg
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://www.networkworld.com/article/2903454/intel-could-strengthen-its-server-product-stack-with-altera.html
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.networkworld.com/article/3359254/data-center-giants-announce-new-high-speed-interconnect.html
[8]: https://images.idgesg.net/images/article/2019/04/agilex-family-100792597-large.jpg
[9]: https://www.xilinx.com/news/press/2018/xilinx-unveils-versal-the-first-in-a-new-category-of-platforms-delivering-rapid-innovation-with-software-programmability-and-scalable-ai-inference.html
[10]: https://www.networkworld.com/article/3263436/fpga-maker-xilinx-aims-range-of-software-programmable-chips-at-data-centers.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -1,124 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 useful open source log analysis tools)
[#]: via: (https://opensource.com/article/19/4/log-analysis-tools)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
5 useful open source log analysis tools
======
Monitoring network activity is as important as it is tedious. These
tools can make it easier.
![People work on a computer server][1]
Monitoring network activity can be a tedious job, but there are good reasons to do it. For one, it allows you to find and investigate suspicious logins on workstations, devices connected to networks, and servers while identifying sources of administrator abuse. You can also trace software installations and data transfers to identify potential issues in real time rather than after the damage is done.
Those logs also go a long way towards keeping your company in compliance with the [General Data Protection Regulation][2] (GDPR) that applies to any entity operating within the European Union. If you have a website that is viewable in the EU, you qualify.
Logging—both tracking and analysis—should be a fundamental process in any monitoring infrastructure. A transaction log file is necessary to recover a SQL server database from disaster. Further, by tracking log files, DevOps teams and database administrators (DBAs) can maintain optimum database performance or find evidence of unauthorized activity in the case of a cyber attack. For this reason, it's important to regularly monitor and analyze system logs. It's a reliable way to re-create the chain of events that led up to whatever problem has arisen.
There are quite a few open source log trackers and analysis tools available today, making choosing the right resources for activity logs easier than you think. The free and open source software community offers logging tools that work with all sorts of sites and just about any operating system. Here are five of the best I've used, in no particular order.
### Graylog
[Graylog][3] started in Germany in 2011 and is now offered as either an open source tool or a commercial solution. It is designed to be a centralized log management system that receives data streams from various servers or endpoints and allows you to browse or analyze that information quickly.
![Graylog screenshot][4]
Graylog has built a positive reputation among system administrators because of its ease in scalability. Most web projects start small but can grow exponentially. Graylog can balance loads across a network of backend servers and handle several terabytes of log data each day.
IT administrators will find Graylog's frontend interface to be easy to use and robust in its functionality. Graylog is built around the concept of dashboards, which allows you to choose which metrics or data sources you find most valuable and quickly see trends over time.
When a security or performance incident occurs, IT administrators want to be able to trace the symptoms to a root cause as fast as possible. Search functionality in Graylog makes this easy. It has built-in fault tolerance that can run multi-threaded searches so you can analyze several potential threats together.
### Nagios
[Nagios][5] started with a single developer back in 1999 and has since evolved into one of the most reliable open source tools for managing log data. The current version of Nagios can integrate with servers running Microsoft Windows, Linux, or Unix.
![Nagios Core][6]
Its primary product is a log server, which aims to simplify data collection and make information more accessible to system administrators. The Nagios log server engine will capture data in real-time and feed it into a powerful search tool. Integrating with a new endpoint or application is easy thanks to the built-in setup wizard.
Nagios is most often used in organizations that need to monitor the security of their local network. It can audit a range of network-related events and help automate the distribution of alerts. Nagios can even be configured to run predefined scripts if a certain condition is met, allowing you to resolve issues before a human has to get involved.
As part of network auditing, Nagios will filter log data based on the geographic location where it originates. That means you can build comprehensive dashboards with mapping technology to understand how your web traffic is flowing.
### Elastic Stack (the "ELK Stack")
[Elastic Stack][7], often called the ELK Stack, is one of the most popular open source tools among organizations that need to sift through large sets of data and make sense of their system logs (and it's a personal favorite, too).
![ELK Stack][8]
Its primary offering is made up of three separate products: Elasticsearch, Kibana, and Logstash:
* As its name suggests, _**Elasticsearch**_ is designed to help users find matches within datasets using a wide range of query languages and types. Speed is this tool's number one advantage. It can be expanded into clusters of hundreds of server nodes to handle petabytes of data with ease.
* _**Kibana**_ is a visualization tool that runs alongside Elasticsearch to allow users to analyze their data and build powerful reports. When you first install the Kibana engine on your server cluster, you will gain access to an interface that shows statistics, graphs, and even animations of your data.
  * The final piece of the ELK Stack is _**Logstash**_, which acts as a purely server-side pipeline into the Elasticsearch database. You can integrate Logstash with a variety of coding languages and APIs so that information from your websites and mobile applications is fed directly into your powerful Elastic Stack search engine (see the query sketch just after this list).
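To make that pipeline concrete, here is a minimal, hedged example of querying Elasticsearch's search API directly from the shell. It assumes a local node on the default port 9200 and an index named logs; both are placeholders to adapt to your own deployment:

```
$ curl -s 'http://localhost:9200/logs/_search?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "query": { "match": { "message": "error" } },
  "size": 5
}'
```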
A unique feature of ELK Stack is that it allows you to monitor applications built on open source installations of WordPress. In contrast to most out-of-the-box security audit log tools that [track admin and PHP logs][9] but little else, ELK Stack can sift through web server and database logs.
Poor log tracking and database management are among the [most common causes of poor website performance][10]. Failure to regularly check, optimize, and empty database logs can not only slow down a site but could lead to a complete crash as well. Thus, the ELK Stack is an excellent tool for every WordPress developer's toolkit.
### LOGalyze
[LOGalyze][11] is an organization based in Hungary that builds open source tools for system administrators and security experts to help them manage server logs and turn them into useful data points. Its primary product is available as a free download for either personal or commercial use.
![LOGalyze][12]
LOGalyze is designed to work as a massive pipeline in which multiple servers, applications, and network devices can feed information using the Simple Object Access Protocol (SOAP) method. It provides a frontend interface where administrators can log in to monitor the collection of data and start analyzing it.
From within the LOGalyze web interface, you can run dynamic reports and export them into Excel files, PDFs, or other formats. These reports can be based on multi-dimensional statistics managed by the LOGalyze backend. It can even combine data fields across servers or applications to help you spot trends in performance.
LOGalyze is designed to be installed and configured in less than an hour. It has prebuilt functionality that allows it to gather audit data in formats required by regulatory acts. For example, LOGalyze can easily run different HIPAA reports to ensure your organization is adhering to health regulations and remaining compliant.
### Fluentd
If your organization has data sources living in many different locations and environments, your goal should be to centralize them as much as possible. Otherwise, you will struggle to monitor performance and protect against security threats.
[Fluentd][13] is a robust solution for data collection and is entirely open source. It does not offer a full frontend interface but instead acts as a collection layer to help organize different pipelines. Fluentd is used by some of the largest companies worldwide but can be implemented in smaller organizations as well.
![Fluentd architecture][14]
The biggest benefit of Fluentd is its compatibility with the most common technology tools available today. For example, you can use Fluentd to gather data from web servers like Apache, sensors from smart devices, and dynamic records from MongoDB. What you do with that data is entirely up to you.
Fluentd is based around the JSON data format and can be used in conjunction with [more than 500 plugins][15] created by reputable developers. This allows you to extend your logging data into other applications and drive better analysis from it with minimal manual effort.
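As a minimal sketch of what that looks like in practice, the following configuration tails an Apache access log and prints each parsed event to stdout. The log path is an assumption and will vary by system:

```
$ cat > fluent.conf <<'EOF'
# Follow an Apache access log as new lines are written
<source>
  @type tail
  path /var/log/httpd/access_log
  pos_file /tmp/access_log.pos
  tag apache.access
  <parse>
    @type apache2
  </parse>
</source>

# Print every matching event to stdout for inspection
<match apache.access>
  @type stdout
</match>
EOF
$ fluentd -c fluent.conf
```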
### The bottom line
If you aren't already using activity logs for security reasons, governmental compliance, and measuring productivity, commit to changing that. There are plenty of plugins on the market that are designed to work with multiple environments and platforms, even on your internal network. Don't wait for a serious incident to justify taking a proactive approach to log maintenance and oversight.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/log-analysis-tools
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server)
[2]: https://opensource.com/article/18/4/gdpr-impact
[3]: https://www.graylog.org/products/open-source
[4]: https://opensource.com/sites/default/files/uploads/graylog-data.png (Graylog screenshot)
[5]: https://www.nagios.org/downloads/
[6]: https://opensource.com/sites/default/files/uploads/nagios_core_4.0.8.png (Nagios Core)
[7]: https://www.elastic.co/products
[8]: https://opensource.com/sites/default/files/uploads/elk-stack.png (ELK Stack)
[9]: https://www.wpsecurityauditlog.com/benefits-wordpress-activity-log/
[10]: https://websitesetup.org/how-to-speed-up-wordpress/
[11]: http://www.logalyze.com/
[12]: https://opensource.com/sites/default/files/uploads/logalyze.jpg (LOGalyze)
[13]: https://www.fluentd.org/
[14]: https://opensource.com/sites/default/files/uploads/fluentd-architecture.png (Fluentd architecture)
[15]: https://opensource.com/article/18/9/open-source-log-aggregation-tools

View File

@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 features developers should know about Selenium IDE)
[#]: via: (https://opensource.com/article/19/4/features-selenium-ide)
[#]: author: (Al Sargent https://opensource.com/users/alsargent)
9 features developers should know about Selenium IDE
======
The new Selenium IDE brings the benefits of functional test automation
to many IT professionals—and to frontend developers specifically.
![magnifying glass on computer screen][1]
There has long been a stigma associated with using record-and-playback tools for testing rather than scripted QA automation tools like [Selenium Webdriver][2], [Cypress][3], and [WebdriverIO][4].
Record-and-playback tools are perceived to suffer from many issues, including a lack of cross-browser support, no way to run scripts in parallel or from CI build scripts, poor support for responsive web apps, and no way to quickly diagnose frontend bugs.
Needless to say, it's been somewhat of a rough road for these tools, and after Selenium IDE [went end-of-life][5] in 2017, many thought the road for record and playback would end altogether.
Well, it turns out this perception was wrong. Not long after the Selenium IDE project was discontinued, my colleagues at [Applitools approached the Selenium open source community][6] to see how they could help.
Since then, much of Selenium IDE's code has been revamped. The code is now freely available on GitHub under an Apache 2.0 license, managed by the Selenium community, and supported by [two full-time engineers][7], one of whom literally wrote the book on [Selenium testing][8].
![Selenium IDE's GitHub repository][9]
The new Selenium IDE brings the benefits of functional test automation to many IT professionals—and to frontend developers specifically. Here are nine things developers should know about the new Selenium IDE.
### 1\. Selenium IDE is now cross-browser
When the record-and-playback tool first came out in 2006, Firefox was the shiny new browser it hitched its wagon to, and it remained that way for a decade. No more! Selenium IDE is now available as a [Google Chrome Extension][10] and [Firefox Add-on][11].
Even better, Selenium IDE can run its tests on Selenium WebDriver servers by using Selenium IDE's new command-line test runner, [SIDE Runner][12]. SIDE Runner blends elements of Selenium IDE and Selenium Webdriver. It takes a Selenium IDE script, saved as a [**.side** file][13], and runs it using browser drivers such as [ChromeDriver][14], [EdgeDriver][15], Firefox's [Geckodriver][16], [IEDriver][17], and [SafariDriver][18].
SIDE Runner and the other drivers above are available as [straightforward npm installs][12]. Here's what it looks like in action.
![SIDE Runner][19]
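As a sketch, installing the runner and playing back a recorded script from a shell might look like the following; the .side filename here is a placeholder:

```
$ npm install -g selenium-side-runner chromedriver   # runner plus a browser driver
$ selenium-side-runner my-recorded-tests.side        # play the recorded suite back
```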
### 2\. No more brittle functional tests
For years, brittle tests have been an issue for functional tests—whether you record them or code them by hand. Now that developers are releasing new features more frequently, their user interface (UI) code is constantly changing as well. When a UI changes, object locators often change, too.
Selenium IDE fixes that by capturing multiple object locators when you record your script. During playback, if Selenium IDE can't find one locator, it tries each of the other locators until it finds one that works. Your test will fail only if none of the locators work. This doesn't guarantee scripts will always play back, but it does insulate scripts against numerous changes. As you can see below, Selenium IDE captures linkText, an xPath expression, and CSS-based locators.
![Selenium IDE captures linkText, an xPath expression, and CSS-based locators][20]
### 3\. Conditional logic to handle UI features
When testing web apps, scripts have to handle intermittent UI elements that can randomly appear in your app. These come in the form of cookie notices, popups for special offers, quote requests, newsletter subscriptions, paywall notifications, adblocker requests, and more.
Conditional logic is a great way to handle these intermittent UI features. Developers can easily insert conditional logic—also called control flow—into Selenium IDE scripts. [Here are details][21] and how it looks.
![Selenium IDE's Conditional logic][22]
### 4\. Support for embedded code
As broad as the new [Selenium IDE API][23] is, it doesn't do everything. For this reason, Selenium IDE has **[execute script][24]** and **[execute async script][25]** commands that let your script call a JavaScript snippet.
This gives developers a tremendous amount of flexibility to take advantage of JavaScript and its wide range of libraries. To use it, click on the test step where you want JavaScript to run, choose **Insert New Command**, and enter **execute script** or **execute async script** in the command field, as shown below.
![Selenium IDE's command line][26]
### 5\. Selenium IDE runs from CI build scripts
Because SIDE Runner is called from the command line, you can easily fit it into CI build scripts, so long as the CI server can call **selenium-side-runner** and upload the **.side** file (the test script) as a build artifact. For example, here's how to upload an input file in [Jenkins][27], [Travis][28], and [CircleCI][29].
This means Selenium IDE can be better integrated into the software development technology stack. In addition, the scripts created by less-technical QA team members—including business analysts—can run with every build. This helps better align QA with the developer so fewer bugs escape into production.
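A hedged sketch of such a CI step, assuming the .side file has been fetched as a build artifact (the path and the --output-directory option are worth verifying against your runner version):

```
$ npm install -g selenium-side-runner chromedriver
$ selenium-side-runner artifacts/smoke.side --output-directory=results
```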
### 6\. Support for third-party plugins
Imagine companies building plugins to have Selenium IDE do all kinds of things, like uploading scripts to a functional testing cloud, a load testing cloud, or a production application monitoring service.
Plenty of companies have integrated Selenium Webdriver into their offerings, and I bet the same will happen with Selenium IDE. You can also [build your own Selenium IDE plugin][30].
### 7\. Visual UI testing
Speaking of new plugins, Applitools introduced a new Selenium IDE plugin to add artificial intelligence-powered visual validations to the equation. Available through the [Chrome][31] and [Firefox][32] stores via a three-second install, just plug in the Applitools API key and go.
Visual checkpoints are a great way to ensure a UI renders correctly. Rather than a bunch of assert statements on all the UI elements—which would be a pain to maintain—one visual checkpoint checks all your page elements.
Best of all, visual AI looks at a web app the same way a human does, ignoring minor differences. This means fewer fake bugs to frustrate a development team.
### 8\. Visually test responsive web apps
When testing the visual layout of [responsive web apps][33], it's best to do it on a wide range of screen sizes (also called viewports) to ensure nothing appears out of whack. It's all too easy for responsive web bugs to creep in, and when they do, the problems can range from merely cosmetic to business stopping.
When you use visual UI testing for Selenium IDE, you can visually test your webpages on the Applitools [Visual Grid][34], which has more than 100 combinations of browsers, emulated devices, and viewport sizes.
Once tests run on the Visual Grid, developers can easily check the test results on all the various combinations.
![Selenium IDE's Visual Grid][35]
### 9\. Responsive web bugs have nowhere to hide
Selenium IDE can help pinpoint the cause of frontend bugs. Every Selenium IDE script that's run with the Visual Grid can be analyzed with Applitools' [Root Cause Analysis][36]. It's no longer enough to find a bug—developers also need to fix it.
When a visual bug is discovered, it can be clicked on and just the relevant (not all) Document Object Model (DOM) and CSS differences will be displayed.
![Finding visual bugs][37]
In summary, much like many emerging technologies in software development, Selenium IDE is part of a larger trend of making life easier and simpler for technical professionals and enabling them to spend more time and effort on creating code for even faster feedback.
* * *
_This article is based on[16 reasons why to use Selenium IDE in 2019 (and 2 why not)][38] originally published on the Applitools blog._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/features-selenium-ide
作者:[Al Sargent][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alsargent
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
[2]: https://www.seleniumhq.org/projects/webdriver/
[3]: https://www.cypress.io/
[4]: https://webdriver.io/
[5]: https://seleniumhq.wordpress.com/2017/08/09/firefox-55-and-selenium-ide/
[6]: https://seleniumhq.wordpress.com/2018/08/06/selenium-ide-tng/
[7]: https://github.com/SeleniumHQ/selenium-ide/graphs/contributors
[8]: http://davehaeffner.com/
[9]: https://opensource.com/sites/default/files/uploads/selenium_ide_github_graphic_1.png (Selenium IDE's GitHub repository)
[10]: https://chrome.google.com/webstore/detail/selenium-ide/mooikfkahbdckldjjndioackbalphokd
[11]: https://addons.mozilla.org/en-US/firefox/addon/selenium-ide/
[12]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/
[13]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/#launching-the-runner
[14]: http://chromedriver.chromium.org/
[15]: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/
[16]: https://github.com/mozilla/geckodriver
[17]: https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver
[18]: https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
[19]: https://opensource.com/sites/default/files/uploads/selenium_ide_side_runner_2.png (SIDE Runner)
[20]: https://opensource.com/sites/default/files/uploads/selenium_ide_linktext_3.png (Selenium IDE captures linkText, an xPath expression, and CSS-based locators)
[21]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/control-flow/
[22]: https://opensource.com/sites/default/files/uploads/selenium_ide_conditional_logic_4.png (Selenium IDE's Conditional logic)
[23]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/
[24]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-script
[25]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-async-script
[26]: https://opensource.com/sites/default/files/uploads/selenium_ide_command_line_5.png (Selenium IDE's command line)
[27]: https://stackoverflow.com/questions/27491789/how-to-upload-a-generic-file-into-a-jenkins-job
[28]: https://docs.travis-ci.com/user/uploading-artifacts/
[29]: https://circleci.com/docs/2.0/artifacts/
[30]: https://www.seleniumhq.org/selenium-ide/docs/en/plugins/plugins-getting-started/
[31]: https://chrome.google.com/webstore/detail/applitools-for-selenium-i/fbnkflkahhlmhdgkddaafgnnokifobik
[32]: https://addons.mozilla.org/en-GB/firefox/addon/applitools-for-selenium-ide/
[33]: https://en.wikipedia.org/wiki/Responsive_web_design
[34]: https://applitools.com/visualgrid
[35]: https://opensource.com/sites/default/files/uploads/selenium_ide_visual_grid_6.png (Selenium IDE's Visual Grid)
[36]: https://applitools.com/root-cause-analysis
[37]: https://opensource.com/sites/default/files/uploads/seleniumice_rootcauseanalysis_7.png (Finding visual bugs)
[38]: https://applitools.com/blog/why-selenium-ide-2019

View File

@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source tools for teaching young children to read)
[#]: via: (https://opensource.com/article/19/4/early-literacy-tools)
[#]: author: (Laura B. Janusek https://opensource.com/users/lbjanusek)
5 open source tools for teaching young children to read
======
Early literacy apps give kids a foundation in letter recognition,
alphabet sequencing, word finding, and more.
![][1]
Anyone who sees a child using a tablet or smartphone observes their seemingly innate ability to scroll through apps and swipe through screens, flexing those "digital native" muscles. According to [Common Sense Media][2], the percentage of US households in which 0- to 8-year-olds have access to a smartphone has grown from 52% in 2011 to 98% in 2017. While the debates around age guidelines and screen time surge, it's hard to deny that children are developing familiarity and skills with technology at an unprecedented rate.
This rise in early technical literacy may be astonishing, but what about _traditional_ literacy, the good old-fashioned ability to read? What does the intersection of early literacy development and early tech use look like? Let's explore some open source tools for early learners that may help develop both of these critical skill sets.
### Balancing risks and rewards
But first, a disclaimer: Guidelines for technology use, especially for young children, are [constantly changing][3]. Organizations like the American Academy of Pediatrics, Common Sense Media, Zero to Three, and PBS Kids are continually conducting research and publishing recommendations. One position that all of these and other organizations can agree on is that plopping a child in front of a screen with unmonitored content for an unlimited set of time is highly inadvisable.
Even setting kids up with educational content or tools for extended periods of time may have risks. And on the flip side, research on the benefits of education technologies is often limited or unavailable. In short, there are many cases in which we don't know for certain if educational technology use at a young age is beneficial, detrimental, or simply neutral.
But if screen time is available to your child or student, it's logical to infer that educational resources would be preferable over simpler pop-the-bubble or slice-the-fruit games or platforms that could house inappropriate content or online predators. While we may not be able to prove that education apps will make a child's test scores soar, we can at least take comfort in their generally being safer and more age-appropriate than the internet at large.
That said, if you're open to exploring early-education technologies, there are many reasons to look to open source options. Open source technologies are not only free but open to collaborative improvement. In many cases, they are created by developers who are educators or parents themselves, and they're a great way to avoid in-app purchases, advertisements, and paid upgrades. Open source programs can often be downloaded and installed on your device and accessed without an internet connection. Plus, the idea of [open source in education][4] is a growing trend, and there are countless resources to [learn more][5] about the concept.
But for now, let's check out some open source tools for early literacy in action!
### Childsplay
![Childsplay screenshot][6]
Let's start simple. [Childsplay][7], licensed under the GPLv2, is the most basic of the resources on this list. It's a compilation of just over a dozen educational games for young learners, four of which are specific to letter recognition, including memory games and an activity where the learner identifies a spoken letter.
### eduActiv8
![eduActiv8 screenshot][8]
[eduActiv8][9] started in 2011 as a personal project for the developer's son, "whose thirst for learning and knowledge inspired the creation of this educational program." It includes activities for building basic math and early literacy skills, including a variety of spelling, matching, and listening activities. Games include filling in missing letters in the alphabet, unscrambling letters to form a word, matching words to images, and completing mazes by connecting letters in the correct order. eduActiv8 was written in [Python][10] and is available under the GPLv3.
### GCompris
![GCompris screenshot][11]
[GCompris][12] is an open source behemoth (licensed under the GPLv3) of early educational activities. A French software engineer started it in 2000, and it now includes over 130 educational games in nearly 20 languages. Tailored for learners under age 10, it includes activities for letter recognition and drawing, alphabet sequencing, vocabulary building, and games like hangman to identify missing letters in words, plus activities for learning braille. It also includes games in math and music, plus classics from tic-tac-toe to chess.
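If you want to try it, GCompris is packaged for most distributions. As one hedged example, the Qt version is currently packaged on Fedora as gcompris-qt (verify the package name for your release):

```
$ sudo dnf install gcompris-qt
```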
### Feed the Monster
![Feed the Monster screenshot][13]
The quality of the playful "monster" graphics in [Feed the Monster][14] definitely sets it apart from the others on this list, plus it supports nearly 40 languages! The app includes activities for sorting letters to form words, memory games to match words to images, and letter-tracing writing activities. The app is developed by Curious Learning, which states: "We create, localize, distribute, and optimize open source mobile software so every child can learn to read." While Feed the Monster's offerings are geared toward early readers, Curious Learning's roadmap suggests it's headed towards a more robust personalized literacy platform growing on a foundation of research with MIT, Tufts, and Georgia State University.
### Syntax Untangler
![Syntax Untangler screenshot][15]
[Syntax Untangler][16] is the outlier of this group. Developed by a technologist at the University of Wisconsin-Madison under the GPLv2, the application is "particularly designed for training language learners to recognize and parse linguistic features." Examples show the software being used for foreign language learning, but anyone can use it to create language identification games, including games for early literacy activities like letter recognition. It could also be applied to later literacy skills, like identifying parts of speech in complex sentences or literary techniques in poetry or fiction.
### Wrapping up
Access to [literary environments][17] has been shown to impact literacy and attitudes towards reading. Why not strive to create a digital literary environment for our kids by filling our devices with educational technologies, just like our shelves are filled with books?
Now it's your turn! What open source literacy tools have you used? Comment below to share.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/early-literacy-tools
作者:[Laura B. Janusek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lbjanusek
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa
[2]: https://www.commonsensemedia.org/research/the-common-sense-census-media-use-by-kids-age-zero-to-eight-2017?action
[3]: https://www.businessinsider.com/smartphone-use-young-kids-toddlers-limits-science-2018-3
[4]: /article/18/1/best-open-education
[5]: https://opensource.com/resources/open-source-education
[6]: https://opensource.com/sites/default/files/uploads/cp_flashcards.gif (Childsplay screenshot)
[7]: http://www.childsplay.mobi/
[8]: https://opensource.com/sites/default/files/uploads/eduactiv8.jpg (eduActiv8 screenshot)
[9]: https://www.eduactiv8.org/
[10]: /article/17/11/5-approaches-learning-python
[11]: https://opensource.com/sites/default/files/uploads/gcompris2.png (GCompris screenshot)
[12]: https://gcompris.net/index-en.html
[13]: https://opensource.com/sites/default/files/uploads/feedthemonster.png (Feed the Monster screenshot)
[14]: https://www.curiouslearning.org/
[15]: https://opensource.com/sites/default/files/uploads/syntaxuntangler.png (Syntax Untangler screenshot)
[16]: https://courses.dcs.wisc.edu/untangler/
[17]: http://www.jstor.org/stable/41386459

View File

@ -0,0 +1,234 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (File sharing with Git)
[#]: via: (https://opensource.com/article/19/4/file-sharing-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
File sharing with Git
======
SparkleShare is an open source, Git-based, Dropbox-style file sharing
application. Learn more in our series about little-known uses of Git.
![][1]
[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at SparkleShare, which uses Git as the backbone for file sharing.
### Git for file sharing
One of the nice things about Git is that it's inherently distributed. It's built to share. Even if you're sharing a repository just with other computers on your own network, Git brings transparency to the act of getting files from a shared location.
As interfaces go, Git is pretty simple. It varies from user to user, but the common incantation when sitting down to get some work done is just **git pull** or maybe the slightly more complex **git pull && git checkout -b my-branch**. Still, for some people, the idea of _entering a command_ into their computer at all is confusing or bothersome. Computers are meant to make life easy, and computers are good at repetitious tasks, and so there are easier ways to share files with Git.
### SparkleShare
The [SparkleShare][3] project is a cross-platform, open source, Dropbox-style file sharing application based on Git. It automates all Git commands, triggering the add, commit, push, and pull processes with the simple act of dragging-and-dropping a file into a specially designated SparkleShare directory. Because it is based on Git, you get fast, diff-based pushes and pulls, and you inherit all the benefits of Git version control and backend infrastructure (like Git hooks). It can be entirely self-hosted, or you can use it with Git hosting services like [GitLab][4], GitHub, Bitbucket, and others. Furthermore, because it's basically just a frontend to Git, you can access your SparkleShare files on devices that may not have a SparkleShare client but do have Git clients.
Just as you get all the benefits of Git, you also get all the usual Git restrictions: It's impractical to use SparkleShare to store hundreds of photos and music and videos because Git is designed and optimized for text. Git certainly has the capability to store large files of binary data but it is designed to track history, so once a file is added to it, it's nearly impossible to completely remove it. This somewhat limits the usefulness of SparkleShare for some people, but it makes it ideal for many workflows, including [calendaring][5].
#### Installing SparkleShare
SparkleShare is cross-platform, with installers for Windows and Mac available from its [website][6]. For Linux, there's a [Flatpak][7] in your software installer, or you can run these commands in a terminal:
```
$ sudo flatpak remote-add flathub https://flathub.org/repo/flathub.flatpakrepo
$ sudo flatpak install flathub org.sparkleshare.SparkleShare
```
### Creating a Git repository
SparkleShare isn't software-as-a-service (SaaS). You run SparkleShare on your computer to communicate with a Git repository—SparkleShare doesn't store your data. If you don't have a Git repository to sync a folder with yet, you must create one before launching SparkleShare. You have three options: hosted Git, self-hosted Git, or self-hosted SparkleShare.
#### Git hosting
SparkleShare can use any Git repository you can access for storage, so if you have or create an account with GitLab or any other hosting service, it can become the backend for your SparkleShare. For example, the open source [Notabug.org][8] service is a Git hosting service like GitHub and GitLab, but unique enough to prove SparkleShare's flexibility. Creating a new repository differs from host to host depending on the user interface, but all of the major ones follow the same general model.
First, locate the button in your hosting service to create a new project or repository and click on it to begin. Then step through the repository creation process, providing a name for your repository, privacy level (repositories often default to being public), and whether or not to initialize the repository with a README file. Whether you need a README or not, enable an initial README file. Starting a repository with a file isn't strictly necessary, but it forces the Git host to instantiate a **master** branch in the repository, which helps ensure that frontend applications like SparkleShare have a branch to commit and push to. It's also useful for you to see a file, even if it's an almost empty README file, to confirm that you have connected.
![Creating a Git repository][9]
Once you've created a repository, obtain the URL it uses for SSH clones. You can get this URL the same way anyone gets any URL for a Git project: navigate to the page of the repository and look for the **Clone** button or field.
![Cloning a URL on GitHub][10]
Cloning a GitHub URL.
![Cloning a URL on GitLab][11]
Cloning a GitLab URL.
This is the address SparkleShare uses to reach your data, so make note of it. Your Git repository is now configured.
#### Self-hosted Git
You can use SparkleShare to access a Git repository on any computer you have access to. No special setup is required, aside from a bare Git repository. However, if you want to give access to your Git repository to anyone else, then you should run a Git manager like [Gitolite][12] or SparkleShare's own Dazzle server to help you manage SSH keys and accounts. At the very least, create a user specific to Git so that users with access to your Git repository don't also automatically gain access to the rest of your server.
Log into your server as the Git user (or yourself, if you're very good at managing user and group permissions) and create a repository:
```
$ mkdir ~/sparkly.git
$ cd ~/sparkly.git
$ git init --bare .
```
Your Git repository is now configured.
#### Dazzle
SparkleShare's developers provide a Git management system called [Dazzle][13] to help you self-host Git repositories.
On your server, download the Dazzle application to some location in your path:
```
$ curl https://raw.githubusercontent.com/hbons/Dazzle/master/dazzle.sh \
  --output ~/bin/dazzle
$ chmod +x ~/bin/dazzle
```
Dazzle sets up a user specific to Git and SparkleShare and also implements access rights based on keys generated by the SparkleShare application. For now, just set up a project:
```
$ dazzle create sparkly
```
Your server is now configured as a SparkleShare host.
### Configuring SparkleShare
When you launch SparkleShare for the first time, you are prompted to configure what server you want SparkleShare to use for storage. This process may feel like a first-run setup wizard, but it's actually the usual process for setting up a new shared location within SparkleShare. Unlike many shared drive applications, with SparkleShare you can have several locations configured at once. The first shared location you configure isn't any more significant than any shared location you may set up later, and you're not signing up with SparkleShare or any other service. You're just pointing SparkleShare at a Git repository so that it knows what to keep your first SparkleShare folder in sync with.
On the first screen, identify yourself by whatever means you want on record in the Git commits that SparkleShare makes on your behalf. You can use anything, even fake information that resolves to nothing. It's purely for the commit messages, which you may never even see if you have no interest in reviewing the Git backend processes.
The next screen prompts you to choose your hosting type. If you are using GitLab, GitHub, Planio, or Bitbucket, then select the appropriate one. For anything else, select **Own server**.
![Choosing a Sparkleshare host][14]
At the bottom of this screen, you must enter the SSH clone URL. If you're self-hosting, the address is something like **ssh://username@example.com** and the remote path is the absolute path to the Git repository you created for this purpose.
Based on my self-hosted examples above, the address to my imaginary server is **ssh://git@example.com:22122** (the **:22122** indicates a nonstandard SSH port) and the remote path is **/home/git/sparkly.git**.
If I use my Notabug.org account instead, the address from the example above is **[git@notabug.org][15]** and the path is **seth/sparkly.git**.
SparkleShare will fail the first time it attempts to connect to the host because you have not yet copied the SparkleShare client ID (an SSH key specific to the SparkleShare application) to the Git host. This is expected, so don't cancel the process. Leave the SparkleShare setup window open and obtain the client ID from the SparkleShare icon in your system tray. Then copy the client ID to your clipboard so you can add it to your Git host.
![Getting the client ID from Sparkleshare][16]
#### Adding your client ID to a hosted Git account
Minor UI differences aside, adding an SSH key (which is all the client ID is) is basically the same process on any hosting service. In your Git host's web dashboard, navigate to your user settings and find the **SSH Keys** category. Click the **Add New Key** button (or similar) and paste the contents of your SparkleShare client ID.
![Adding an SSH key][17]
Save the key. If you want someone else, such as collaborators or family members, to be able to access this same repository, they must provide you with their SparkleShare client ID so you can add it to your account.
#### Adding your client ID to a self-hosted Git account
A SparkleShare client ID is just an SSH key, so copy and paste it into your Git user's **~/.ssh/authorized_keys** file.
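For example, if you saved the client ID from your clipboard into a file (the name **sparkleshare-id.pub** below is just a placeholder), appending it might look like this:

```
# append the SparkleShare client ID to the Git user's authorized keys
$ cat sparkleshare-id.pub >> /home/git/.ssh/authorized_keys
# sshd's StrictModes setting requires tight permissions on these files
$ chmod 700 /home/git/.ssh
$ chmod 600 /home/git/.ssh/authorized_keys
```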
#### Adding your client ID with Dazzle
If you are using Dazzle to manage your SparkleShare projects, add a client ID with this command:
```
$ dazzle link
```
When Dazzle prompts you for the ID, paste in the client ID found in the SparkleShare menu.
### Using SparkleShare
Once you've added your client ID to your Git host, click the **Retry** button in the SparkleShare window to finish setup. When it's finished cloning your repository, you can close the SparkleShare setup window, and you'll find a new **SparkleShare** folder in your home directory. If you set up a Git repository with a hosting service and chose to include a README or license file, you can see them in your SparkleShare directory.
![Sparkleshare file manager][18]
Otherwise, there are some hidden directories, which you can see by revealing hidden directories in your file manager.
![Showing hidden files in GNOME][19]
You use SparkleShare the same way you use any directory on your computer: you put files into it. Anytime a file or directory is placed into a SparkleShare folder, it's copied in the background to your Git repository.
#### Excluding certain files
Since Git is designed to remember _everything_ , you may want to exclude specific file types from ever being recorded. There are a few reasons to manage excluded files. By defining files that are off limits for SparkleShare, you can avoid accidental copying of large files. You can also design a scheme for yourself that enables you to store files that logically belong together (MIDI files with their **.flac** exports, for instance) in one directory, but manually back up the large files yourself while letting SparkleShare back up the text-based files.
If you can't see hidden files in your system's file manager, then reveal them. Navigate to your SparkleShare folder, then to the directory representing your repository, locate a file called **.gitignore** , and open it in a text editor. You can enter file extensions or file names, one per line, into **.gitignore** , and any file matching what you list will be (as the file name suggests) ignored.
```
Thumbs.db
$RECYCLE.BIN/
.DS_Store
._*
.fseventsd
.Spotlight-V100
.Trashes
.directory
.Trash-*
*.wav
*.ogg
*.flac
*.mp3
*.m4a
*.opus
*.jpg
*.png
*.mp4
*.mov
*.mkv
*.avi
*.pdf
*.djvu
*.epub
*.od{s,t}
*.cbz
```
You know the types of files you encounter most often, so concentrate on the ones most likely to sneak their way into your SparkleShare directory. If you want to exercise a little overkill, you can find good collections of **.gitignore** files on Notabug.org and also on the internet at large.
With those entries in your **.gitignore** file, you can place large files that you don't want sent to your Git host in your SparkleShare directory, and SparkleShare will ignore them entirely. Of course, that means it's up to you to make sure they get onto a backup or distributed to your SparkleShare collaborators through some other means.
### Automation
[Automation][20] is part of the silent agreement we have with computers: they do the repetitious, boring stuff that we humans either aren't very good at doing or aren't very good at remembering. SparkleShare is a nice, simple way to automate the routine distribution of data. It isn't right for every Git repository, by any means. It doesn't have an interface for advanced Git functions; it doesn't have a pause button or a manual override. And that's OK because its scope is intentionally limited. SparkleShare does what SparkleShare sets out to do, it does it well, and it's one Git repository you won't have to think about.
If you have a use for that kind of steady, invisible automation, give SparkleShare a try.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/file-sharing-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO
[2]: https://git-scm.com/
[3]: http://www.sparkleshare.org/
[4]: http://gitlab.com
[5]: https://opensource.com/article/19/4/calendar-git
[6]: http://sparkleshare.org
[7]: /business/16/8/flatpak
[8]: http://notabug.org
[9]: https://opensource.com/sites/default/files/uploads/git-new-repo.jpg (Creating a Git repository)
[10]: https://opensource.com/sites/default/files/uploads/github-clone-url.jpg (Cloning a URL on GitHub)
[11]: https://opensource.com/sites/default/files/uploads/gitlab-clone-url.jpg (Cloning a URL on GitLab)
[12]: http://gitolite.org
[13]: https://github.com/hbons/Dazzle
[14]: https://opensource.com/sites/default/files/uploads/sparkleshare-host.jpg (Choosing a Sparkleshare host)
[15]: mailto:git@notabug.org
[16]: https://opensource.com/sites/default/files/uploads/sparkleshare-clientid.jpg (Getting the client ID from Sparkleshare)
[17]: https://opensource.com/sites/default/files/uploads/git-ssh-key.jpg (Adding an SSH key)
[18]: https://opensource.com/sites/default/files/uploads/sparkleshare-file-manager.jpg (Sparkleshare file manager)
[19]: https://opensource.com/sites/default/files/uploads/gnome-show-hidden-files.jpg (Showing hidden files in GNOME)
[20]: /downloads/ansible-quickstart

View File

@ -0,0 +1,190 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Authenticate a Linux Desktop to Your OpenLDAP Server)
[#]: via: (https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
How to Authenticate a Linux Desktop to Your OpenLDAP Server
======
![][1]
[Creative Commons Zero][2]
In this final part of our three-part series, we reach the conclusion everyone has been waiting for. The ultimate goal of using LDAP (in many cases) is enabling desktop authentication. With this setup, admins are better able to manage and control user accounts and logins. After all, Active Directory admins shouldn't have all the fun, right?
With OpenLDAP, you can manage your users on a centralized directory server and connect the authentication of every Linux desktop on your network to that server. And since you already have [OpenLDAP][3] and the [LDAP Account Manager][4] set up and running, the hard work is out of the way. At this point, there are just a few quick steps to enable those Linux desktops to authenticate with that server.
I'm going to walk you through this process, using Ubuntu Desktop 18.04 to demonstrate. If your desktop distribution is different, you'll only have to modify the installation steps, as the configuration should be similar.
**What You'll Need**
Obviously, you'll need the OpenLDAP server up and running. You'll also need user accounts created on the LDAP directory tree, and a user account on the client machines with sudo privileges. With those pieces out of the way, let's get those desktops authenticating.
**Installation**
The first thing we must do is install the necessary client software. This will be done on all the desktop machines that require authentication with the LDAP server. Open a terminal window on one of the desktop machines and issue the following command:
```
sudo apt-get install libnss-ldap libpam-ldap ldap-utils nscd -y
```
During the installation, you will be asked to enter the LDAP server URI ( **Figure 1** ).
![][5]
Figure 1: Configuring the LDAP server URI for the client.
[Used with permission][6]
The LDAP URI is the address of the OpenLDAP server, in the form ldap://SERVER_IP (where SERVER_IP is the IP address of the OpenLDAP server). Type that address, tab to OK, and press Enter on your keyboard.
In the next window ( **Figure 2** ), you are required to enter the Distinguished Name of the OpenLDAP server. This will be in the form dc=example,dc=com.
![][7]
Figure 2: Configuring the DN of your OpenLDAP server.
[Used with permission][6]
If you're unsure of what your OpenLDAP DN is, log into the LDAP Account Manager, click Tree View, and you'll see the DN listed in the left pane ( **Figure 3** ).
![][8]
Figure 3: Locating your OpenLDAP DN with LAM.
[Used with permission][6]
The next few configuration windows will require the following information:
* Specify LDAP version (select 3)
* Make local root Database admin (select Yes)
* Does the LDAP database require login (select No)
* Specify LDAP admin account suffix (this will be in the form cn=admin,dc=example,dc=com)
* Specify password for LDAP admin account (this will be the password for the LDAP admin user)
Once you've answered the above questions, the installation of the necessary bits is complete.
**Configuring the LDAP Client**
Now it's time to configure the client to authenticate against the OpenLDAP server. This is not nearly as hard as you might think.
First, we must configure nsswitch. Open the configuration file with the command:
```
sudo nano /etc/nsswitch.conf
```
In that file, add ldap at the end of the following lines:
```
passwd: compat systemd
group: compat systemd
shadow: files
```
These configuration entries should now look like:
```
passwd: compat systemd ldap
group: compat systemd ldap
shadow: files ldap
```
At the end of this section, add the following line:
```
gshadow: files
```
The entire section should now look like:
```
passwd: compat systemd ldap
group: compat systemd ldap
shadow: files ldap
gshadow: files
```
Save and close that file.
Now we need to configure PAM for LDAP authentication. Issue the command:
```
sudo nano /etc/pam.d/common-password
```
Remove use_authtok from the following line:
```
password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass
```
Save and close that file.
There's one more PAM configuration to take care of. Issue the command:
```
sudo nano /etc/pam.d/common-session
```
At the end of that file, add the following:
```
session optional pam_mkhomedir.so skel=/etc/skel umask=077
```
The above line will create the default home directory (upon first login) on the Linux desktop for any LDAP user that doesn't have a local account on the machine. Save and close that file.
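With NSS and PAM configured, you can sanity-check the setup from a terminal before rebooting; **getent** queries the same NSS databases you just edited, so directory accounts should now appear alongside local ones (the username **jdoe** below is a hypothetical LDAP account):

```
# query the passwd database through NSS; LDAP users should be listed
$ getent passwd jdoe
```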
**Logging In**
Reboot the client machine. When the login is presented, attempt to log in with a user on your OpenLDAP server. The user account should authenticate and present you with a desktop. You are good to go.
Make sure to configure every single Linux desktop on your network in the same fashion, so they too can authenticate against the OpenLDAP directory tree. By doing this, any user in the tree will be able to log into any configured Linux desktop machine on your network.
You now have an OpenLDAP server running, with the LDAP Account Manager installed for easy account management, and your Linux clients authenticating against that LDAP server.
And that, my friends, is all there is to it.
We're done.
Keep using Linux.
It's been an honor.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server
作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cyber-3400789_1280_0.jpg?itok=YiinDnTw
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804
[4]: https://www.linux.com/blog/learn/2019/3/how-install-ldap-account-manager-ubuntu-server-1804
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_1.jpg?itok=DgYT8iY1
[6]: /LICENSES/CATEGORY/USED-PERMISSION
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_2.jpg?itok=CXITs7_J
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_3.jpg?itok=HmhiYj7J

View File

@ -0,0 +1,240 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Run a server with Git)
[#]: via: (https://opensource.com/article/19/4/server-administration-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth/users/seth)
Run a server with Git
======
Thanks to Gitolite, you can manage a Git server with Git. Learn how in
our series about little-known Git uses.
![computer servers processing data][1]
As I've tried to demonstrate in this series leading up to Git's 14th anniversary on April 7, [Git][2] can do a wide range of things beyond tracking source code. Believe it or not, Git can even manage your Git server, so you can, more or less, run a Git server with Git itself.
Of course, this involves a lot of components beyond everyday Git, not the least of which is [Gitolite][3], the backend application managing the fiddly bits that you configure using Git. The great thing about Gitolite is that, because it uses Git as its frontend interface, it's easy to integrate Git server administration within the rest of your Git-based workflow. Gitolite provides precise control over who can access specific repositories on your server and what permissions they have. You can manage that sort of thing yourself with the usual Linux system tools, but it takes a lot of work if you have more than just one or two repos across a half-dozen users.
Gitolite's developers have done the hard work to make it easy for you to provide many users with access to your Git server without giving them access to your entire environment—and you can do it all with Git.
What Gitolite is _not_ is a GUI admin and user panel. That sort of experience is available with the excellent [Gitea][4] project, but this article focuses on the simple elegance and comforting familiarity of Gitolite.
### Install Gitolite
Assuming your Git server runs Linux, you can install Gitolite with your package manager ( **yum** on CentOS and RHEL, **apt** on Debian and Ubuntu, **zypper** on OpenSUSE, and so on). For example, on RHEL:
```
$ sudo yum install gitolite3
```
Many repositories still have older versions of Gitolite for legacy support, but the current version is version 3.
You must have passwordless SSH access to your server. You can use a password to log in if you prefer, but Gitolite relies on SSH keys, so you must configure the option to log in with keys. If you don't know how to configure a server for passwordless SSH access, go learn how to do that first (the [Setting up SSH key authentication][5] section of Steve Ovens's Ansible article explains it well). It's an essential part of secure server administration—as well as of running Gitolite.
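If you haven't set up key-based login yet, a minimal sketch looks like this (the account and hostname are placeholders; **ssh-copy-id** installs your public key into the server's **authorized_keys** for you):

```
# generate a key pair if you don't already have one
$ ssh-keygen -t ed25519
# install the public key on the server
$ ssh-copy-id seth@example.com
```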
### Configure a Git user
Without Gitolite, if a person requests access to a Git repository you host on a server, you have to provide that person with a user account. Git provides a special shell, the **git-shell** , which is an ultra-specific shell that performs only Git tasks. This lets you have users who can access your server only through the filter of a very limited shell environment.
That solution works, but it usually means a user gains access to all repositories on your server unless you have a very good schema for group permissions and maintain those permissions strictly whenever a new repository is created. It also requires a lot of manual configuration at the system level, an area usually reserved for a specific tier of sysadmins and not necessarily the person usually in charge of Git repositories.
Gitolite sidesteps this issue entirely by designating one username for every person who needs access to any repository. By default, the username is **git** , and because Gitolite's documentation assumes that's what is used, it's a good default to keep when you're learning the tool. It's also a well-known convention for anyone who's ever used GitLab or GitHub or any other Git hosting service.
Gitolite calls this user the _hosting user_. Create an account on your server to act as the hosting user (I'll stick with **git** because that's the convention):
```
$ sudo adduser --create-home git
```
For you to control the **git** user account, it must have a valid public SSH key that belongs to you. You should already have this set up, so **cp** your public key ( _not your private key_ ) to the **git** user's home directory:
```
$ sudo cp ~/.ssh/id_ed25519.pub /home/git/
$ sudo chown git:git /home/git/id_ed25519.pub
```
If your public key doesn't end with the extension **.pub** , Gitolite will not use it, so rename the file accordingly. Change to that user account to run Gitolite's setup:
```
$ sudo su - git
$ gitolite setup --pubkey id_ed25519.pub
```
After the setup script runs, the **git** user's home directory will have a **repositories** directory, which (for now) contains the repositories **gitolite-admin.git** and **testing.git**. That's all the setup the server requires, so log out.
### Use Gitolite
Managing Gitolite is a matter of editing text files in a Git repository, specifically **gitolite-admin.git**. You won't SSH into your server for Git administration, and Gitolite encourages you not to try. The repositories you and your users store on the Gitolite server are _bare_ repositories, so it's best to stay out of them.
```
$ git clone git@example.com:gitolite-admin.git gitolite-admin.git
$ cd gitolite-admin.git
$ ls -1
conf
keydir
```
The **conf** directory in this repository contains a file called **gitolite.conf**. Open it in a text editor or use **cat** to view its contents:
```
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
```
You may have an idea of what this configuration file does: **gitolite-admin** represents this repository, and the owner of the **id_ed25519** key has read, write, and Git administrative privileges. In other words, rather than mapping users to normal local Unix users (because all your users log in using the **git** hosting user identity), Gitolite maps users to SSH keys listed in the **keydir** directory.
The **testing.git** repository gives full permissions to everyone with access to the server using special group notation.
#### Add users
If you want to add a user called **alice** to your Git server, the person Alice must send you her public SSH key. Gitolite uses whatever is to the left of the **.pub** extension as the identifier for your Git users. Rather than using the default key name values, give keys a name indicative of the key owner. If a user has more than one key (e.g., one for her laptop, one for her desktop), you can use subdirectories to avoid file name collisions. For instance, the key Alice uses from her laptop might come to you as the default **id_rsa.pub** , so rename it **alice.pub** or similar (or let the users name the key according to their local user accounts on their computers), and place it into the **gitolite-admin.git/keydir/work/laptop/** directory. If she sends you another key from her desktop, name it **alice.pub** (the same as the previous one) and add it to **keydir/work/desktop/**. Another key might go into **keydir/home/desktop/** , and so on. Gitolite recursively searches **keydir** for a **.pub** file matching a repository "user" and treats any match as the same identity.
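For instance, placing Alice's two keys might look like this (the source paths are just placeholders for wherever her keys landed):

```
# one key per device, each named after the user
$ mkdir -p keydir/work/laptop keydir/work/desktop
$ cp /tmp/alice-laptop.pub keydir/work/laptop/alice.pub
$ cp /tmp/alice-desktop.pub keydir/work/desktop/alice.pub
```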
When you add keys to the **keydir** directory, you must commit them back to your server. This is such an easy thing to forget that there's a real argument here for using an automated Git application like [**Sparkleshare**][7] so any change is committed back to your Gitolite admin immediately. The first time you forget to commit and push—and waste three hours of your time and your user's time troubleshooting—you'll see that Gitolite is the perfect justification for using Sparkleshare.
```
$ git add keydir
$ git commit -m 'added alice-laptop-0.pub'
$ git push origin HEAD
```
Alice, by default, gains access to the **testing.git** directory so she can test connectivity and functionality with that.
#### Set permissions
As with users, directory permissions and groups are abstracted away from the normal Unix tools you might be used to (or find information about online). Permissions to projects are granted in the **gitolite.conf** file in the **gitolite-admin.git/conf** directory. There are four levels of permissions:
* **R** allows read-only. A user with **R** permissions on a repository may clone it, and that's all.
* **RW** allows a user to perform a fast-forward push of a branch, create new branches, and create new tags. More or less, this one feels like a "normal" Git repository to most users.
* **RW+** allows Git actions that are potentially destructive. A user can perform normal fast-forward pushes, as well as rewind pushes, do rebases, and delete branches and tags. This may or may not be something you want to grant to all contributors on a project.
* **-** explicitly denies access to a repository. This is essentially the same as a user not being listed in the repository's configuration.
Create a new repository or modify an existing repository's permissions by adjusting **gitolite.conf**. For instance, to give Alice permissions to administrate a new repository called **widgets.git** :
```
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
repo widgets
RW+ = alice
```
Now Alice—and Alice alone—can clone the repo:
```
[alice]$ git clone git@example.com:widgets.git
Cloning into 'widgets'...
warning: You appear to have cloned an empty repository.
```
On her initial push, Alice must use the **-u** option to send her branch to the empty repository (as she would have to do with any Git host).
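That first push might look something like this (a sketch; the branch name depends on what Alice created locally):

```
# -u records the upstream so later pushes and pulls need no arguments
[alice]$ git push -u origin master
```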
To make user management easier, you can define groups of repositories:
```
@qtrepo = widgets
@qtrepo = games
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
repo @qtrepo
RW+ = alice
```
Just as you can create group repositories, you can group users. One user group exists by default: **@all**. As you might expect, it includes all users, without exception. You can create your own:
```
@qtrepo = widgets
@qtrepo = games
@developers = alice bob
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
repo @qtrepo
RW+ = @developers
```
As with adding or modifying key files, any change to the **gitolite.conf** file must be committed and pushed to take effect.
### Create a repository
By default, Gitolite assumes repository creation happens from the top down. For instance, a project manager with access to the Git server creates a project repository and, through the Gitolite administration repo, adds developers.
In practice, you might prefer to grant users permission to create repositories. Gitolite calls these "wild repos" (I'm not sure whether that's commentary on how the repos come into being or a reference to the wildcard characters required by the configuration file to let it happen). Here's an example:
```
@managers = alice bob
repo foo/CREATOR/[a-z]..*
C = @managers
RW+ = CREATOR
RW = WRITERS
R = READERS
```
The first line defines a group of users: the group is called **@managers** and contains users **alice** and **bob**. The next line sets up a wildcard allowing repositories that do not yet exist to be created in a directory called **foo** followed by a subdirectory named for the user creating the repo. For example:
```
[alice]$ git clone git@example.com:foo/alice/cool-app.git
Cloning into 'cool-app'...
Initialized empty Git repository in /home/git/repositories/foo/alice/cool-app.git
warning: You appear to have cloned an empty repository.
```
There are some mechanisms for the creator of a wild repo to define who can read and write to their repository, but they're limited in scope. For the most part, Gitolite assumes that a specific set of users governs project permission. One solution is to grant all users access to **gitolite-admin** using a Git hook to require manager approval to merge changes into the master branch.
### Learn more
Gitolite has many more features than what this introductory article covers, so try it out. The [documentation][8] is excellent, and once you read through it, you can customize your Gitolite server to provide your users whatever level of control you are comfortable with. Gitolite is a low-maintenance, simple system that you can install, set up, and then more or less forget about.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/server-administration-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data)
[2]: https://git-scm.com/
[3]: http://gitolite.com
[4]: http://gitea.io
[5]: Setting%20up%20SSH%20key%20authentication
[7]: https://opensource.com/article/19/4/file-sharing-git
[8]: http://gitolite.com/gitolite/quick_install.html

View File

@ -0,0 +1,247 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
Manage multimedia files with Git
======
Learn how to use Git to track large multimedia files in your projects in
the final article in our series on little-known uses of Git.
![video editing dashboard][1]
Git is very specifically designed for source code version control, so it's rarely embraced by projects and industries that don't primarily work in plaintext. However, the advantages of an asynchronous workflow are appealing, especially in the ever-growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that's a real industry), education… the list goes on and on.
In this series leading up to Git's 14th anniversary, we've shared six little-known ways to use Git. In this final article, we'll look at software that brings the advantages of Git to managing multimedia files.
### The problem with managing multimedia files with Git
It seems to be common knowledge that Git doesn't work well with non-text files, but it never hurts to challenge assumptions. Here's an example of copying a photo file using Git:
```
$ du -hs
108K .
$ cp ~/photos/dandelion.tif .
$ git add dandelion.tif
$ git commit -m 'added a photo'
[master (root-commit) fa6caa7] added a photo
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 dandelion.tif
$ du -hs
1.8M .
```
Nothing unusual so far; adding a 1.8MB photo to a directory results in a directory 1.8MB in size. So, let's try removing the file:
```
$ git rm dandelion.tif
$ git commit -m 'deleted a photo'
$ du -hs
828K .
```
You can see the problem here: Removing a large file after it's been committed increases a repository's size to roughly eight times its original, barren state (from 108K to 828K). You can perform tests to get a better average, but this simple demonstration is consistent with my experience. The cost of committing files that aren't text-based is minimal at first, but the longer a project stays active, the more changes people make to static content, and the more those fractions start to add up. When a Git repository becomes very large, the major cost is usually speed. The time to perform pulls and pushes goes from being how long it takes to take a sip of coffee to how long it takes to wonder if your computer got kicked off the network.
The reason static content causes Git to grow in size is that formats based on text allow Git to pull out just the parts that have changed. Raster images and music files make as much sense to Git as they would to you if you looked at the binary data contained in a .png or .wav file. So Git just takes all the data and makes a new copy of it, even if only one pixel changes from one photo to the next.
### Git-portal
In practice, many multimedia projects don't need or want to track the media's history. The media part of a project tends to have a different lifecycle than the text or code part of a project. Media assets generally progress in one direction: a picture starts as a pencil sketch, proceeds toward its destination as a digital painting, and, even if the text is rolled back to an earlier version, the art continues its forward progress. It's rare for media to be bound to a specific version of a project. The exceptions are usually graphics that reflect datasets—usually tables or graphs or charts—that can be done in text-based formats such as SVG.
So, on many projects that involve both media and text (whether it's narrative prose or code), Git is an acceptable solution to file management, as long as there's a playground outside the version control cycle for artists to play in.
![Graphic showing relationship between art assets and Git][2]
A simple way to enable that is [Git-portal][3], a Bash script armed with Git hooks that moves your asset files to a directory outside Git's purview and replaces them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all you commit are your text files and whatever symlinks represent your media assets. Because the replacement files are symlinks, your project continues to function as expected because your local machine follows the symlinks to their "real" counterparts. Git-portal maintains a project's directory structure when it swaps out a file with a symlink, so it's easy to reverse the process, should you decide that Git-portal isn't right for your project or you need to build a version of your project without symlinks (for distribution, for instance).
Git-portal also allows remote synchronization of assets over rsync, so you can set up a remote storage location as a centralized source of authority.
Git-portal is ideal for multimedia projects, including video game and tabletop game design, virtual reality projects with big 3D model renders and textures, [books][4] with graphics and .odt exports, collaborative [blog websites][5], music projects, and much more. It's not uncommon for an artist to perform versioning in their application—in the form of layers (in the graphics world) and tracks (in the music world)—so Git adds nothing to multimedia project files themselves. The power of Git is leveraged for other parts of artistic projects (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is leveraged by the artists.
#### Install Git-portal
There are RPM packages for Git-portal located at <https://klaatu.fedorapeople.org/git-portal>, which you can download and install.
Alternately, you can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where to install itself:
```
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
$ cd git-portal.clone
$ ./configure
$ make
$ sudo make install
```
#### Use Git-portal
Git-portal is used alongside Git. This means, as with all large-file extensions to Git, there are some added steps to remember. But you only need Git-portal when dealing with your media assets, so it's pretty easy to remember unless you've acclimated yourself to treating large files the same as text files (which is rare for Git users). There's one setup step you must do to use Git-portal in a project:
```
$ mkdir bigproject.git
$ cd !$
$ git init
$ git-portal init
```
Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to your .gitignore file.
Using Git-portal in a daily routine integrates smoothly with Git. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data:
```
$ ls -1
_portal
song.1.qtr
song.qtr
song-Track_1-1.mid
song-Track_1-3.mid
song-Track_2-1.mid
$ git add song*qtr
$ git-portal song-Track*mid
$ git add song-Track*mid
```
If you look into the **_portal** directory, you'll find the original MIDI files. The files in their place are symlinks to **_portal** , which keeps the music workstation working as expected:
```
$ ls -lG
[...] _portal/
[...] song.1.qtr
[...] song.qtr
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
```
As with Git, you can also add a directory of files:
```
$ cp -r ~/synth-presets/yoshimi .
$ git-portal add yoshimi
Directories cannot go through the portal. Sending files instead.
$ ls -lG yoshimi
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
```
Removal works as expected, but when removing something in **_portal** , you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is removed from **_portal** :
```
$ ls
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
$ git-portal rm song-Track_1-3.mid
rm 'song-Track_1-3.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
```
If you forget to use Git-portal, then you have to remove the portal file manually:
```
$ git rm song-Track_1-1.mid
rm 'song-Track_1-1.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
$ trash _portal/song-Track_1-1.mid
```
Git-portal's only other function is to list all current symlinks and find any that may have become broken, which can sometimes happen if files move around in a project directory:
```
$ mkdir foo
$ mv yoshimi foo
$ git-portal status
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
```
If you're using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add in collaborators or you want Git-portal to manage backups the way (more or less) Git does, you can add a remote.
#### Add Git-portal remotes
Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in your repository's .git directory, to look at your remotes for any that begin with **_portal**. If it finds one, it attempts to **rsync** to the remote location and synchronize files. Git-portal performs this action anytime you do a Git push or a Git merge (or pull, which is really just a fetch and an automatic merge).
If you've only cloned Git repositories, then you may never have added a remote yourself. It's a standard Git procedure:
```
$ git remote add origin git@gitdawg.com:seth/bigproject.git
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
```
The name **origin** is a popular convention for your main Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server because gigabytes of media assets are unlikely to be accepted by a Git host with limited space. Or maybe you're on a server that permits you to access only your Git repository and not external storage directories:
```
$ git remote add _portal seth@example.com:/home/seth/git/bigproject_portal
$ git remote -v
origin  git@gitdawg.com:seth/bigproject.git (fetch)
origin  git@gitdawg.com:seth/bigproject.git (push)
_portal seth@example.com:/home/seth/git/bigproject_portal (fetch)
_portal seth@example.com:/home/seth/git/bigproject_portal (push)
```
You may not want to give all of your users individual accounts on your server, and you don't have to. To provide access to the server hosting a repository's large file assets, you can run a Git frontend like **[Gitolite][8]** , or you can use **rrsync** (i.e., restricted rsync).
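A minimal sketch of the **rrsync** approach, assuming the script is installed at **/usr/bin/rrsync** (distributions sometimes ship it in rsync's documentation directory instead), is a forced command on the collaborator's key in the hosting user's **authorized_keys**; the key and paths below are placeholders:

```
# restrict this key to rsync, rooted at the portal directory
$ echo 'command="/usr/bin/rrsync /home/seth/git/bigproject_portal",no-pty ssh-ed25519 AAAA... alice@laptop' >> ~/.ssh/authorized_keys
```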
Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal:
```
$ git push origin HEAD
master destination detected
Syncing _portal content...
sending incremental file list
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
total size is 60,358,015 speedup is 6,474.10
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
```
If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to require commits for artistic changes. It integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity.
### Other options
If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to manage a .gitattributes file to update which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS remotes for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync.
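For reference, the basic LFS workflow looks roughly like this (a sketch; the file pattern is just an example):

```
# one-time setup per user account
$ git lfs install
# tell LFS which patterns to manage; this updates .gitattributes
$ git lfs track "*.psd"
$ git add .gitattributes
```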
A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the parts about the deprecated git-media, as its former flexibility doesn't apply to its successor, Git LFS). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12].
If, however, your needs are simple and you like a solution that utilizes existing technology to do simple and obvious tasks, Git-portal might be the tool for the job.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/manage-multimedia-files-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
[3]: http://gitlab.com/slackermedia/git-portal.git
[4]: https://www.apress.com/gp/book/9781484241691
[5]: http://mixedsignals.ml
[8]: https://opensource.com/article/19/4/file-sharing-git
[9]: https://git-lfs.github.com/
[10]: https://git-annex.branchable.com/
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
[12]: https://git-annex.branchable.com/walkthrough/

View File

@ -0,0 +1,123 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What it means to be Cloud-Native approach — the CNCF way)
[#]: via: (https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923)
[#]: author: (Sonu Jose https://medium.com/@sonujose993)
What it means to be Cloud-Native approach — the CNCF way
======
![](https://cdn-images-1.medium.com/max/2400/0*YknjM7T_Pxwz9deR)
When discussing digital transformation and modern application development, cloud-native is a term that frequently comes up. But what does it actually mean to be cloud-native? This blog is all about giving a good understanding of the cloud-native approach and the ways to achieve it the CNCF way.
Michael Dell once said that "the cloud isn't a place, it's a way of doing IT". He was right, and the same can be said of cloud-native.
Cloud-native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. Cloud-native is about how applications are created and deployed, not where. … It's appropriate for both public and private clouds.
Cloud native architectures take full advantage of on-demand delivery, global deployment, elasticity, and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization, and cost savings.
### CNCF (Cloud native computing foundation)
Google has been using containers for many years, and it led the Kubernetes project, which is a leading container orchestration platform. But alone, it couldn't really change the broad perspective in the industry around modern applications. So there was a huge need for industry leaders to come together and solve the major problems facing the modern approach. In order to achieve this broader vision, Google donated Kubernetes to the newly formed Cloud Native Computing Foundation; this led to the birth of CNCF in 2015.
![](https://cdn-images-1.medium.com/max/1200/1*S1V9R_C_rjLVlH3M8dyF-g.png)
The Cloud Native Computing Foundation was created under the Linux Foundation for building and managing platforms and solutions for modern application development. It really is a home for amazing projects that enable modern application development. CNCF defines cloud-native as "scalable applications" running in "modern dynamic environments" that use technologies such as containers, microservices, and declarative APIs. Kubernetes is the world's most popular container-orchestration platform and the first CNCF project.
### The approach…
CNCF created a trail map to make the concept of the cloud-native approach easier to understand, and this article is organized around that map. The newer version is available at https://landscape.cncf.io/
The Cloud Native Trail Map is CNCF's recommended path through the cloud-native landscape. It doesn't define a single path for approaching digital transformation; rather, there are many possible paths you can follow to align with this concept based on your business scenario. It is just a trail to simplify the journey to cloud-native.
Let's start discussing the steps defined in this trail map.
### 1. CONTAINERIZATION
![][1]
You can't do cloud-native without containerizing your application. It doesn't matter what size the application is; any type of application will do. **A container is a standard unit of software that packages up the code and all its dependencies** so the application runs quickly and reliably from one computing environment to another. Docker is the most widely used platform for containerization. A **Docker container** image is a lightweight, standalone, executable package of software that includes everything needed to run an application.
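As a minimal sketch, building and running a containerized application with Docker looks like this (the image name and port are placeholders):

```
# build an image from the Dockerfile in the current directory
$ docker build -t myapp:1.0 .
# run it in the background, mapping container port 8080 to the host
$ docker run -d -p 8080:8080 myapp:1.0
```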
### 2. CI/CD
![][2]
Set up Continuous Integration/Continuous Delivery (CI/CD) so that changes to your source code automatically result in a new container being built, tested, and deployed to staging and eventually, perhaps, to production. The next things to set up are automated rollouts and rollbacks, as well as testing. There are a lot of platforms for CI/CD: **Jenkins, VSTS, Azure DevOps**, TeamCity, JFrog, Spinnaker, etc.
### 3. ORCHESTRATION
![][3]
Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. Software teams use container orchestration to control and automate many tasks. **Kubernetes** is the market-leading orchestration solution. There are other orchestrators like Docker Swarm, Mesos, etc. **Helm Charts** help you define, install, and upgrade even the most complex Kubernetes applications.
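For example, a rough sketch of running the container image from the previous step on Kubernetes (names are, again, placeholders):

```
# create a Deployment from the image and scale it out
$ kubectl create deployment myapp --image=registry.example.com/myapp:1.0
$ kubectl scale deployment myapp --replicas=3
# expose it inside the cluster on port 8080
$ kubectl expose deployment myapp --port=8080
```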
### 4. OBSERVABILITY & ANALYSIS
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. Kubernetes also provides detailed information about an application's resource usage at each of these levels. This information allows you to evaluate your application's performance and see where bottlenecks can be removed to improve overall performance.
![][4]
Pick solutions for monitoring, logging, and tracing. Consider the CNCF projects Prometheus for monitoring and Fluentd for logging. For tracing, look for an OpenTracing-compatible implementation like Jaeger.
### 5. SERVICE MESH
As its name says, a service mesh is all about connecting services: **service discovery**, **health checking**, **routing**, and **monitoring ingress** from the internet. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
![][5]
**Istio** provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. **CoreDNS** is a fast and flexible tool that is useful for service discovery. **Envoy** and **Linkerd** each enable service mesh architectures.
### 6. NETWORKING AND POLICY
It is really important to enable a more flexible networking layer. To do so, use a CNI-compliant network project like Calico, Flannel, or Weave Net. Open Policy Agent (OPA) is a general-purpose policy engine with uses ranging from authorization and admission control to data filtering.
### 7. DISTRIBUTED DATABASE
A distributed database is a database in which not all storage devices are attached to a common processor. It may be stored in multiple computers located in the same physical location, or it may be dispersed over a network of interconnected computers.
![][6]
When you need more resiliency and scalability than you can get from a single database, **Vitess** is a good option for running MySQL at scale through sharding. Rook is a storage orchestrator that integrates a diverse set of storage solutions into Kubernetes. Serving as the "brain" of Kubernetes, etcd provides a reliable way to store data across a cluster of machines.
### 8. MESSAGING
When you need higher performance than JSON-REST, consider using gRPC or NATS. gRPC is a universal RPC framework. NATS is a multi-modal messaging system that includes request/reply, pub/sub, and load-balanced queues. It also covers newer use cases like IoT.
### 9. CONTAINER REGISTRY & RUNTIMES
A container registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. There are many container registries available on the market: Docker Hub, Azure Container Registry, Harbor, Nexus Registry, Amazon Elastic Container Registry, and many more.
![][7]
The container runtime **containerd** is available as a daemon for Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage, network attachments, and beyond.
### 10. SOFTWARE DISTRIBUTION
If you need to do secure software distribution, evaluate Notary, an implementation of The Update Framework (TUF).
TUF provides a framework (a set of libraries, file formats, and utilities) that can be used to secure new and existing software update systems. The framework should enable applications to be secure from all known attacks on the software update process. It is not concerned with exposing information about what software is being updated (and thus what software the client may be running) or the contents of updates.
--------------------------------------------------------------------------------
via: https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923
作者:[Sonu Jose][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://medium.com/@sonujose993
[b]: https://github.com/lujun9972
[1]: https://cdn-images-1.medium.com/max/1200/1*glD7bNJG3SlO0_xNmSGPcQ.png
[2]: https://cdn-images-1.medium.com/max/1600/1*qOno8YNzmwimlaL9j2fSbA.png
[3]: https://cdn-images-1.medium.com/max/1200/1*fw8YJnfF32dWsX_beQpWOw.png
[4]: https://cdn-images-1.medium.com/max/1600/1*sbjPYNq76s9lR7D_FK4ltg.png
[5]: https://cdn-images-1.medium.com/max/1600/1*kUFBuGfjZSS-n-32CCjtwQ.png
[6]: https://cdn-images-1.medium.com/max/1600/1*4OGiB3HHQZBFsALjaRb9pA.jpeg
[7]: https://cdn-images-1.medium.com/max/1600/1*VMCJN41mGZs4p2lQHD0nDw.png

View File

@ -0,0 +1,352 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginner's guide to building DevOps pipelines with open source tools)
[#]: via: (https://opensource.com/article/19/4/devops-pipeline)
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter)
A beginner's guide to building DevOps pipelines with open source tools
======
If you're new to DevOps, check out this five-step process for building
your first pipeline.
![Shaking hands, networking][1]
DevOps has become the default answer to fixing software development processes that are slow, siloed, or otherwise dysfunctional. But that doesn't mean very much when you're new to DevOps and aren't sure where to begin. This article explores what a DevOps pipeline is and offers a five-step process to create one. While this tutorial is not comprehensive, it should give you a foundation to start on and expand later. But first, a story.
### My DevOps journey
I used to work for the cloud team at Citi Group, developing an Infrastructure-as-a-Service (IaaS) web application to manage Citi's cloud infrastructure, but I was always interested in figuring out ways to make the development pipeline more efficient and bring positive cultural change to the development team. I found my answer in a book recommended by Greg Lavender, who was the CTO of Citi's cloud architecture and infrastructure engineering, called _[The Phoenix Project][2]_. The book reads like a novel while it explains DevOps principles.
A table at the back of the book shows how often different companies deploy to the release environment:
Company | Deployment Frequency
---|---
Amazon | 23,000 per day
Google | 5,500 per day
Netflix | 500 per day
Facebook | 1 per day
Twitter | 3 per week
Typical enterprise | 1 every 9 months
How are the frequency rates of Amazon, Google, and Netflix even possible? It's because these companies have figured out how to make a nearly perfect DevOps pipeline.
This definitely wasn't the case before we implemented DevOps at Citi. Back then, my team had different staged environments, but deployments to the development server were very manual. All developers had access to just one development server based on IBM WebSphere Application Server Community Edition. The problem was the server went down whenever multiple users simultaneously tried to make deployments, so the developers had to let each other know whenever they were about to make a deployment, which was quite a pain. In addition, there were problems with low code test coverages, cumbersome manual deployment processes, and no way to track code deployments with a defined task or a user story.
I realized something had to be done, and I found a colleague who felt the same way. We decided to collaborate to build an initial DevOps pipeline—he set up a virtual machine and a Tomcat application server while I worked on Jenkins, integrating with Atlassian Jira and BitBucket, and code testing coverages. This side project was hugely successful: we almost fully automated the development pipeline, we achieved nearly 100% uptime on our development server, we could track and improve code testing coverage, and the Git branch could be associated with the deployment and Jira task. And most of the tools we used to construct our DevOps pipeline were open source.
I now realize how rudimentary our DevOps pipeline was, as we didn't take advantage of advanced configurations like Jenkins files or Ansible. However, this simple process worked well, maybe due to the [Pareto][3] principle (also known as the 80/20 rule).
### A brief introduction to DevOps and the CI/CD pipeline
If you ask several people, "What is DevOps?" you'll probably get several different answers. DevOps, like agile, has evolved to encompass many different disciplines, but most people will agree on a few things: DevOps is a software development practice or a software development lifecycle (SDLC) and its central tenet is cultural change, where developers and non-developers all breathe in an environment where formerly manual things are automated; everyone does what they are best at; the number of deployments per period increases; throughput increases; and flexibility improves.
While having the right software tools is not the only thing you need to achieve a DevOps environment, some tools are necessary. A key one is continuous integration and continuous deployment (CI/CD). This pipeline is where the environments have different stages (e.g., DEV, INT, TST, QA, UAT, STG, PROD), manual things are automated, and developers can achieve high-quality code, flexibility, and numerous deployments.
This article describes a five-step approach to creating a DevOps pipeline, like the one in the following diagram, using open source tools.
![Complete DevOps pipeline][4]
Without further ado, let's get started.
### Step 1: CI/CD framework
The first thing you need is a CI/CD tool. Jenkins, an open source, Java-based CI/CD tool based on the MIT License, is the tool that popularized the DevOps movement and has become the de facto standard.
So, what is Jenkins? Imagine it as a sort of magical universal remote control that can talk to many, many different services and tools and orchestrate them. On its own, a CI/CD tool like Jenkins is useless, but it becomes more powerful as it plugs into different tools and services.
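To make that concrete, here is a minimal, hedged sketch of nudging Jenkins over its REST API from Python with the `requests` library; the URL, job name, and credentials are placeholders you would replace with your own:

```
import requests

JENKINS_URL = "http://localhost:8080"  # assumed local Jenkins instance
JOB_NAME = "my-app-build"              # hypothetical job name

# POST to /job/<name>/build queues a build; authenticating with a user's
# API token normally exempts the request from Jenkins' CSRF crumb check.
response = requests.post(
    f"{JENKINS_URL}/job/{JOB_NAME}/build",
    auth=("admin", "my-api-token"),    # placeholder credentials
)
response.raise_for_status()            # Jenkins replies 201 when queued
print("Build queued:", response.status_code)
```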
Jenkins is just one of many open source CI/CD tools that you can leverage to build a DevOps pipeline.
Name | License
---|---
[Jenkins][5] | Creative Commons and MIT
[Travis CI][6] | MIT
[CruiseControl][7] | BSD
[Buildbot][8] | GPL
[Apache Gump][9] | Apache 2.0
[Cabie][10] | GNU
Here's what a DevOps process looks like with a CI/CD tool.
![CI/CD tool][11]
You have a CI/CD tool running on your localhost, but there is not much you can do with it at the moment. Let's follow the next step of the DevOps journey.
### Step 2: Source control management
The best (and probably the easiest) way to verify that your CI/CD tool can perform some magic is by integrating it with a source control management (SCM) tool. Why do you need source control? Suppose you are developing an application. Whenever you build an application, you are programming—whether you are using Java, Python, C++, Go, Ruby, JavaScript, or any of the gazillion programming languages out there. The code you write is called source code. In the beginning, especially when you are working alone, it's probably OK to put everything in your local directory. But when the project gets bigger and you invite others to collaborate, you need a way to avoid merge conflicts while effectively sharing the code modifications. You also need a way to recover a previous version—and the process of making a backup and copying-and-pasting gets old. You (and your teammates) want something better.
This is where SCM becomes almost a necessity. An SCM tool helps by storing your code in repositories, versioning your code, and coordinating among project members.
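As a quick illustration of those basics, here is a small Python sketch that drives the `git` command line through the standard library; it assumes `git` is installed, and the repository, file, and commit names are invented:

```
import pathlib
import subprocess

def git(*args):
    """Run a git command and fail loudly if it errors."""
    subprocess.run(["git", *args], check=True)

git("init", "demo-repo")                              # create a repository
pathlib.Path("demo-repo/app.py").write_text('print("hello")\n')
git("-C", "demo-repo", "add", "app.py")               # stage the change
git("-C", "demo-repo",
    "-c", "user.name=Dev", "-c", "user.email=dev@example.com",
    "commit", "-m", "Add initial application code")   # record a version
git("-C", "demo-repo", "log", "--oneline")            # history is recoverable
```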
Although there are many SCM tools out there, Git is the standard and rightly so. I highly recommend using Git, but there are other open source options if you prefer.
Name | License
---|---
[Git][12] | GPLv2 & LGPL v2.1
[Subversion][13] | Apache 2.0
[Concurrent Versions System][14] (CVS) | GNU
[Vesta][15] | LGPL
[Mercurial][16] | GNU GPL v2+
Here's what the DevOps pipeline looks like with the addition of SCM.
![Source control management][17]
The CI/CD tool can automate the tasks of checking source code in and out and coordinating across team members. Not bad, right? But how can you make this into a working application so billions of people can use and appreciate it?
### Step 3: Build automation tool
Excellent! You can check out the code and commit your changes to source control, and you can invite your friends to collaborate through source control. But you haven't yet built an application. To make it a web application, it has to be compiled and put into a deployable package format or run as an executable. (Note that an interpreted programming language like JavaScript or PHP doesn't need to be compiled.)
Enter the build automation tool. No matter which build tool you decide to use, all build automation tools have a shared goal: to build the source code into some desired format and to automate the task of cleaning, compiling, testing, and deploying to a certain location. The build tools will differ depending on your programming language, but here are some common open source options to consider.
Name | License | Programming Language
---|---|---
[Maven][18] | Apache 2.0 | Java
[Ant][19] | Apache 2.0 | Java
[Gradle][20] | Apache 2.0 | Java
[Bazel][21] | Apache 2.0 | Java
[Make][22] | GNU | N/A
[Grunt][23] | MIT | JavaScript
[Gulp][24] | MIT | JavaScript
[Buildr][25] | Apache | Ruby
[Rake][26] | MIT | Ruby
[A-A-P][27] | GNU | Python
[SCons][28] | MIT | Python
[BitBake][29] | GPLv2 | Python
[Cake][30] | MIT | C#
[ASDF][31] | Expat (MIT) | LISP
[Cabal][32] | BSD | Haskell
Awesome! You can put your build automation tool configuration files into your source control management and let your CI/CD tool build it.
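Here is a rough Python sketch of what the CI/CD tool does behind the scenes at this stage; the repository URL is a placeholder, and Maven stands in for whichever build tool fits your language:

```
import subprocess

# Fetch the project -- the URL is a placeholder for your own repository.
subprocess.run(["git", "clone", "https://example.com/my-app.git"], check=True)

# "clean package" wipes old artifacts, compiles, runs the tests, and
# produces the deployable .jar/.war under target/, as declared in pom.xml.
subprocess.run(["mvn", "clean", "package"], cwd="my-app", check=True)
```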
![Build automation tool][33]
Everything is good, right? But where can you deploy it?
### Step 4: Web application server
So far, you have a packaged file that might be executable or deployable. For any application to be truly useful, it has to provide some kind of service or interface, and you need a vessel to host your application.
For a web application, a web application server is that vessel. An application server offers an environment where the programming logic inside the deployable package can be detected, the interface rendered, and web services offered by opening sockets to the outside world. You need an HTTP server as well as some other environment (like a virtual machine) to install your application server. For now, let's assume you will learn about this along the way (although I will discuss containers below).
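As a tiny sketch of what an application server hosts, here is a minimal WSGI application in Python; any WSGI-speaking server (Gunicorn from the table below, for instance) can serve it with a command like `gunicorn app:application`:

```
# app.py
def application(environ, start_response):
    """The WSGI entry point that the application server detects and calls."""
    body = b"Hello from inside an application server!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```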
There are a number of open source web application servers available.
Name | License | Programming Language
---|---|---
[Tomcat][34] | Apache 2.0 | Java
[Jetty][35] | Apache 2.0 | Java
[WildFly][36] | GNU Lesser Public | Java
[GlassFish][37] | CDDL & GNU Lesser Public | Java
[Django][38] | 3-Clause BSD | Python
[Tornado][39] | Apache 2.0 | Python
[Gunicorn][40] | MIT | Python
[Python Paste][41] | MIT | Python
[Rails][42] | MIT | Ruby
[Node.js][43] | MIT | JavaScript
Now the DevOps pipeline is almost usable. Good job!
![Web application server][44]
Although it's possible to stop here and integrate further on your own, code quality is an important thing for an application developer to be concerned about.
### Step 5: Code testing coverage
Implementing code tests can be another cumbersome requirement, but developers need to catch any errors in an application early on and improve the code quality to ensure end users are satisfied. Luckily, there are many open source tools available to test your code and suggest ways to improve its quality. Even better, most CI/CD tools can plug into these tools and automate the process.
There are two parts to code testing: _code testing frameworks_ that help write and run the tests, and _code quality suggestion tools_ that help improve code quality.
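As a minimal sketch of the first part, here is a pytest-style unit test; the function under test is invented for illustration, and the commands in the trailing comment assume pytest and Coverage.py are installed:

```
# test_calculator.py -- the function under test is invented for illustration
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Run the tests, and measure how much code they exercise, with:
#   pytest
#   coverage run -m pytest && coverage report
```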
#### Code test frameworks
Name | License | Programming Language
---|---|---
[JUnit][45] | Eclipse Public License | Java
[EasyMock][46] | Apache | Java
[Mockito][47] | MIT | Java
[PowerMock][48] | Apache 2.0 | Java
[Pytest][49] | MIT | Python
[Hypothesis][50] | Mozilla | Python
[Tox][51] | MIT | Python
#### Code quality suggestion tools
Name | License | Programming Language
---|---|---
[Cobertura][52] | GNU | Java
[CodeCover][53] | Eclipse Public (EPL) | Java
[Coverage.py][54] | Apache 2.0 | Python
[Emma][55] | Common Public License | Java
[JaCoCo][56] | Eclipse Public License | Java
[Hypothesis][50] | Mozilla | Python
[Tox][51] | MIT | Python
[Jasmine][57] | MIT | JavaScript
[Karma][58] | MIT | JavaScript
[Mocha][59] | MIT | JavaScript
[Jest][60] | MIT | JavaScript
Note that most of the tools and frameworks mentioned above are written for Java, Python, and JavaScript; much of the C++ and C# tooling is proprietary (although GCC is open source).
Now that you've implemented code testing coverage tools, your DevOps pipeline should resemble the DevOps pipeline diagram shown at the beginning of this tutorial.
### Optional steps
#### Containers
As I mentioned above, you can host your application server on a virtual machine or a server, but containers are a popular solution.
[What are containers?][61] The short explanation is that a VM needs the huge footprint of an operating system, which overwhelms the application size, while a container just needs a few libraries and configurations to run the application. There are clearly still important uses for a VM, but a container is a lightweight solution for hosting an application, including an application server.
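As a small, hedged illustration of that lightweight model, here is a Python sketch using the Docker SDK for Python (`pip install docker`); it assumes a local Docker daemon is running:

```
import docker  # the Docker SDK for Python

client = docker.from_env()  # talks to the local Docker daemon

# Pulls the image if necessary, starts a container, runs one command,
# and returns its output once the container exits.
output = client.containers.run("alpine:latest", ["echo", "hello from a container"])
print(output.decode("utf-8"))
```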
Although there are other options for containers, Docker and Kubernetes are the most popular.
Name | License
---|---
[Docker][62] | Apache 2.0
[Kubernetes][63] | Apache 2.0
To learn more, check out these other [Opensource.com][64] articles about Docker and Kubernetes:
* [What Is Docker?][65]
* [An introduction to Docker][66]
* [What is Kubernetes?][67]
* [From 0 to Kubernetes][68]
#### Middleware automation tools
Our DevOps pipeline has mostly focused on collaboratively building and deploying an application, but there are many other things you can do with DevOps tools. One of them is leveraging Infrastructure as Code (IaC) tools, which are also known as middleware automation tools. These tools help automate the installation, management, and other tasks for middleware software. For example, an automation tool can pull applications, like a web application server, database, and monitoring tool, with the right configurations and deploy them to target servers.
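As a minimal sketch, a pipeline step can hand control to such a tool from Python; the inventory and playbook names below are made up, and it assumes Ansible is installed:

```
import subprocess

# inventory.ini lists the target hosts; site.yml declares the desired
# middleware state (packages, configs, services). Both names are made up.
subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "site.yml"],
    check=True,  # fail the pipeline if any host fails to converge
)
```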
Here are several open source middleware automation tools to consider:
Name | License
---|---
[Ansible][69] | GNU Public
[SaltStack][70] | Apache 2.0
[Chef][71] | Apache 2.0
[Puppet][72] | Apache or GPL
For more on middleware automation tools, check out these other [Opensource.com][64] articles:
* [A quickstart guide to Ansible][73]
* [Automating deployment strategies with Ansible][74]
* [Top 5 configuration management tools][75]
### Where can you go from here?
This is just the tip of the iceberg for what a complete DevOps pipeline can look like. Start with a CI/CD tool and explore what else you can automate to make your team's job easier. Also, look into [open source communication tools][76] that can help your team work better together.
For more insight, here are some very good introductory articles about DevOps:
* [What is DevOps][77]
* [5 things to master to be a DevOps engineer][78]
* [DevOps is for everyone][79]
* [Getting started with predictive analytics in DevOps][80]
Integrating DevOps with open source agile tools is also a good idea:
* [What is agile?][81]
* [4 steps to becoming an awesome agile developer][82]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/devops-pipeline
作者:[Bryant Son (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/network_team_career_hand.png?itok=_ztl2lk_ (Shaking hands, networking)
[2]: https://www.amazon.com/dp/B078Y98RG8/
[3]: https://en.wikipedia.org/wiki/Pareto_principle
[4]: https://opensource.com/sites/default/files/uploads/1_finaldevopspipeline.jpg (Complete DevOps pipeline)
[5]: https://github.com/jenkinsci/jenkins
[6]: https://github.com/travis-ci/travis-ci
[7]: http://cruisecontrol.sourceforge.net
[8]: https://github.com/buildbot/buildbot
[9]: https://gump.apache.org
[10]: http://cabie.tigris.org
[11]: https://opensource.com/sites/default/files/uploads/2_runningjenkins.jpg (CI/CD tool)
[12]: https://git-scm.com
[13]: https://subversion.apache.org
[14]: http://savannah.nongnu.org/projects/cvs
[15]: http://www.vestasys.org
[16]: https://www.mercurial-scm.org
[17]: https://opensource.com/sites/default/files/uploads/3_sourcecontrolmanagement.jpg (Source control management)
[18]: https://maven.apache.org
[19]: https://ant.apache.org
[20]: https://gradle.org/
[21]: https://bazel.build
[22]: https://www.gnu.org/software/make
[23]: https://gruntjs.com
[24]: https://gulpjs.com
[25]: http://buildr.apache.org
[26]: https://github.com/ruby/rake
[27]: http://www.a-a-p.org
[28]: https://www.scons.org
[29]: https://www.yoctoproject.org/software-item/bitbake
[30]: https://github.com/cake-build/cake
[31]: https://common-lisp.net/project/asdf
[32]: https://www.haskell.org/cabal
[33]: https://opensource.com/sites/default/files/uploads/4_buildtools.jpg (Build automation tool)
[34]: https://tomcat.apache.org
[35]: https://www.eclipse.org/jetty/
[36]: http://wildfly.org
[37]: https://javaee.github.io/glassfish
[38]: https://www.djangoproject.com/
[39]: http://www.tornadoweb.org/en/stable
[40]: https://gunicorn.org
[41]: https://github.com/cdent/paste
[42]: https://rubyonrails.org
[43]: https://nodejs.org/en
[44]: https://opensource.com/sites/default/files/uploads/5_applicationserver.jpg (Web application server)
[45]: https://junit.org/junit5
[46]: http://easymock.org
[47]: https://site.mockito.org
[48]: https://github.com/powermock/powermock
[49]: https://docs.pytest.org
[50]: https://hypothesis.works
[51]: https://github.com/tox-dev/tox
[52]: http://cobertura.github.io/cobertura
[53]: http://codecover.org/
[54]: https://github.com/nedbat/coveragepy
[55]: http://emma.sourceforge.net
[56]: https://github.com/jacoco/jacoco
[57]: https://jasmine.github.io
[58]: https://github.com/karma-runner/karma
[59]: https://github.com/mochajs/mocha
[60]: https://jestjs.io
[61]: /resources/what-are-linux-containers
[62]: https://www.docker.com
[63]: https://kubernetes.io
[64]: http://Opensource.com
[65]: https://opensource.com/resources/what-docker
[66]: https://opensource.com/business/15/1/introduction-docker
[67]: https://opensource.com/resources/what-is-kubernetes
[68]: https://opensource.com/article/17/11/kubernetes-lightning-talk
[69]: https://www.ansible.com
[70]: https://www.saltstack.com
[71]: https://www.chef.io
[72]: https://puppet.com
[73]: https://opensource.com/article/19/2/quickstart-guide-ansible
[74]: https://opensource.com/article/19/1/automating-deployment-strategies-ansible
[75]: https://opensource.com/article/18/12/configuration-management-tools
[76]: https://opensource.com/alternatives/slack
[77]: https://opensource.com/resources/devops
[78]: https://opensource.com/article/19/2/master-devops-engineer
[79]: https://opensource.com/article/18/11/how-non-engineer-got-devops
[80]: https://opensource.com/article/19/1/getting-started-predictive-analytics-devops
[81]: https://opensource.com/article/18/10/what-agile
[82]: https://opensource.com/article/19/2/steps-agile-developer

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beyond SD-WAN: VMwares vision for the network edge)
[#]: via: (https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Beyond SD-WAN: VMware's vision for the network edge
======
Under the ownership of VMware, the VeloCloud Business Unit is greatly expanding its vision of what an SD-WAN should be. VMware calls the strategy “the network edge.”
![istock][1]
VeloCloud has been a Business Unit within VMware since being acquired in December 2017. The two companies have had sufficient time to integrate their operations and fit their technologies together to build a cohesive offering. In January, Neal Weinberg provided [an overview of where VMware is headed with its reinvention][2]. Now let's look at it from the VeloCloud [SD-WAN][3] perspective.
I recently talked to Sanjay Uppal, vice president and general manager of the VeloCloud Business Unit. He shared with me where VeloCloud is heading, adding that it's all possible because of the complementary products that VMware brings to VeloCloud's table.
**[ Read also:[Edge computing is the place to address a host of IoT security concerns][4] ]**
It all starts with this architecture chart that shows the VMware vision for the network edge.
![][5]
The left side of the chart shows that in the branch office, you can put an edge device that can be either a VeloCloud hardware appliance or VeloCloud software running on some third-party hardware. The right side of the chart shows where the workloads are — the traditional data center, the public cloud, and SaaS applications. You can put one or more edge devices there, and then you have the classic hub-and-spoke model with the VeloCloud SD-WAN running on top.
In the middle of the diagram are the gateways, which are a differentiator and a unique benefit of VeloCloud.
“If you have applications in the public cloud or SaaS, then you can use our gateways instead of spinning up individual edges at each of the applications,” Uppal said. “Those gateways really perform a multi-tenanted edge function. So, instead of locating an individual edge at every termination point at the cloud, you basically go from an edge in the branch to a gateway in the cloud, and then from that gateway you go to your final destination. We've engineered it so that the gateways are close to where the end applications are — typically within five milliseconds.”
Going back to the architecture diagram, there are two clouds in the middle of the chart. The left-hand cloud is the over-the-top (OTT) service run by VeloCloud. It uses 800 gateways deployed over 30 points of presence (PoPs) around the world. The right-hand cloud is the telco cloud, which deploys gateways as network-based services. VeloCloud has several telco partners that take the same VeloCloud gateways and deploy them in their cloud.
“Between a telco service, a cloud service, and hub and spoke on premise, we essentially have covered all the bases in terms of how enterprises would want to consume software-defined WAN. This flexibility is part of the reason why we've been successful in this market,” Uppal said.
Where is VeloCloud going with this strategy? Again, looking at the architecture chart, the “vision” pieces are labeled 1 through 5. Let's look at each of those areas.
### Edge compute
Starting with number 1 on the left-hand side of the diagram, there is the expansion from the edge itself going deeper into the branch by crossing over a LAN or a Wi-Fi boundary to get to where the individual users and IoT “things” are. This approach uses the same VeloCloud platform to spin up [compute at the edge][6], which can be either a container or a virtual machine (VM).
“Of course, VMware is very strong in compute in the data center. Our CEO recently articulated the VMware edge story, which is compute edge and device edge. When you combine it with the network edge, which is VeloCloud, then you have a full edge solution,” Uppal explained. “So, this first piece that you see is our foray into getting deeper into the branch all the way up to the individual users and things and combining compute functions on to the VeloCloud solution. There's been a lot of talk about edge compute and we do know that the pendulum is swinging back, but one of the major challenges is how to manage it all. VMware has strong technology in the data center space that we are bringing to bear out there at the edge.”
### 5G underlay intelligence
The next piece, number 2 on the diagram, is [5G][7]. At the Mobile World Congress, VMware and AT&T announced they are bringing out SD-WAN running on 5G. The idea here is that 5G should give you a low-latency connection and you get on-demand control, so you can tell 5G on the fly that you want this type of connection. Once that is done, the right network slices would be put in place and then you can get a connection according to the specifications that you asked for.
“We as VeloCloud would measure the underlay continuously. It's like a speed test on steroids. We would measure bandwidth, packet loss, jitter and latency continuously with low overhead because we piggyback on real user traffic. And then on the basis of that measurement, we would steer the traffic one way or another,” Uppal said. “For example, your real-time voice is important, so let's pick the best performing network at that instant of time, which might change in the next instant, so that's why we have to make that decision on a per-packet basis.”
Uppal continued, “What 5G allows us to do is to look at that underlay as not just being one underlay, but it could be several different underlays, and it's programmable so you could ask it for a type of underlay. That is actually pretty revolutionary — that we would run an overlay with the intelligence of SD-WAN counting on the underlay intelligence of 5G.
“We are working pretty closely with our partner AT&T in this space. We are talking about the business aspect of 5G being used as a transport mechanism for enterprise data, rather than consumer phones having 5G on them. This is available from AT&T today in a handful of cities. So as 5G becomes more ubiquitous, you'll begin to see it deployed more and more. Then we will do an Ethernet or Wi-Fi handoff to the hotspot, and from then on, we'll jump onto the 5G network for the SD-WAN. Then the next phase of that will be 5G natively on our devices, which is what we are working on today.”
### Gateway federation
The third part of the vision is gateway federation, some of which is available today. The left-hand cloud in the diagram, which is the OTT service, should be able to interoperate gateway to gateway with the cloud on the right-hand side, which is the network-based service. For example, if you have a telco cloud of gateways but those gateways don't reach out into areas where the telco doesn't have a presence, then you can reuse VeloCloud gateways that are sitting in other locations. A gateway would federate with another gateway, so it would extend the telco's network beyond the facilities that they own. That's the first step of gateway federation, which is available from VeloCloud today.
Uppal said the next step is telco-to-telco federation. “There's a lot of interest from folks in the industry on how to get that federation done. We're working with the Metro Ethernet Forum (MEF) on that,” he said.
### SD-WAN as a platform
The next piece of the vision is SD-WAN as a platform. VeloCloud already incorporates security services into its SD-WAN platform in the form of [virtual network functions][8] (VNFs) from Palo Alto, Check Point Software, and other partners. Deploying a service as a VNF eliminates having separate hardware on the network. Now the company is starting to bring more services onto its platform.
“Analytics is the area we are bringing in next,” Uppal said. “We partnered with SevOne and Plixer so that they can take analytics that we are providing, correlate them with other analytics that they have and then come up with inferences on whether things worked correctly or not, or to check for anomalous behavior.”
Two additional areas that VeloCloud is working on are unified communications as a service (UCaaS) and universal customer premises equipment (uCPE).
“We announced that we are working with RingCentral in the UCaaS space, and with ADVA and Telco Systems for uCPE. We have our own uCPE offering today but with a limited number of VNFs, so ADVA and Telco Systems will help us expand those capabilities,” Uppal explained. “With SD-WAN becoming a platform for on-premise deployments, you can virtualize functions and manage them from the same place, whether they're VNF-type of functions or compute-type of functions. This is an important direction that we are moving towards.”
### Hybrid and multi-cloud integration
The final piece of the strategy is hybrid and multi-cloud integration. Since its inception, VeloCloud has had gateways to facilitate access to specific applications running in the cloud. These gateways provide a secure end-to-end connection and an ROI advantage.
Recognizing that workloads have expanded to multi-cloud and hybrid cloud, VeloCloud is broadening this approach utilizing VMware's relationships with Microsoft, Amazon, and Google and offerings on Azure, Amazon Web Services, and Google Cloud, respectively. From a networking standpoint, you can get the same consistency of access using VeloCloud because you can decide from the gateway whichever direction you want to go. That direction will be chosen — and services added — based on your business policy.
“We think this is the next hurdle in terms of deployment of SD-WAN, and once that is solved, people are going to deploy a lot more for hybrid and multi-cloud,” said Uppal. “We want to be the first ones out of the gate to get that done.”
Uppal further said, “These five areas are where we see our SD-WAN headed, and we call this a network edge because it's beyond just the traditional SD-WAN functions. It includes edge computing, SD-WAN becoming a broader platform, integrating with hybrid multi-cloud — these are all aspects that go way beyond just the narrower definition of SD-WAN.”
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][9]
* [Edge computing best practices][10]
* [How edge computing can help secure the IoT][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all
作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/istock-864405678-100747484-large.jpg
[2]: https://www.networkworld.com/article/3340259/vmware-s-transformation-takes-hold.html
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3307859/edge-computing-helps-a-lot-of-iot-security-problems-by-getting-it-involved.html
[5]: https://images.idgesg.net/images/article/2019/04/vmware-vision-for-network-edge-100793086-large.jpg
[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[7]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[8]: https://www.networkworld.com/article/3206709/what-s-the-difference-between-sdn-and-nfv.html
[9]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[10]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[11]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Python's cryptography library)
[#]: via: (https://opensource.com/article/19/4/cryptography-python)
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
Getting started with Python's cryptography library
======
Encrypt your data and keep it safe from attackers.
![lock on world map][1]
The first rule of cryptography club is: never _invent_ a cryptography system yourself. The second rule of cryptography club is: never _implement_ a cryptography system yourself: many real-world holes are found in the _implementation_ phase of a cryptosystem as well as in the design.
One useful library for cryptographic primitives in Python is called simply [**cryptography**][2]. It has both "secure" primitives as well as a "hazmat" layer. The "hazmat" layer requires care and knowledge of cryptography and it is easy to implement security holes using it. We will not cover anything in the "hazmat" layer in this introductory article!
The most useful high-level secure primitive in **cryptography** is the Fernet implementation. Fernet is a standard for encrypting buffers in a way that follows best-practices cryptography. It is not suitable for very big files—anything in the gigabyte range and above—since it requires you to load the whole buffer that you want to encrypt or decrypt into memory at once.
Fernet supports _symmetric_, or _secret key_, cryptography: the same key is used for encryption and decryption, and therefore must be kept safe.
Generating a key is easy:
```
>>> from cryptography import fernet
>>> k = fernet.Fernet.generate_key()
>>> type(k)
<class 'bytes'>
```
Those bytes can be written to a file with appropriate permissions, ideally on a secure machine.
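For example, using only the standard library (the file name is arbitrary):

```
>>> import os
>>> fd = os.open("fernet.key", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
>>> with os.fdopen(fd, "wb") as f:
...     f.write(k)
...
```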
Once you have key material, encrypting is easy as well:
```
>>> frn = fernet.Fernet(k)
>>> encrypted = frn.encrypt(b"x marks the spot")
>>> encrypted[:10]
b'gAAAAABb1'
```
You will get slightly different values if you encrypt on your machine, not only because (I hope) you generated a different key from mine, but because Fernet concatenates the value to be encrypted with some randomly generated buffer. This is one of the "best practices" I alluded to earlier: it will prevent an adversary from being able to tell which encrypted values are identical, which is sometimes an important part of an attack.
Decryption is equally simple:
```
>>> frn = fernet.Fernet(k)
>>> frn.decrypt(encrypted)
b'x marks the spot'
```
Note that this only encrypts and decrypts _byte strings_. In order to encrypt and decrypt _text strings_, they will need to be encoded and decoded, usually with [UTF-8][3].
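For example, continuing with the `frn` object from above:

```
>>> token = frn.encrypt("x marks the spot".encode("utf-8"))
>>> frn.decrypt(token).decode("utf-8")
'x marks the spot'
```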
One of the most interesting advances in cryptography in the mid-20th century was _public key_ cryptography. It allows the encryption key to be published while the _decryption key_ is kept secret. It can, for example, be used to store API keys to be used by a server: the server is the only thing with access to the decryption key, but anyone can add to the store by using the public encryption key.
While **cryptography** does not have any public key cryptographic _secure_ primitives, the [**PyNaCl**][4] library does. PyNaCl wraps the [**NaCl**][5] encryption system invented by Daniel J. Bernstein and offers some nice ways to use it.
NaCl always _encrypts_ and _signs_ or _decrypts_ and _verifies signatures_ simultaneously. This is a way to prevent malleability-based attacks, where an adversary modifies the encrypted value.
Encryption is done with a public key, while signing is done with a secret key:
```
>>> from nacl.public import PrivateKey, PublicKey, Box
>>> source = PrivateKey.generate()
>>> with open("target.pubkey", "rb") as fpin:
... target_public_key = PublicKey(fpin.read())
>>> enc_box = Box(source, target_public_key)
>>> result = enc_box.encrypt(b"x marks the spot")
>>> result[:4]
b'\xe2\x1c0\xa4'
```
Decryption reverses the roles: it needs the private key for decryption and the public key to verify the signature:
```
>>> from nacl.public import PrivateKey, PublicKey, Box
>>> with open("source.pubkey", "rb") as fpin:
... source_public_key = PublicKey(fpin.read())
>>> with open("target.private_key", "rb") as fpin:
... target = PrivateKey(fpin.read())
>>> dec_box = Box(target, source_public_key)
>>> dec_box.decrypt(result)
b'x marks the spot'
```
The [**PocketProtector**][6] library builds on top of PyNaCl and contains a complete secrets management solution.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/cryptography-python
作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq (lock on world map)
[2]: https://cryptography.io/en/latest/
[3]: https://en.wikipedia.org/wiki/UTF-8
[4]: https://pynacl.readthedocs.io/en/stable/
[5]: https://nacl.cr.yp.to/
[6]: https://github.com/SimpleLegal/pocket_protector/blob/master/USER_GUIDE.md

View File

@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to quickly deploy, run Linux applications as unikernels)
[#]: via: (https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to quickly deploy, run Linux applications as unikernels
======
Unikernels are a smaller, faster, and more secure option for deploying applications on cloud infrastructure. With NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding.
![Marcho Verch \(CC BY 2.0\)][1]
Building and deploying lightweight apps is becoming an easier and more reliable process with the emergence of unikernels. While limited in functionality, unikernels offer many advantages in terms of speed and security.
### What are unikernels?
A unikernel is a very specialized, single-address-space machine image, similar to the kind of cloud application that has come to dominate so much of the internet, but considerably smaller and single-purpose. Unikernels are lightweight, providing only the resources needed. They load very quickly and are considerably more secure -- having a very limited attack surface. Any drivers, I/O routines and support libraries that are required are included in the single executable. The resultant virtual image can then be booted and run without anything else being present. And they will often run 10 to 20 times faster than a container.
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
Would-be attackers cannot drop into a shell and try to gain control because there is no shell. They can't try to grab the system's /etc/passwd or /etc/shadow files because these files don't exist. Creating a unikernel is much like turning your application into its own OS. With a unikernel, the application and the OS become a single entity. You omit what you don't need, thereby removing vulnerabilities and improving performance many times over.
In short, unikernels:
* Provide improved security (e.g., making shell code exploits impossible)
* Have much smaller footprints than standard cloud apps
* Are highly optimized
* Boot extremely quickly
### Are there any downsides to unikernels?
The only serious downside to unikernels is that you have to build them. For many developers, this has been a giant step. Trimming applications down to just what is needed and then producing a tight, smoothly running application can be complex because of the application's low-level nature. In the past, you pretty much had to be a systems developer or a low-level programmer to generate them.
### How is this changing?
Just recently (March 24, 2019) [NanoVMs][3] announced a tool that loads any Linux application as a unikernel. Using NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding. The application will also run faster, more safely and with less cost and overhead.
### What is NanoVMs OPS?
NanoVMs is a unikernel tool for developers. It allows you to run all sorts of enterprise-class software yet still have extremely tight control over how it works.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**
Other benefits associated with OPS include:
* Developers need no prior experience or knowledge to build unikernels.
* The tool can be used to build and run unikernels locally on a laptop.
* No accounts need to be created and only a single download and one command is required to execute OPS.
An intro to NanoVMs is available on [NanoVMs on YouTube][5]. You can also check out the company's [LinkedIn page][6] and read about NanoVMs security [here][7].
Here is some information on how to [get started][8].
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/corn-kernels-100792925-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://nanovms.com/
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.youtube.com/watch?v=VHWDGhuxHPM
[6]: https://www.linkedin.com/company/nanovms/
[7]: https://nanovms.com/security
[8]: https://nanovms.gitbook.io/ops/getting_started
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (InitRAMFS, Dracut, and the Dracut Emergency Shell)
[#]: via: (https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
InitRAMFS, Dracut, and the Dracut Emergency Shell
======
![][1]
The [Linux startup process][2] goes through several stages before reaching the final [graphical or multi-user target][3]. The initramfs stage occurs just before the root file system is mounted. Dracut is a tool that is used to manage the initramfs. The dracut emergency shell is an interactive mode that can be initiated while the initramfs is loaded.
This article will show how to use the dracut command to modify the initramfs. Some basic troubleshooting commands that can be run from the dracut emergency shell will also be demonstrated.
### The InitRAMFS
[Initramfs][4] stands for Initial Random-Access Memory File System. On modern Linux systems, it is typically stored in a file under the /boot directory. The kernel version for which it was built will be included in the file name. A new initramfs is generated every time a new kernel is installed.
![A Linux Boot Directory][5]
By default, Fedora keeps the previous two versions of the kernel and its associated initramfs. This default can be changed by modifying the value of the _installonly_limit_ setting in the /etc/dnf/dnf.conf file.
You can use the _lsinitrd_ command to list the contents of your initramfs archive:
![The LsInitRD Command][6]
The above screenshot shows that my initramfs archive contains the _nouveau_ GPU driver. The _modinfo_ command tells me that the nouveau driver supports several models of NVIDIA video cards. The _lspci_ command shows that there is an NVIDIA GeForce video card in my computer's PCI slot. There are also several basic Unix commands included in the archive such as _cat_ and _cp_.
By default, the initramfs archive only includes the drivers that are needed for your specific computer. This allows the archive to be smaller and decreases the time that it takes for your computer to boot.
### The Dracut Command
The _dracut_ command can be used to modify the contents of your initramfs. For example, if you are going to move your hard drive to a new computer, you might want to temporarily include all drivers in the initramfs to be sure that the operating system can load on the new computer. To do so, you would run the following command:
```
# dracut --force --no-hostonly
```
The _force_ parameter tells dracut that it is OK to overwrite the existing initramfs archive. The _no-hostonly_ parameter overrides the default behavior of including only drivers that are germane to the currently-running computer and causes dracut to instead include all drivers in the initramfs.
By default dracut operates on the initramfs for the currently-running kernel. You can use the _uname_ command to display which version of the Linux kernel you are currently running:
```
$ uname -r
5.0.5-200.fc29.x86_64
```
Once you have your hard drive installed and running in your new computer, you can re-run the dracut command to regenerate the initramfs with only the drivers that are needed for the new computer:
```
# dracut --force
```
There are also parameters to add arbitrary drivers, dracut modules, and files to the initramfs archive. You can also create configuration files for dracut and save them under the /etc/dracut.conf.d directory so that your customizations will be automatically applied to all new initramfs archives that are generated when new kernels are installed. As always, check the man page for the details that are specific to the version of dracut you have installed on your computer:
```
$ man dracut
```
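As an illustration of such a drop-in file, something along these lines (the file name is made up, and note that dracut wants the spaces inside the quotes) would bake an extra driver into every future initramfs:

```
# /etc/dracut.conf.d/extra-drivers.conf (illustrative name)
# dracut requires the leading and trailing spaces inside the quotes
add_drivers+=" nouveau "
```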
### The Dracut Emergency Shell
![The Dracut Emergency Shell][7]
Sometimes something goes wrong during the initramfs stage of your computer's boot process. When this happens, you will see “Entering emergency mode” printed to the screen followed by a shell prompt. This gives you a chance to try and fix things up manually and continue the boot process.
As a somewhat contrived example, let's suppose that I accidentally deleted an important kernel parameter in my boot loader configuration:
```
# sed -i 's/ rd.lvm.lv=fedora\/root / /' /boot/grub2/grub.cfg
```
The next time I reboot my computer, it will seem to hang for several minutes while it is trying to find the root partition and eventually give up and drop to an emergency shell.
From the emergency shell, I can enter _journalctl_ and then use the **Space** key to page down through the startup logs. Near the end of the log I see a warning that reads “/dev/mapper/fedora-root does not exist”. I can then use the _ls_ command to find out what does exist:
```
# ls /dev/mapper
control fedora-swap
```
Hmm, the fedora-root LVM volume appears to be missing. Let's see what I can find with the lvm command:
```
# lvm lvscan
ACTIVE '/dev/fedora/swap' [3.85 GiB] inherit
inactive '/dev/fedora/home' [22.85 GiB] inherit
inactive '/dev/fedora/root' [46.80 GiB] inherit
```
Aha! There's my root partition. It's just inactive. All I need to do is activate it and exit the emergency shell to continue the boot process:
```
# lvm lvchange -a y fedora/root
# exit
```
![The Fedora Login Screen][8]
The above example only demonstrates the basic concept. You can check the [troubleshooting section][9] of the [dracut guide][10] for a few more examples.
It is possible to access the dracut emergency shell manually by adding the _rd.break_ parameter to your kernel command line. This can be useful if you need to access your files before any system services have been started.
Check the _dracut.kernel_ man page for details about what kernel options your version of dracut supports:
```
$ man dracut.kernel
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-816x345.png
[2]: https://en.wikipedia.org/wiki/Linux_startup_process
[3]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/sect-managing_services_with_systemd-targets
[4]: https://en.wikipedia.org/wiki/Initial_ramdisk
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/boot.jpg
[6]: https://fedoramagazine.org/wp-content/uploads/2019/04/lsinitrd.jpg
[7]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-shell.jpg
[8]: https://fedoramagazine.org/wp-content/uploads/2019/04/fedora-login-1024x768.jpg
[9]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html#_troubleshooting
[10]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 1)
[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1)
[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
Linux Server Hardening Using Idempotency with Ansible: Part 1
======
![][1]
[Creative Commons Zero][2]
I think it's safe to say that the need to frequently update the packages on our machines has been firmly drilled into us. To ensure the use of the latest features and also keep security bugs to a minimum, skilled engineers and even desktop users are well-versed in the need to update their software.
Hardware, software and SaaS (Software as a Service) vendors have also firmly embedded the word “firewall” into our vocabulary for both domestic and industrial uses to protect our computers. In my experience, however, even within potentially more sensitive commercial environments, few engineers actively tweak the operating system (OS) they're working on, to any great extent at least, to bolster security.
Standard fare on Linux systems, for example, might mean looking at configuring a larger swap file to cope with your hungry application's demands. Or, maybe adding a separate volume to your server for extra disk space, specifying a more performant CPU at launch time, installing a few of your favorite DevOps tools, or chucking a couple of certificates onto the filesystem for each new server you build. This isn't quite the same thing.
### Improve your Security Posture
What I am specifically referring to is a mixture of compliance and security, I suppose. In short, there's a surprisingly large number of areas in which a default OS can improve its security posture. We can agree that tweaking certain aspects of an OS is a little riskier than tweaking others. Consider your network stack, for example. Imagine that, completely out of the blue, your server's networking suddenly does something unexpected and causes you troubleshooting headaches or even some downtime. This might happen because a new application or updated package suddenly expects routing to behave in a less-common way or needs a specific protocol enabled to function correctly.
However, there are many changes that you can make to your servers without suffering any sleepless nights. The version and flavor of an OS help determine which changes, and to what extent, you might comfortably make. Most importantly, though, what's good for the goose is rarely good for the gander. In other words, every single server estate has different requirements, both broad and subtle, which makes each use case unique. And don't forget that a database server also has very different needs from a web server, so you can have a number of differing needs even within one small cluster of servers.
Over the last few years I've introduced these hardening and compliance tweaks more than a handful of times across varying server estates in my DevSecOps roles. The OSs have included Debian, Red Hat Enterprise Linux (RHEL) and their respective derivatives (including what I suspect will be the increasingly popular RHEL derivative, Amazon Linux). There have been times when, admittedly counting a multitude of relatively tiny tweaks, the number of changes to a standard server build ran into the hundreds. It all depended on the time permitted for the work, the appetite for any risks and the generic or specific nature of the OS tweaks.
In this article, we'll discuss the theory around something called idempotency which, in hand with an automation tool such as Ansible, can provide ongoing improvements to your server estate's security posture. For good measure we'll also look at a number of Ansible playbook examples and additionally refer to online resources so that you can introduce idempotency to a server estate near you.
### Say What?
In simple terms the word “idempotent” just means returning something back to how it was prior to a change. It can also mean that lots of things you wanted to be the same, for consistency, are exactly the same, too.
Picture that in action for a moment on a server estate; we'll use AWS (Amazon Web Services) as our example. You create a new server image (Amazon Machine Images == AMIs) precisely how you want it, with compliance and hardening introduced, custom packages, the removal of unwanted packages, SSH keys, user accounts, etc., and then spin up twenty servers using that AMI.
You know for certain that all the servers, at least at the time that they are launched, are absolutely identical. Trust me when I say that this is a “good thing” ™. The lack of what's known as “config drift” means that if one package on a server needs updating for security reasons then all the servers need that package updated too. Or if there's a typo in a config file that's breaking an application then it affects all servers equally. There's less administrative overhead, less security risk and greater levels of predictability in terms of achieving better uptime.
What about config drift from a security perspective? As you've guessed, it's definitely not welcome. That's because engineers making manual changes to a “base OS build” can only lead to heartache and stress. The predictability of how a system is working suffers greatly as a result, and servers running unique config become less reliable. These server systems are known as “snowflakes” as they're unique but far less beautiful than actual snow.
Equally an attacker might have managed to breach one aspect, component or service on a server but not all of its facets. By rewriting our base config again and again we're able to, with 100% certainty (if it's set up correctly), predict exactly what a server will look like and therefore how it will perform. Using various tools you can also trigger alarms if changes are detected to request that a pair of human eyes have a look to see if it's a serious issue and then adjust the base config if needed.
To make our machines idempotent we might overwrite our config changes every 20 or 30 minutes, for example. When it comes to running servers, that, in essence, is what is meant by idempotency.
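As a minimal Python sketch of the idea, assuming nothing beyond the standard library: a function that converges a file toward a desired state, so running it once or fifty times leaves the system in exactly the same state:

```
import pathlib

def ensure_line(path, line):
    """Converge a file toward containing `line`; re-runs change nothing."""
    p = pathlib.Path(path)
    lines = p.read_text().splitlines() if p.exists() else []
    if line not in lines:  # only act when the observed state is wrong
        lines.append(line)
        p.write_text("\n".join(lines) + "\n")

# e.g., enforce a hardening setting (file name and setting are illustrative)
ensure_line("sshd_config.test", "PermitRootLogin no")
```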
### Central Station
My mechanism of choice for repeatedly writing config across a large number of servers is running Ansible playbooks. It's relatively easy to implement and removes the all-too-painful additional logic required when using shell scripts. Of the popular configuration management tools, I've seen Puppet used successfully on a large government estate in an idempotent manner, but I prefer Ansible due to its more logical syntax (to my mind at least) and its readily available documentation.
Before we look at some simple Ansible examples of hardening an OS with idempotency in mind we should explore how to trigger our Ansible playbooks.
This is a larger area for debate than you might first imagine. Say, for example, you have a nicely segmented server estate, with production servers carefully locked away from development servers, sitting behind a production-grade firewall. Consider the other servers on the estate, belonging to staging (pre-production) or other development environments, intentionally having different access permissions for security reasons.
If you're going to run a centralized server that has superuser permissions (which are required to make privileged changes to your core system files) then that server will need to have high-level access permissions potentially across your entire server estate. It must therefore be guarded very closely.
You will also want to test your playbooks against development environments (in plural) to test their efficacy, which means you'll probably need two all-powerful centralised Ansible servers, one for production and one for the multiple development environments.
The actual approach of how to handle the other logistical issues is up for debate, and I've heard it discussed a few times. Bear in mind that Ansible runs using plain, old SSH keys (a feature that other configuration management tools have started to copy over time), but ideally you want a mechanism for keeping non-privileged keys on your centralised servers so you're not logging in as the “root” user across the estate every twenty or thirty minutes.
From a network perspective, I like the idea of having firewalling in place to enforce one-way traffic only into the environment that you're affecting. This protects your centralised host so that a compromised server can't easily attack that main Ansible host and, as a result, gain access to precious SSH keys in order to damage the whole estate.
Speaking of which, are servers actually needed for a task like this? What about using AWS Lambda (<https://aws.amazon.com/lambda>) to execute your playbooks? A serverless approach still needs to be secured carefully but unquestionably helps to limit the attack surface and also potentially reduces administrative responsibilities.
I suspect how this all-powerful server is architected and deployed will always be contentious, and there will never be a one-size-fits-all approach; instead, a unique, bespoke solution will be required for every server estate.
### How Now, Brown Cow
It's important to think about how often you run your Ansible playbooks and also how to prepare for your first execution of a playbook. Let's get the frequency of execution out of the way first, as it's the easiest to change in the future.
My preference would be three times an hour (every twenty minutes) or every thirty minutes. If we include enough detail in our configuration then our playbooks might prevent an attacker gaining a foothold on a system, as the original configuration overwrites any altered config. Of the two, every twenty minutes seems more appropriate to my mind.
Again, this is an aspect you need to think about. You might be dumping small config databases locally onto a filesystem every sixty minutes, for example, and that scheduled job might add an extra little bit of undesirable load to your server, meaning you have to schedule around it.
Next time, we'll take a look at some specific changes that can be made to various systems.
_Chris Binnie's latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website:[https://www.devsecops.cc][3]_
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1
作者:[Chris Binnie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/chrisbinnie
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/geometric-1732847_1280.jpg?itok=YRux0Tua
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.devsecops.cc/

View File

@ -0,0 +1,129 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Performance-Based Routing (PBR) The gold rush for SD-WAN)
[#]: via: (https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Performance-Based Routing (PBR): The gold rush for SD-WAN
======
The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off.
![Getty Images][1]
BGP (Border Gateway Protocol) is considered the glue of the internet. Looking farther ahead, however, there's a question that still remains unanswered: will BGP ever have the ability to route on the best path rather than just the shortest path?
There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as sending out pings to monitor the network and then modifying BGP attributes, such as AS-path prepending, to make BGP do performance-based routing (PBR). However, this falls short in a number of ways.
The problem with BGP is that it is neither capacity nor performance aware, and therefore its decisions can sink an application's performance. The attributes that BGP relies upon for path selection, such as AS-path length and multi-exit discriminators (MEDs), do not always correlate with the network's performance.
[The time of 5G is almost here][2]
Also, BGP changes paths only in reaction to changes in policy or in the set of available routes, and it traditionally permits the use of only one path to reach a destination. Hence, traditional routing falls short, as it looks only for the shortest path, which may not be the best path.
### Blackout and brownouts
As a matter of fact, we live in a world where we have more brownouts than blackouts. However, BGP was originally designed to detect only blackouts, i.e. the events wherein a link fails and the traffic must be rerouted to another link. In a world where brownouts can last from 10 milliseconds to 10 seconds, you ought to be able to detect the failure in sub-second time and reroute to a better path.
This triggered my curiosity to dig out some of the real yet significant reasons why [SD-WAN][3] was introduced. We all know it saves cost and does many other things, but were the inefficiencies in routing one of the main reasons? I decided to sit down with [Sorell][4] to discuss the need for performance-based routing (PBR).
### SD-WAN is taking off
The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. SD-WAN vendors are adding proprietary mechanisms to their routing in order to select the best path, not the shortest path.
Originally, we didn't have real-time traffic such as voice and video, which is latency and jitter sensitive. Besides, we also assumed that all links were equal. But in today's world we see more of a mix and match, for example 100Gig links alongside slower long-term evolution (LTE) links. The assumption that the shortest path is the best no longer holds true.
### Introduction of new protocols
To overcome the drawbacks of traditional routing, we have seen the onset of new protocols, such as [IPv6 segment routing][5] and named data networking, along with SD-WAN vendor-specific mechanisms that improve routing.
For optimum routing, effective packet steering is a must, and SD-WAN overlays provide this by utilizing encapsulation, which could be a combination of GRE, UDP, Ethernet, MPLS, [VxLAN][6] and IPsec. IPv6 segment routing inserts a stack of segments (an IPv6 address list) into every packet, and named data networking can be distributed with routing protocols.
Another critical requirement is hop-by-hop payload encryption. You should be able to encrypt payloads for sessions that do not have transport-layer encryption. Re-encrypting data can be expensive; it fragments the packets and further complicates the network. Therefore, avoiding double encryption is also a must.
The SD-WAN overlays furnish an all-or-nothing approach with [IPsec][7]. IPv6 segment routing requires application-layer security, which [IPsec][8] provides, while named data networking can offer security natively since it's object-based.
### The various SD-WAN solutions
The above are some of the new protocols available and some of the technologies that SD-WAN vendors offer. Different vendors will have different mechanisms to implement PBR, and they give it different names, such as "application-aware routing."
SD-WAN vendors are using many factors to influence the routing decision. They are not just making routing decisions on the number of hops or links the way traditional routing does by default. They monitor how the link is performing, rather than just evaluating whether the link is up or down.
They are using a variety of mechanisms to perform PBR. For example, some are adding timestamps to every packet, whereas others are adding sequence numbers to the packets, over and above what you would get in a transmission control protocol (TCP) sequence number.
Another option is the use of domain name system (DNS) and [transport layer security][9] (TLS) certificates to automatically identify the application and then, based on the identity of the application, assign it to a default class. Others use timestamps carried in a proprietary label. This is similar to adding a sequence number to the packets, except that the sequence number sits at Layer 3 instead of Layer 4.
I can tie all my applications and sequence numbers and then use the network time protocol (NTP) to identify latency, jitter and dropped packets. Running NTP on both ends enables the identification of end-to-end vs hop-by-hop performance.
Some vendors use the internet control message protocol (ICMP) or bidirectional forwarding detection (BFD). Hence, instead of adding a label to every packet, which can introduce overhead, they sample the path every quarter or half a second.
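As a crude illustration of that sampling idea, you can probe a path from a Linux host with plain ICMP at half-second intervals and read the latency and jitter off the summary line; the far-end address here is hypothetical:

```
# Send 20 probes at half-second intervals to a (hypothetical) far-end address
ping -i 0.5 -c 20 203.0.113.1
# The closing "rtt min/avg/max/mdev" line summarises latency and jitter;
# comparing it across two candidate paths is the essence of PBR.
```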
Realistically, it is yet to be determined which technology is best to use, but what is consistent is that these mechanisms examine elements such as the latency, dropped packets and jitter on the links. Essentially, different vendors use different technologies to choose the best path, but the end result is still the same.
With these approaches, one can, for example, identify a WebEx session, and since a WebEx session carries voice and video, mark it as a high-priority session. All packets associated with the WebEx session get placed in a high-value queue.
The rules are set to say, "I want my WebEx session to go over the multiprotocol label switching (MPLS) link instead of a slow LTE link." Hence, if your MPLS link faces latency or jitter problems, the flow is automatically rerouted to a better alternate path.
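On a plain Linux router, a rule of that flavour can be sketched with policy routing. This is only an illustration of the concept, with hypothetical interface names, addresses and port, not any vendor's actual implementation:

```
# Send packets marked 0x1 via a dedicated table pointing at the MPLS-facing link
ip route add default via 192.0.2.1 dev mpls0 table 100
ip rule add fwmark 0x1 table 100
# Mark outbound UDP media traffic (e.g. a conferencing session on port 9000)
iptables -t mangle -A OUTPUT -p udp --dport 9000 -j MARK --set-mark 0x1
```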
### Problems with TCP
One critical problem that surfaces today, due to the transmission control protocol (TCP) and adaptive codecs, is called waves. Let's say you have 30 file transfers across a link. To carry out the file transfers, the TCP window size will grow to a point where the link gets maxed out. The router will start to drop packets, followed by a reduced TCP window size. As a result, the bandwidth shrinks; then, while packets are not being dropped, the window size increases again until it hits the threshold and, eventually, the packets start getting dropped again.
This can be a continuous process, happening again and again. With all these waves obstructing efficiency, we need products like wide area network (WAN) optimization to manage multiple TCP flows. Why? Because TCP is only aware of the single flow that it controls; it is not aware of the other flows moving across the network path. Primarily, the TCP window size reflects only one single file transfer.
### Problems with adaptive codecs
An adaptive codec will use upwards of 6 megabytes for the video if the link is clean, but as soon as the link starts to drop packets, the adaptive codec sends more packets for forward error correction. Therefore, it makes the problem even worse before it backs off to change the frame rate and resolution.
An adaptive codec is the opposite of a fixed codec, which always sends out a fixed packet size. Adaptive codecs are the standard used in WebRTC and can vary the jitter buffer size and the frequency of packets based on the network conditions.
Adaptive codecs work better over Internet connections that have higher loss and jitter rates than, for example, more stable links such as MPLS. This is also the reason why real-time voice and video do not use TCP: if a packet gets dropped, there is no point in sending a new one, so the additional headers of TCP do not buy you anything.
QUIC, on the other hand, can take a single flow and run it across multiple network flows. This helps video applications with rebuffering and improves throughput. In addition, it helps boost the response of bandwidth-intensive applications.
### The introduction of new technologies
With the introduction of [edge computing][10], augmented reality (AR), virtual reality (VR), real-time driving applications, [IoT sensors][11] on critical systems and other hypersensitive latency applications, PBR becomes a necessity.
With AR, you want the computing to be accomplished within 5 to 10 milliseconds of the endpoint. In a world of brownouts and path congestion, you need to pick a better path much more quickly. Also, service providers (SPs) are rolling out 5G networks and announcing the use of different routing protocols that serve as PBR. So the future looks bright for PBR.
As voice and video, edge computing and virtual reality gain more presence in the market, PBR will become more popular. Even Facebook and Google are putting PBR inside their internal networks. Over time it will have a role in all networks, specifically at the Internet exchange points, both private and public.
### Internet exchange points
Back in the early 90s, there were only 4 internet exchange points in the US and 9 across the world overall. Now we have more than 3,000, where different providers come together and exchange Internet traffic.
When BGP was first rolled out in the mid-90s, the internet exchange points were located far apart, so the concept of the shortest path held true more than it does today, when the internet is highly distributed.
The internet architecture will change as different service providers move to software-defined networking and update the routing protocols they use. For the foreseeable future, however, the core internet exchanges will still use BGP.
**This article is published as part of the IDG Contributor Network.[Want to Join?][12]**
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/smart-city_iot_digital-transformation_networking_wireless_city-scape_skyline-100777499-large.jpg
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[3]: https://network-insight.net/2017/08/sd-wan-networks-scalpel/
[4]: https://techvisionresearch.com/
[5]: https://network-insight.net/2015/07/segment-routing-introduction/
[6]: https://youtu.be/5XtkCSfRy3c
[7]: https://network-insight.net/2015/01/design-guide-ipsec-fault-tolerance/
[8]: https://network-insight.net/2015/01/ipsec-virtual-private-network-vpn-overview/
[9]: https://network-insight.net/2015/10/back-to-basics-ssl-security/
[10]: https://youtu.be/5mbPiKd_TFc
[11]: https://network-insight.net/2016/11/internet-of-things-iot-networking/
[12]: /contributor-network/signup.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,54 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Linux rookie mistakes)
[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes)
[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
5 Linux rookie mistakes
======
Linux enthusiasts share some of the biggest mistakes they made.
![magnifying glass on computer screen, finding a bug in the code][1]
It's smart to learn new skills throughout your life—it keeps your mind nimble and makes you more competitive in the job market. But some skills are harder to learn than others, especially those where small rookie mistakes can cost you a lot of time and trouble when you're trying to fix them.
Take learning [Linux][2], for example. If you're used to working in a Windows or MacOS graphical interface, moving to Linux, with its unfamiliar commands typed into a terminal, can have a big learning curve. But the rewards are worth it, as the millions and millions of people who have gone before you have proven.
That said, the journey won't be without pitfalls. We asked some Linux enthusiasts to think back to when they first started using Linux and to tell us about the biggest mistakes they made.
"Don't go into [any sort of command line interface (CLI) work] with an expectation that commands work in rational or consistent ways, as that is likely to lead to frustration. This is not due to poor design choices—though it can feel like it when you're banging your head against the proverbial desk—but instead reflects the fact that these systems have evolved and been added onto through generations of software and OS evolution. Go with the flow, write down or memorize the commands you need, and (try not to) get frustrated when [things aren't what you'd expect][3]." _—[Gina Likins][4]_
"As easy as it might be to just copy and paste commands to make the thing go, read the command first and at least have a general understanding of the actions that are about to be performed. Especially if there is a pipe command. Double especially if there is more than one. There are a lot of destructive commands that look innocuous until you realize what they can do (e.g., **rm** , **dd** ), and you don't want to accidentally destroy things. (Ask me how I know.)" _—[Katie McLaughlin][5]_
"Early on in my Linux journey, I wasn't as aware of the importance of knowing where you are in the filesystem. I was deleting some file in what I thought was my home directory, and I entered **sudo rm -rf *** and deleted all of the boot files on my system. Now, I frequently use **pwd** to ensure that I am where I think I am before issuing such commands. Fortunately for me, I was able to boot my wounded laptop with a USB drive and recover my files." _—[Don Watkins][6]_
"Do not reset permissions on the entire file system to [777][7] because you think 'permissions are hard to understand' and you want an application to have access to something." _—[Matthew Helmke][8]_
"I was removing a package from my system, and I did not check what other packages it was dependent upon. I just let it remove whatever it wanted and ended up causing some of my important programs to crash and become unavailable." _—[Kedar Vijay Kulkarni][9]_
What mistakes have you made while learning to use Linux? Share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-rookie-mistakes
作者:[Jen Wike Huger (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://opensource.com/resources/linux
[3]: https://lintqueen.com/2017/07/02/learning-while-frustrated/
[4]: https://opensource.com/users/lintqueen
[5]: https://opensource.com/users/glasnt
[6]: https://opensource.com/users/don-watkins
[7]: https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/
[8]: https://twitter.com/matthewhelmke
[9]: https://opensource.com/users/kkulkarn

View File

@ -0,0 +1,131 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source mobile apps)
[#]: via: (https://opensource.com/article/19/4/mobile-apps)
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
5 open source mobile apps
======
You can count on these apps to meet your needs for productivity,
communication, and entertainment.
![][1]
Like most people in the world, I'm rarely further than an arm's reach from my smartphone. My Android device provides a seemingly limitless number of communication, productivity, and entertainment services thanks to the open source mobile apps I've installed from Google Play and F-Droid.
Of the many open source apps on my phone, the following five are the ones I consistently turn to whether I want to listen to music; connect with friends, family, and colleagues; or get work done on the go.
### MPDroid
_An Android controller for the Music Player Daemon (MPD)_
![MPDroid][2]
MPD is a great way to get music from little music server computers out to the big black stereo boxes. It talks straight to ALSA and therefore to the Digital-to-Analog Converter ([DAC][3]) via the ALSA hardware interface, and it can be controlled over my network—but by what? Well, it turns out that MPDroid is a great MPD controller. It manages my music database, displays album art, handles playlists, and supports internet radio. And it's open source, so if something doesn't work…
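For the curious, a minimal `mpd.conf` for the ALSA setup described above might look something like the following sketch; the paths and device name are hypothetical, not the author's actual configuration:

```
# Minimal MPD configuration sketch (hypothetical paths)
music_directory    "/home/music"
db_file            "/home/music/.mpd/database"
audio_output {
    type    "alsa"
    name    "USB DAC"
}
```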
MPDroid is available on [Google Play][4] and [F-Droid][5].
### RadioDroid
_An Android internet radio tuner that I use standalone and with Chromecast_
_![RadioDroid][6]_
RadioDroid is to internet radio as MPDroid is to managing my music database; essentially, RadioDroid is a frontend to [Internet-Radio.com][7]. Moreover, RadioDroid can be enjoyed by plugging headphones into the Android device, by connecting the Android device directly to the stereo via the headphone jack or USB, or by using its Chromecast capability with a compatible device. It's a fine way to check the weather in Finland, listen to the Spanish top 40, or hear the latest news from down under.
RadioDroid is available on [Google Play][8] and [F-Droid][9].
### Signal
_A secure messaging client for Android, iOS, and desktop_
_![Signal][10]_
If you like WhatsApp but are bothered by its [getting-closer-every-day][11] relationship to Facebook, Signal should be your next thing. The only problem with Signal is convincing your contacts they're better off replacing WhatsApp with Signal. But other than that, it has a similar interface; great voice and video calling; great encryption; decent anonymity; and it's supported by a foundation that doesn't plan to monetize your use of the software. What's not to like?
Signal is available for [Android][12], [iOS][13], and [desktop][14].
### ConnectBot
_Android SSH client_
_![ConnectBot][15]_
Sometimes I'm far away from my computer, but I need to log into the server to do something. [ConnectBot][16] is a great solution for moving SSH sessions onto my phone.
ConnectBot is available on [Google Play][17].
### Termux
_Android terminal emulator with many familiar utilities_
_![Termux][18]_
Have you ever needed to run an **awk** script on your phone? [Termux][19] is your solution. If you need to do terminal-type stuff, and you don't want to maintain an SSH connection to a remote computer the whole time, bring the files over to your phone with ConnectBot, quit the session, do your stuff in Termux, and send the results back with ConnectBot.
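For instance, a small awk job like the following (the file name is made up) is exactly the sort of thing this workflow handles happily:

```
# Sum the second column of a data file pulled over from the server
awk '{ sum += $2 } END { print "total:", sum }' results.txt
```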
Termux is available on [Google Play][20] and [F-Droid][21].
* * *
What are your favorite open source mobile apps for work or fun? Please share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/mobile-apps
作者:[Chris Hermansen (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78
[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg (MPDroid)
[3]: https://opensource.com/article/17/4/fun-new-gadget
[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US
[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/
[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png (RadioDroid)
[7]: https://www.internet-radio.com/
[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
[10]: https://opensource.com/sites/default/files/uploads/signal.png (Signal)
[11]: https://opensource.com/article/19/3/open-messenger-client
[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms
[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8
[14]: https://signal.org/download/
[15]: https://opensource.com/sites/default/files/uploads/connectbot.png (ConnectBot)
[16]: https://connectbot.org/
[17]: https://play.google.com/store/apps/details?id=org.connectbot
[18]: https://opensource.com/sites/default/files/uploads/termux.jpg (Termux)
[19]: https://termux.com/
[20]: https://play.google.com/store/apps/details?id=com.termux
[21]: https://f-droid.org/packages/com.termux/

View File

@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (AI Ops: Let the data talk)
[#]: via: (https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all)
[#]: author: (Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena )
AI Ops: Let the data talk
======
The catalysts and ROI of AI-powered network analytics for automated operations were the focus of discussion for service providers at the recent FutureNet conference in London. Blue Planet's Marie Fiala details the conversation.
![metamorworks][1]
![Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][2]
_The catalysts and ROI of AI-powered network analytics for automated operations were the focus of discussion for service providers at the recent FutureNet conference in London. Blue Planet's Marie Fiala details the conversation._
Do we need perfect data? Or is good enough data good enough? Certainly, there is a need to find a pragmatic approach or else one could get stalled in analysis-paralysis. Is closed-loop automation the end goal? Or is human-guided open loop automation desired? If the quality of data defines the quality of the process, then for closed-loop automation of critical business processes, one needs near-perfect data. Is that achievable?
These issues were discussed and debated at the recent FutureNet conference in London, where the show focused on solving network operators' toughest challenges. Industry presenters and panelists stayed true to the themes of AI and automation, all touting the necessity of these interlinked software technologies, yet there were varied opinions on approaches. Network and service providers such as BT, Colt, Deutsche Telekom, KPN, Orange, Telecom Italia, Telefonica, Telenor, Telia, Telus, Turk Telekom, and Vodafone weighed in on the discussion.
**Catalysts for AI-powered analytics**
On one point, most service providers were in agreement: there is a need to identify a specific business use case with measurable ROI, as an initial validation point when introducing AI-powered analytics into operations.
Host operator Vodafone positioned 5G as the catalyst. With the advent of 5G technology supporting 100x connections, 10Gbps super-bandwidth, and ultra-low <10ms latency, the volume, velocity and variety of data is exploding. It's a virtuous cycle: 5G technologies generate a plethora of data, and conversely, a 5G network requires data-driven automation to function accurately and optimally (how else can virtualized network functions be managed in real-time?).
![5G as catalyst for digitalisation][3]
Another operator stated that the AI gateway for telecom is the customer experience domain, citing how agents can use analytics to better serve the customer base. For another operator, capacity planning is the killer use case: first leverage AI to understand what's going on in your network, then use predictive AI for planning so that you can make smarter investment decisions. Another point of view was that service assurance is the area where the most benefits from AI will be realized. There was even mention of AI as a business enabler, through the creation of new services such as home assistants. At the broadest level, it was noted that AI allows network operators to remain relevant in the eyes of customers.
**The human side of AI and automation**
When it comes to implementation, the significant human impact of AI and automation was not overlooked. Across the board, service providers acknowledged that a new skillset is needed in network operations centers. Network engineers have to upskill to become data scientists and DevOps developers in order to best leverage the new AI-driven software tools.
Furthermore, it is a challenge to recruit specialist AI experts, especially since web-scale providers are also vying for the same talent. On the flip side of the dire need for new skills, there is also a shortage of qualified experts in legacy technologies. Operators need automated, zero-touch management before the workforce retires!
![FutureNet panelists discuss how automated AI can be leveraged as a competitive differentiator][4]
**The ROI of AI**
In many cases, the approach to AI has been a technology-driven Field of Dreams: build it and they will come. A strategic decision was made to hire experts, build data lakes, collect data, and then the business case that yielded positive returns was discovered. In other cases, the business use case came first. But no matter what the approach, the ROI was significant.
These positive results are spurring determination for continued research to uncover ever more areas where AI can deliver tangible benefits. This is, however, no easy task: one operator highlighted that data collection takes 80% of the effort, with the remaining 20% spent on development of algorithms. For AI to really proliferate throughout all aspects of operations, that trend needs to be reversed. It needs to be relatively easy and quick to collect massive amounts of heterogeneous data, aggregate it, and correlate it. This would allow investment to be overwhelmingly applied to the development of predictive and prescriptive analytics tailored to specific use cases, and to enacting intelligent closed-loop automation. Only then will data be able to truly talk and tell us what we haven't even thought of yet.
[Discover Intelligent Automation at Blue Planet][5]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all
作者:[Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-957627892-100793278-large.jpg
[2]: https://images.idgesg.net/images/article/2019/04/marla-100793273-small.jpg
[3]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-1-100793275-large.jpg
[4]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-2-100793276-large.jpg
[5]: https://www.blueplanet.com/resources/Intelligent-Automation-Driving-Digital-Automation-for-Service-Providers.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVision&utm_medium=newsletter

View File

@ -0,0 +1,182 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Anbox Easy Way To Run Android Apps On Linux)
[#]: via: (https://www.2daygeek.com/anbox-best-android-emulator-for-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Anbox Easy Way To Run Android Apps On Linux
======
Android emulators allow us to run our favorite Android apps or games directly on a Linux system.
Many such emulators are available for Linux, and we have covered a few of them in the past.
You can review those by navigating to the following URLs.
* [How To Install Official Android Emulator (SDK) On Linux][1]
* [How To Install GenyMotion (Android Emulator) On Linux][2]
Today we are going to discuss the Anbox Android emulator.
### What Is Anbox?
Anbox stands for Android in a box. Anbox is a container-based approach to booting a full Android system on a regular GNU/Linux system.
It's a new and modern emulator among the others.
Anbox lets you run Android on your Linux system without the slowness of virtualization, because it places the core Android OS into a container using Linux namespaces (LXC), so there is no lag when accessing the installed applications.
There is no direct access to any hardware from the Android container; all hardware access goes through the anbox daemon on the host.
Each application opens in a separate window, just like other native system applications, and it shows up in the launcher.
### How To Install Anbox In Linux?
The Anbox application is available as a snap package, so make sure you have enabled snap support on your system.
The Anbox package was recently added to the Ubuntu (Cosmic) and Debian (Buster) repositories. If you are running one of these versions, you can easily install it with the official distribution package manager. Otherwise, go with the snap package installation.
Make sure the necessary kernel modules are installed on your system in order for Anbox to work. For Ubuntu based users, use the following PPA to install them.
```
$ sudo add-apt-repository ppa:morphis/anbox-support
$ sudo apt update
$ sudo apt install linux-headers-generic anbox-modules-dkms
```
After you have installed the `anbox-modules-dkms` package, you have to manually load the kernel modules, or a system reboot is required.
```
$ sudo modprobe ashmem_linux
$ sudo modprobe binder_linux
```
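To confirm that both modules are loaded, a quick sanity check (not part of the upstream instructions) is:

```
$ lsmod | grep -e ashmem_linux -e binder_linux
```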
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install anbox.
```
$ sudo apt install anbox
```
We always get packages for Arch Linux based systems from the AUR repository, so use any of the **[AUR helpers][5]** to install it. I prefer to go with the **[Yay utility][6]**.
```
$ yay -S anbox-git
```
If snap support is not enabled yet, you can **[install and configure snaps in Linux][7]** by navigating to the following article. You can ignore this if you have already installed snaps on your system.
```
$ sudo snap install --devmode --beta anbox
```
### Prerequisites For Anbox
By default, Anbox doesn't ship with the Google Play Store.
Hence, we need to manually download each application (APK) and install it using the Android Debug Bridge (ADB).
The ADB tool is readily available in most distributions' repositories, so we can easily install it.
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install ADB.
```
$ sudo apt install android-tools-adb
```
For **`Fedora`** system, use **[DNF Command][8]** to install ADB.
```
$ sudo dnf install android-tools
```
For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install ADB.
```
$ sudo pacman -S android-tools
```
For **`openSUSE Leap`** system, use **[Zypper Command][10]** to install ADB.
```
$ sudo zypper install android-tools
```
### Where To Download The Android Apps?
Since you can't use the Play Store, you have to download APK packages from trusted sites like [APKMirror][11] and then install them manually.
### How To Launch Anbox?
Anbox can be launched from the Dash. This is how the default Anbox looks.
![][13]
### How To Push The Apps Into Anbox?
As I mentioned previously, we have to install apps manually. For testing purposes, we are going to install the `YouTube` and `Firefox` apps.
First, you need to start the ADB server. To do so, run the following command.
```
$ adb devices
```
We have already downloaded the `YouTube` and `Firefox` apps, and we will install them now.
**Common Syntax:**
```
$ adb install Name-Of-Your-Application.apk
```
Installing the YouTube and Firefox apps:
```
$ adb install 'com.google.android.youtube_14.13.54-1413542800_minAPI19(x86_64)(nodpi)_apkmirror.com.apk'
Success
$ adb install 'org.mozilla.focus_9.0-330191219_minAPI21(x86)(nodpi)_apkmirror.com.apk'
Success
```
I have installed `YouTube` and `Firefox` in my Anbox. See the screenshot below.
![][14]
As mentioned at the beginning of the article, Anbox opens each app in a separate window. Here, I'm going to open Firefox and access the **[2daygeek.com][15]** website.
![][16]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/anbox-best-android-emulator-for-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-sdk-android-emulator-on-linux/
[2]: https://www.2daygeek.com/install-genymotion-android-emulator-on-ubuntu-debian-fedora-arch-linux/
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/category/aur-helper/
[6]: https://www.2daygeek.com/install-yay-yet-another-yogurt-aur-helper-on-arch-linux/
[7]: https://www.2daygeek.com/linux-snap-package-manager-ubuntu/
[8]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[11]: https://www.apkmirror.com/
[13]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-1.jpg
[14]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-2.jpg
[15]: https://www.2daygeek.com/
[16]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-3.jpg

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
[#]: via: (https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco, Google reenergize multicloud/hybrid cloud joint development
======
Cisco, VMware, HPE and others tap into new Google Cloud Anthos cloud technology
![Thinkstock][1]
Cisco and Google have expanded their joint cloud-development activities to help customers more easily build secure multicloud and hybrid applications everywhere from on-premises data centers to public clouds.
**[Check out[what hybrid cloud computing is][2] and learn [what you need to know about multi-cloud][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4]]**
The expansion centers around Google's new open-source hybrid cloud package called Anthos, which was introduced at the company's Google Next event this week. Anthos is based on and supplants the company's existing Google Cloud Service beta. Anthos will let customers run applications, unmodified, on existing on-premises hardware or in the public cloud and will be available on [Google Cloud Platform][5] (GCP) with [Google Kubernetes Engine][6] (GKE), and in data centers with [GKE On-Prem][7], the company says. Anthos will also let customers for the first time manage workloads running on third-party clouds such as AWS and Azure from the Google platform without requiring administrators and developers to learn different environments and APIs, Google said.
Essentially, Anthos offers a single managed service that promises to let customers manage and deploy workloads across clouds, without having to worry about dissimilar environments or APIs.
As part of the rollout, Google also announced a beta program called [Anthos Migrate][8] that Google says auto-migrates VMs from on-premises, or other clouds, directly into containers in GKE. "This unique migration technology lets you migrate and modernize your infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications," Google said. It gives companies the flexibility to move on-prem apps to a cloud environment at the customer's pace, Google said.
### Cisco and Google
For its part, Cisco announced support of Anthos and promised to tightly integrate it with Cisco data-center technologies, such as its HyperFlex hyperconverged package, Application Centric Infrastructure (Cisco's flagship SDN offering), SD-WAN and Stealthwatch Cloud. The integrations will enable a consistent, cloud-like experience whether on-prem or in the cloud, with automatic upgrades to the latest versions and security patches, Cisco stated.
"Google Clouds expertise in containerization and service mesh Kubernetes and Istio, respectively as well as their leadership in the developer community, combined with Ciscos enterprise-class networking, compute, storage and security products and services makes for a winning combination for our customers," [wrote][9] Kip Compton, Cisco senior vice president, Cloud Platform and Solutions Group. “The Cisco integrations for Anthos will help customers build and manage multicloud and hybrid applications across their on-premises datacenters and public clouds and let them focus on innovation and agility without compromising security or increasing complexity.”
### Google Cloud and Cisco
Eyal Manor, vice president, engineering at Google Cloud, [wrote][10] that with Cisco's support for Anthos, customers will be able to:
* Benefit from a fully-managed service, like GKE, and Cisco's hyperconverged infrastructure, networking, and security technologies.
* Operate consistently across an enterprise data center and the cloud.
* Consume cloud services from an enterprise data center.
* Modernize now on premises with the latest cloud technologies.
Cisco and Google have been working closely together since October 2017, when the companies said they were working on an open hybrid cloud platform that bridges on-premises and cloud environments. That package, [Cisco Hybrid Cloud Platform for Google Cloud][11], became generally available in September 2018. It lets customers develop enterprise-grade capabilities from Google Cloud-managed Kubernetes containers that include Cisco networking and security technology as well as service mesh monitoring from Istio.
Google says Istio's open-source, container- and microservice-optimized technology offers developers a uniform way to connect, secure, manage and monitor microservices across clouds through service-to-service level mTLS [Mutual Transport Layer Security] authentication access control. As a result, customers can easily implement new, portable services and centrally configure and manage those services.
Cisco wasn't the only vendor to announce support for Anthos. At least 30 other big Google partners, including [VMware][12], [Dell EMC][13], [HPE][14], Intel, and Lenovo, committed to delivering Anthos on their own hyperconverged infrastructure for their customers, Google stated.
Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2016/12/hybrid_cloud-100700390-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://cloud.google.com/
[6]: https://cloud.google.com/kubernetes-engine/
[7]: https://cloud.google.com/gke-on-prem/
[8]: https://cloud.google.com/contact/
[9]: https://blogs.cisco.com/news/next-phase-cisco-google-cloud
[10]: https://cloud.google.com/blog/topics/partners/google-cloud-partners-with-cisco-on-hybrid-cloud-next19?utm_medium=unpaidsocial&utm_campaign=global-googlecloud-liveevent&utm_content=event-next
[11]: https://cloud.google.com/cisco/
[12]: https://blogs.vmware.com/networkvirtualization/2019/04/vmware-and-google-showcase-hybrid-cloud-deployment.html/
[13]: https://www.dellemc.com/en-us/index.htm
[14]: https://www.hpe.com/us/en/newsroom/blog-post/2019/04/hpe-and-google-cloud-join-forces-to-accelerate-innovation-with-hybrid-cloud-solutions-optimized-for-containerized-applications.html
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,225 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Install And Configure Chrony As NTP Client?)
[#]: via: (https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Install And Configure Chrony As NTP Client?
======
The NTP server and NTP client allow us to sync the clock across the network.
We have written an article about **[NTP server and NTP client installation and configuration][1]** in the past.
If you would like to check it, navigate to the above URL.
### What Is Chrony Client?
Chrony is a replacement for the NTP client.
It can synchronize the system clock faster and with better time accuracy, and it can be particularly useful for systems which are not online all the time.
chronyd is smaller, uses less memory, and wakes up the CPU only when necessary, which is better for power saving.
It can perform well even when the network is congested for longer periods of time.
It supports hardware timestamping on Linux, which allows extremely accurate synchronization on local networks.
It offers the following two utilities.
* **`chronyc:`** Command line interface for chrony.
* **`chronyd:`** Chrony daemon service.
### How To Install And Configure Chrony In Linux?
Since the package is available in most distributions' official repositories, use the package manager to install it.
For **`Fedora`** system, use **[DNF Command][2]** to install chrony.
```
$ sudo dnf install chrony
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install chrony.
```
$ sudo apt install chrony
```
For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install chrony.
```
$ sudo pacman -S chrony
```
For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install chrony.
```
$ sudo yum install chrony
```
For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install chrony.
```
$ sudo zypper install chrony
```
In this article, we are going to use the following setup to test this.
* **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.5, OS:CentOS 7
* **`Chrony Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.3, OS:Ubuntu 18.04
Navigate to the following URL for **[NTP server installation and configuration in Linux][1]**.
I have installed and configured the NTP server on `CentOS7.2daygeek.com`, so append that server into the configuration on all the client machines, along with the other required settings.
The `chrony.conf` file is placed in different locations depending on your distribution.
For RHEL based systems, it's located at `/etc/chrony.conf`.
For Debian based systems, it's located at `/etc/chrony/chrony.conf`.
```
# vi /etc/chrony/chrony.conf
server CentOS7.2daygeek.com prefer iburst
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
logdir /var/log/chrony
maxupdateskew 100.0
makestep 1 3
cmdallow 192.168.1.0/24
```
Bounce the Chrony service once you update the configuration.
For sysvinit systems (on RHEL based systems the service is named `chronyd` instead of `chrony`):
```
# service chrony restart
# chkconfig chrony on
```
For systemd systems (again, on RHEL based systems use `chronyd` instead of `chrony`):
```
# systemctl restart chrony
# systemctl enable chrony
```
Use commands like tracking, sources, and sourcestats to check chrony synchronization details.
To check chrony tracking status.
```
# chronyc tracking
Reference ID : C0A80105 (CentOS7.2daygeek.com)
Stratum : 3
Ref time (UTC) : Thu Mar 28 05:57:27 2019
System time : 0.000002545 seconds slow of NTP time
Last offset : +0.001194361 seconds
RMS offset : 0.001194361 seconds
Frequency : 1.650 ppm fast
Residual freq : +184.101 ppm
Skew : 2.962 ppm
Root delay : 0.107966967 seconds
Root dispersion : 1.060455322 seconds
Update interval : 2.0 seconds
Leap status : Normal
```
Run the sources command to display information about the current time sources.
```
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms
```
The sourcestats command displays information about the drift rate and offset estimation process for each of the sources currently being examined by chronyd.
```
# chronyc sourcestats
210 Number of sources = 1
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
==============================================================================
CentOS7.2daygeek.com 5 3 71 -97.314 78.754 -469us 441us
```
When chronyd is configured as an NTP client or peer, the chronyc ntpdata command reports the transmit and receive timestamping modes, as well as the interleaved mode, for each NTP source.
```
# chronyc ntpdata
Remote address : 192.168.1.5 (C0A80105)
Remote port : 123
Local address : 192.168.1.3 (C0A80103)
Leap status : Normal
Version : 4
Mode : Server
Stratum : 2
Poll interval : 6 (64 seconds)
Precision : -23 (0.000000119 seconds)
Root delay : 0.108994 seconds
Root dispersion : 0.076523 seconds
Reference ID : 85F3EEF4 ()
Reference time : Thu Mar 28 06:43:35 2019
Offset : +0.000160221 seconds
Peer delay : 0.000664478 seconds
Peer dispersion : 0.000000178 seconds
Response time : 0.000243252 seconds
Jitter asymmetry: +0.00
NTP tests : 111 111 1111
Interleaved : No
Authenticated : No
TX timestamping : Kernel
RX timestamping : Kernel
Total TX : 46
Total RX : 46
Total valid RX : 46
```
Finally run the `date` command.
```
# date
Thu Mar 28 03:08:11 CDT 2019
```
To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root (this adjusts the system clock manually).
```
# chronyc makestep
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/
[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

View File

@ -0,0 +1,262 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Install And Configure NTP Server And NTP Client In Linux?)
[#]: via: (https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Install And Configure NTP Server And NTP Client In Linux?
======
You might have heard this term many times, and you might have worked with it as well.
However, in this article I will clearly explain the NTP server setup and NTP client setup.
We will see about **[Chrony NTP Client setup][1]** later.
### What Is NTP Server?
NTP stands for Network Time Protocol.
It is a networking protocol that synchronizes clocks between computer systems over the network.
In other words, it keeps the same (accurate) time on all the systems which are connected to the NTP server through an NTP or Chrony client.
NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions.
It sends and receives timestamps using the User Datagram Protocol (UDP) on port number 123, and it's a client/server application.
### What Is NTP Client?
The NTP client synchronizes its clock with the network time server.
### What Is Chrony Client?
Chrony is a replacement for the NTP client. It can synchronize the system clock faster and with better time accuracy, and it can be particularly useful for systems which are not online all the time.
### Why We Need NTP Server?
To keep all the servers in your organization in sync with an accurate time, for performing time-based jobs.
To clarify this, I will describe a scenario. Say, for example, we have two servers (Server1 and Server2). Server1 usually completes its batch jobs at 10:55, and then Server2 needs to run another job at 11:00, based on Server1's job completion report.
If the two systems have different times (one system ahead of or behind the other), then we can't perform this reliably. To achieve this, we should set up NTP. I hope this clears up your doubts about NTP.
In this article, we are going to use the following setup to test this.
* **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.8, OS:CentOS 7
* **`NTP Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.5, OS:Ubuntu 18.04
### NTP SERVER SIDE: How To Install NTP Server In Linux?
There are no separate packages for NTP server and NTP client since it's a client/server model. The NTP package is available in the distributions' official repositories, so use the distribution package manager to install it.
For **`Fedora`** system, use **[DNF Command][2]** to install ntp.
```
$ sudo dnf install ntp
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install ntp.
```
$ sudo apt install ntp
```
For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install ntp.
```
$ sudo pacman -S ntp
```
For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install ntp.
```
$ sudo yum install ntp
```
For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install ntp.
```
$ sudo zypper install ntp
```
### How To Configure The NTP Server In Linux?
Once you have installed the NTP package, make sure you uncomment the following configuration in the `/etc/ntp.conf` file on the server side.
By default, the NTP server configuration relies on `X.distribution_name.pool.ntp.org`. You can use this default configuration, or change it per your location (country specific) by visiting the <https://www.ntppool.org/zone/@> site.
For example, if you are in India then your NTP server will be `0.in.pool.ntp.org` (this zone scheme works for most countries).
```
# vi /etc/ntp.conf
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
```
We have allowed only clients in the `192.168.1.0/24` subnet to access the NTP server.
Since the firewall is enabled by default on RHEL 7 based distributions, allow the NTP server/service through it.
```
# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload
```
Bounce the service once you update the configuration.
For sysvinit systems (on Debian based systems the service is named `ntp` instead of `ntpd`):
```
# service ntpd restart
# chkconfig ntpd on
```
For systemd systems (again, on Debian based systems use `ntp` instead of `ntpd`):
```
# systemctl restart ntpd
# systemctl enable ntpd
```
### NTP CLIENT SIDE: How To Install NTP Client On Linux?
As I mentioned earlier in this article, there is no specific package for the NTP server vs. the NTP client, so install the same package on the client as well.
For **`Fedora`** system, use **[DNF Command][2]** to install ntp.
```
$ sudo dnf install ntp
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install ntp.
```
$ sudo apt install ntp
```
For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install ntp.
```
$ sudo pacman -S ntp
```
For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install ntp.
```
$ sudo yum install ntp
```
For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install ntp.
```
$ sudo zypper install ntp
```
I have installed and configured the NTP server on `CentOS7.2daygeek.com`, so append the same into all the client machines.
```
# vi /etc/ntp.conf
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server CentOS7.2daygeek.com prefer iburst
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
```
Bounce the service once you update the configuration.
For sysvinit systems (on Debian based systems the service is named `ntp` instead of `ntpd`):
```
# service ntpd restart
# chkconfig ntpd on
```
For systemd systems (again, on Debian based systems use `ntp` instead of `ntpd`):
```
# systemctl restart ntpd
# systemctl enable ntpd
```
Wait a few minutes after restarting the NTP service for the client to get synchronized time from the NTP server.
Run the following commands to verify the NTP server synchronization status on Linux.
```
# ntpq -p
Or
# ntpq -pn
remote refid st t when poll reach delay offset jitter
==============================================================================
*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432
```
Run the following command to get the current status of ntpd.
```
# ntpstat
synchronised to NTP server (192.168.1.8) at stratum 3
time correct to within 508 ms
polling server every 64 s
```
Finally run the `date` command.
```
# date
Tue Mar 26 23:17:05 CDT 2019
```
If you are observing a significant offset in the NTP output, run the following command to sync the clock manually from the NTP server. Make sure your NTP client is in an inactive state when you run this command.
```
# ntpdate -uv CentOS7.2daygeek.com
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

View File

@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Juniper opens SD-WAN service for the cloud)
[#]: via: (https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Juniper opens SD-WAN service for the cloud
======
Juniper rolls out its Contrail SD-WAN cloud offering.
![Thinkstock][1]
Juniper has taken the wraps off a cloud-based SD-WAN service it says will ease the management and bolster the security of wired and wireless-connected branch office networks.
The Contrail SD-WAN cloud offering expands on the company's existing on-premises ([SRX][2]-based) and virtual ([NFX][3]-based) SD-WAN offerings, adding greater scale (up to 10,000 spoke-attached sites) and support for more variants of passive redundant hybrid WAN links and topologies such as hub and spoke, partial mesh, and dynamic full mesh, Juniper stated.
**More about SD-WAN**
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][4]
* [How to pick an off-site data-backup method][5]
* [SD-Branch: What it is and why youll need it][6]
* [What are the options for security SD-WAN?][7]
The service brings with it Junipers Contrail Service Orchestration package, which secures, automates, and runs the service life cycle across [NFX Series][3] Network Services Platforms, [EX Series][8] Ethernet Switches, [SRX Series][2] next-generation firewalls, and [MX Series][9] 5G Universal Routing Platforms. Ultimately it lets customers manage and set up SD-WANs all from a single portal.
The package is also a service orchestrator for the [vSRX][10] Virtual Firewall and [vMX][11] Virtual Router, available in public cloud marketplaces such as Amazon Web Services (AWS) and Microsoft Azure, Juniper said. The SD-WAN offering also includes integration with cloud security provider ZScaler.
Contrail Service Orchestration offers organizations visibility across SD-WAN, as well as branch wired and now wireless infrastructure. Monitoring and intelligent analytics offer real-time insight into network operations, allowing administrators to preempt looming threats and degradations, as well as pinpoint issues for faster recovery.
The new service also includes support for Juniper's [recently acquired][12] Mist Systems wireless technology, which lets the service access and manage Mist's wireless access points, allowing customers to meld wireless and wired networks.
Juniper recently closed the agreement to buy innovative wireless-gear-maker Mist for $405 million. Mist touts itself as having developed an artificial-intelligence-based wireless platform that makes Wi-Fi more predictable, reliable, and measurable.
“With Contrail, administrators can control a growing mix of legacy and modern scale-out architectures while automating their operational workflows using software that provides smarter, easier-to-use automation, orchestration and infrastructure visibility,” wrote Juniper CTO [Bikash Koley][13] in a [blog about the SD-WAN announcement][14].
“Management complexity and policy enforcement are traditional network administrator fears, while both data and network security are growing in importance for organizations of all sizes,” Koley stated. “Cloud-delivered SD-WAN removes the complexity of software operations, arguably the most difficult part of Software Defined Networking.”
Analysts said the Juniper announcement could help the company compete in a super-competitive, rapidly evolving SD-WAN world.
“The announcement is more a me too than a particular technological breakthrough,” said Lee Doyle, principal analyst with Doyle Research. “The Mist integration is what's interesting here, and that could help them, but there are 15 to 20 other vendors that have the same technology, bigger partners, and bigger sales channels than Juniper does.”
Indeed the SD-WAN arena is a crowded one with Cisco, VMware, Silver Peak, Riverbed, Aryaka, Nokia, and Versa among the players.
The cloud-based Contrail SD-WAN offering is available as an annual or multi-year subscription.
Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all
Author: [Michael Cooney][a]
Topic selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/cloud_network_blockchain_bitcoin_storage-100745950-large.jpg
[2]: https://www.juniper.net/us/en/products-services/security/srx-series/
[3]: https://www.juniper.net/us/en/products-services/sdn/nfx-series/
[4]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[5]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[6]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[7]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[8]: https://www.juniper.net/us/en/products-services/switching/ex-series/
[9]: https://www.juniper.net/us/en/products-services/routing/mx-series/
[10]: https://www.juniper.net/us/en/products-services/security/srx-series/vsrx/
[11]: https://www.juniper.net/us/en/products-services/routing/mx-series/vmx/
[12]: https://www.networkworld.com/article/3353042/juniper-grabs-mist-for-wireless-ai-cloud-service-delivery-technology.html
[13]: https://www.networkworld.com/article/3324374/juniper-cto-talks-cloud-intent-computing-revolution-high-speed-networking-and-open-source-growth.html?nsdr=true
[14]: https://forums.juniper.net/t5/Engineering-Simplicity/Cloud-Delivered-Branch-Simplicity-Now-Surpasses-SD-WAN/ba-p/461188
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world

Some files were not shown because too many files have changed in this diff.